
Ask HN: Getting into NLP in 2018? | Hacker News
syllogism (spaCy author):
I think it's probably a bad strategy to try to be the "NLP guy" to potential employers. You'd be much better off as a software engineer on a project alongside people with ML or NLP expertise.

NLP projects fail a lot. If you line up a job as a company's first NLP person, you'll probably be setting yourself up for failure. You'll get handed an idea that can't work, you won't know enough to push back and change it into something that might, etc. After the project fails, you might get a chance to fail at a second one, but maybe not a third. This isn't a great way to move into any new field.

I think a cunning plan would be to angle to be the person who "productionises" models.
...
--
...

Basically, don't just work on having more powerful solutions. Make sure you've tried hard to have easier problems as well --- that part tends to be higher leverage.

https://news.ycombinator.com/item?id=14008752
https://news.ycombinator.com/item?id=12916498
https://algorithmia.com/blog/introduction-natural-language-processing-nlp
hn  q-n-a  discussion  tech  programming  machine-learning  nlp  strategy  career  planning  human-capital  init  advice  books  recommendations  course  unit  links  automation  project  examples  applications  multi  mooc  lectures  video  data-science  org:com  roadmap  summary  error  applicability-prereqs  ends-means  telos-atelos  cost-benefit 
2 days ago by nhaliday
Software Testing Anti-patterns | Hacker News
I haven't read this but both the article and commentary/discussion look interesting at a glance

hmm: https://news.ycombinator.com/item?id=16896390
In small companies where there is no time to "waste" on tests, my view is that 80% of the problems can be caught with 20% of the work by writing integration tests that cover large areas of the application. Writing unit tests would be ideal, but time-consuming. For a web project, that would involve testing all pages for HTTP 200 (< 1 hour bash script that will catch most major bugs), automatically testing most interfaces to see if filling data and clicking "save" works. Of course, for very important/dangerous/complex algorithms in the code, unit tests are useful, but generally, that represents a very low fraction of a web application's code.
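[ed.: a minimal sketch of that kind of smoke test, in Python rather than bash; the base address and page list are hypothetical, and it assumes the requests package:]

import requests

# Hypothetical pages to smoke-test; in practice, crawl the sitemap or
# enumerate routes from the web framework.
BASE = "http://localhost:8000"
PAGES = ["/", "/login", "/signup", "/api/health"]

def smoke_test(base, pages):
    failures = []
    for path in pages:
        resp = requests.get(base + path, timeout=10)
        if resp.status_code != 200:
            failures.append((path, resp.status_code))
    return failures

if __name__ == "__main__":
    for path, code in smoke_test(BASE, PAGES):
        print(f"FAIL {path}: HTTP {code}")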
hn  commentary  techtariat  discussion  programming  engineering  methodology  best-practices  checklists  thinking  correctness  api  interface-compatibility  jargon  list  metabuch  objektbuch  workflow  documentation  debugging  span-cover  checking  metrics  abstraction  within-without  characterization  error  move-fast-(and-break-things)  minimum-viable  efficiency  multi  poast  pareto  coarse-fine 
5 weeks ago by nhaliday
Linus's Law - Wikipedia
Linus's Law is a claim about software development, named in honor of Linus Torvalds and formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999).[1][2] The law states that "given enough eyeballs, all bugs are shallow".

--

In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4] While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5][6]
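[ed.: the sublinearity is easy to see even in a toy model where each reviewer independently catches a given bug with fixed probability p; the observed saturation at 2-4 reviewers is presumably worse than this, since real reviewers' blind spots correlate. Illustrative numbers only:]

# P(bug caught by at least one of n independent reviewers)
p = 0.4  # assumed per-reviewer catch rate
prev = 0.0
for n in range(1, 9):
    caught = 1 - (1 - p) ** n
    print(f"n={n}: P(caught)={caught:.3f}  marginal gain={caught - prev:.3f}")
    prev = caught
# marginal gain falls from 0.400 for the first reviewer to ~0.03 by the sixth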

Although detection of even deliberately inserted flaws[7][8] can be attributed to Raymond's claim, the persistence of the Heartbleed security bug in a critical piece of code for two years has been considered a refutation of Raymond's dictum.[9][10][11][12] Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain.[12] In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[11] Large-scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.

Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs: https://academic.oup.com/cybersecurity/article/3/2/81/4524054

https://hbfs.wordpress.com/2009/03/31/how-many-eyeballs-to-make-a-bug-shallow/
wiki  reference  aphorism  ideas  stylized-facts  programming  engineering  linux  worse-is-better/the-right-thing  correctness  debugging  checking  best-practices  security  error  scale  ubiquity  collaboration  oss  realness  empirical  evidence-based  multi  study  info-econ  economics  intricacy  plots  manifolds  techtariat  cracker-prog  os  systems  magnitude  quantitative-qualitative  number  threat-modeling 
5 weeks ago by nhaliday
Overcoming Bias : What’s So Bad About Concentration?
And it occurs to me to mention that when these models allow “free entry”, i.e., when the number of firms is set by the constraint that they must all expect to make non-negative profits, then such models consistently predict that too many firms enter, not too few. These models suggest that we should worry more about insufficient, not excess, concentration.
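[ed.: the canonical version of this result (Mankiw & Whinston's "business stealing" effect) drops out of free-entry Cournot with a fixed entry cost. A toy computation with made-up numbers, inverse demand P = a - Q and zero marginal cost:]

# Cournot with n symmetric firms: each produces q = a/(n+1), price P = a/(n+1).
a, F = 10.0, 1.0  # demand intercept, fixed entry cost (illustrative)

def profit(n):   # per-firm profit net of entry cost
    return (a / (n + 1)) ** 2 - F

def welfare(n):  # consumer surplus + total profit - total entry costs
    Q = n * a / (n + 1)
    return a * Q - Q ** 2 / 2 - n * F

free_entry_n = max(n for n in range(1, 100) if profit(n) >= 0)
optimal_n = max(range(1, 100), key=welfare)
print(free_entry_n, optimal_n)  # -> 9 firms enter; total surplus peaks at 4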
ratty  hanson  economics  industrial-org  contrarianism  critique  models  GT-101  game-theory  examples  market-power  rent-seeking  regulation  increase-decrease  signum  error  markets  biases  efficiency 
7 weeks ago by nhaliday
Karol Kuczmarski's Blog – A Haskell retrospective
Even in this hypothetical scenario, I posit that the value proposition of Haskell would still be a tough sell.

There is this old quote from Bjarne Stroustrup (creator of C++) where he says that programming languages divide into those everyone complains about, and those that no one uses.
The first group consists of old, established technologies that managed to accrue significant complexity debt through years and decades of evolution. All the while, they’ve been adapting to the constantly shifting perspectives on what the best industry practices are. Traces of those adaptations can still be found today, sticking out like a leftover appendix or residual tail bone — or like the built-in support for XML in Java.

Languages that “no one uses”, on the other hand, haven’t yet passed the industry threshold of sufficient maturity and stability. Their ecosystems are still cutting edge, and their future is uncertain, but they sometimes champion some really compelling paradigm shifts. As long as you can bear with things that are rough around the edges, you can take advantage of their novel ideas.

Unfortunately for Haskell, it manages to combine the worst parts of both of these worlds.

On one hand, it is a surprisingly old language, clocking in at more than two decades of fruitful research around many innovative concepts. Yet on the other hand, it bears the signs of a fresh new technology, with relatively few production-grade libraries, scarce coverage of some domains (e.g. GUI programming), and not too many stories of commercial successes.

There are many ways to do it
String theory
Errors and how to handle them
Implicit is better than explicit
Leaky modules
Namespaces are apparently a bad idea
Wild records
Purity beats practicality
techtariat  reflection  functional  haskell  programming  pls  realness  facebook  pragmatic  cost-benefit  legacy  libraries  types  intricacy  engineering  tradeoffs  frontier  homo-hetero  duplication  strings  composition-decomposition  nitty-gritty  error  error-handling  coupling-cohesion  critique  ecosystem  c(pp)  aphorism 
august 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. Still, we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = sin(x)/x.

Someone found the following result in an algebra package: ∫_0^∞ sinc(x) dx = π/2
They then found the following results:

...

So of course when they got:

∫_0^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
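[ed.: the Fourier fact fedja means: these are the Borwein integrals. Each sinc factor's transform is a box of half-width 1/k, the product's transform is their convolution, and the integral stays exactly π/2 as long as the boxes for x/3, x/5, ... haven't spread past the width-1 box for x, i.e. as long as 1/3 + 1/5 + ⋯ ≤ 1. The x/15 term is exactly where the sum first crosses 1, so the "bug" was a theorem:]

from fractions import Fraction

s = Fraction(0)
for k in range(3, 17, 2):
    s += Fraction(1, k)
    status = "<= 1: integral is exactly pi/2" if s <= 1 else "> 1: pattern breaks"
    print(f"1/3 + ... + 1/{k} = {float(s):.4f}  ({status})")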

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article, "Formally verified software in the real world", with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
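[ed.: a toy Python rendering of one point on that spectrum - the leftpad spec written as "embedded assertion" runtime contracts, which the SPARK/Dafny entries in Let's Prove Leftpad discharge statically instead:]

def leftpad(c: str, n: int, s: str) -> str:
    """Pad s on the left with the character c to total length n."""
    out = c * max(n - len(s), 0) + s
    # the three properties the verified versions prove at compile time:
    assert len(out) == max(n, len(s))            # output length is correct
    assert out.endswith(s)                       # original string is a suffix
    assert set(out[:len(out) - len(s)]) <= {c}   # everything before it is padding
    return out

print(leftpad("0", 5, "42"))  # -> 00042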

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Cleaner, more elegant, and harder to recognize | The Old New Thing
Really easy
Writing bad error-code-based code
Writing bad exception-based code

Hard
Writing good error-code-based code

Really hard
Writing good exception-based code

--

Really easy
Recognizing that error-code-based code is badly-written
Recognizing the difference between bad error-code-based code and not-bad error-code-based code.

Hard
Recognizing that error-code-based code is not badly-written

Really hard
Recognizing that exception-based code is badly-written
Recognizing that exception-based code is not badly-written
Recognizing the difference between bad exception-based code and not-bad exception-based code
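[ed.: the asymmetry is easy to feel side by side; a sketch in Python with a hypothetical config-loading function. In the error-code style, failure handling is visible (and greppable) at every call site; in the exception style the happy path reads cleanly, but every call becomes an invisible early exit:]

# error-code style: failure is an ordinary return value
def load_config_ec(path):
    try:
        with open(path) as f:
            return f.read().splitlines(), None
    except OSError as e:
        return None, str(e)

lines, err = load_config_ec("app.cfg")
if err is not None:         # forgetting this check is at least visible in review
    lines = []

# exception style: clean happy path, implicit failure paths
def load_config_ex(path):
    with open(path) as f:
        return f.read().splitlines()

try:
    lines = load_config_ex("app.cfg")
except OSError:             # easy to omit, misplace, or catch too broadly
    lines = []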

https://ra3s.com/wordpress/dysfunctional-programming/2009/07/15/return-code-vs-exception-handling/
https://nedbatchelder.com/blog/200501/more_exception_handling_debate.html
techtariat  org:com  microsoft  working-stiff  pragmatic  carmack  error  error-handling  programming  rhetoric  debate  critique  pls  search  structure  cost-benefit  comparison  summary  intricacy  certificates-recognition  commentary  multi  contrarianism  correctness  quality  code-dive  cracker-prog 
july 2019 by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasizes safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects  latency-throughput 
june 2019 by nhaliday
classification - ImageNet: what is top-1 and top-5 error rate? - Cross Validated
Now, in the case of top-1 score, you check if the top class (the one having the highest probability) is the same as the target label.

In the case of top-5 score, you check if the target label is one of your top 5 predictions (the 5 ones with the highest probabilities).
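[ed.: concretely, a numpy sketch (scores = raw model outputs, shape (n_samples, n_classes)):]

import numpy as np

def top_k_accuracy(scores, labels, k):
    # indices of the k highest-scoring classes for each sample
    top_k = np.argsort(scores, axis=1)[:, -k:]
    return np.mean([label in row for row, label in zip(top_k, labels)])

scores = np.array([[0.1, 0.6, 0.3],   # highest-scoring class: 1
                   [0.5, 0.2, 0.3]])  # highest-scoring class: 0
labels = np.array([2, 0])
print(top_k_accuracy(scores, labels, 1))  # 0.5: only the second sample's label is ranked first
print(top_k_accuracy(scores, labels, 2))  # 1.0: class 2 makes the first sample's top two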
nibble  q-n-a  overflow  machine-learning  deep-learning  metrics  comparison  ranking  top-n  classification  computer-vision  benchmarks  dataset  accuracy  error  jargon 
june 2019 by nhaliday
[1803.00085] Chinese Text in the Wild
We introduce Chinese Text in the Wild, a very large dataset of Chinese text in street view images.

...

We give baseline results using several state-of-the-art networks, including AlexNet, OverFeat, Google Inception and ResNet for character recognition, and YOLOv2 for character detection in images. Overall Google Inception has the best performance on recognition with 80.5% top-1 accuracy, while YOLOv2 achieves an mAP of 71.0% on detection. Dataset, source code and trained models will all be publicly available on the website.
nibble  pdf  papers  preprint  machine-learning  deep-learning  deepgoog  state-of-art  china  asia  writing  language  dataset  error  accuracy  computer-vision  pic  ocr  org:mat  benchmarks  questions 
may 2019 by nhaliday
Basic Error Rates
This page describes human error rates in a variety of contexts.

Most of the error rates are for mechanical errors. A good general figure for mechanical error rates appears to be about 0.5%.

Of course the denominator differs across studies. However only fairly simple actions are used in the denominator.

The Klemmer and Snyder study shows that much lower error rates are possible--in this case for people whose job consisted almost entirely of data entry.

The error rate for more complex logic errors is about 5%, based primarily on data on other pages, especially the program development page.
org:junk  list  links  objektbuch  data  database  error  accuracy  human-ml  machine-learning  ai  pro-rata  metrics  automation  benchmarks  marginal  nlp  language  density  writing  dataviz  meta:reading  speedometer 
may 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed me to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.
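[ed.: for a concrete feel, the classic round-trip property (it's the Hypothesis quickstart example): you state an invariant over all inputs, and the library generates and shrinks counterexamples, instead of you hand-encoding the cases you already thought of:]

from hypothesis import given, strategies as st

def encode(s):  # run-length encoding
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def decode(enc):
    return "".join(ch * n for ch, n in enc)

@given(st.text())
def test_roundtrip(s):
    assert decode(encode(s)) == s

test_roundtrip()  # runs hundreds of generated cases; failures get minimized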

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes valued more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
Why is Software Engineering so difficult? - James Miller
basic message: No silver bullet!

most interesting nuggets:
Scale and Complexity
- Windows 7 > 50 million LOC
Expect a staggering number of bugs.

Bugs?
- Well-written C and C++ code contains some 5 to 10 errors per 100 LOC after a clean compile, but before inspection and testing.
- At a 5% rate any 50 MLOC program will start off with some 2.5 million bugs.

Bug removal
- Testing typically exercises only half the code.

Better bug removal?
- There are better ways to do testing that do produce fantastic programs.
- Are we sure about this fact?
* No, it's only an opinion!
* In general Software Engineering has ....
NO FACTS!

So why not do this?
- The costs are unbelievable.
- It’s not unusual for the qualification process to produce a half page of documentation for each line of code.
pdf  slides  engineering  nitty-gritty  programming  best-practices  roots  comparison  cost-benefit  software  systematic-ad-hoc  structure  error  frontier  debugging  checking  formal-methods  context  detail-architecture  intricacy  big-picture  system-design  correctness  scale  scaling-tech  shipping  money  data  stylized-facts  street-fighting  objektbuch  pro-rata  estimate  pessimism  degrees-of-freedom  volo-avolo  no-go  things  thinking  summary  quality  density  methodology 
may 2019 by nhaliday
maintenance - Why do dynamic languages make it more difficult to maintain large codebases? - Software Engineering Stack Exchange
Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier, and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier.
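[ed.: a tiny example of the kind of facility he means, in Python: optional annotations plus a static checker (mypy, a separate tool) catch cross-module mistakes with zero test coverage, which is exactly the "programming in the larger" support that dynamic-only codebases forgo:]

from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

def parse_age(raw: str) -> int:
    return int(raw)

def greet(user: User) -> str:
    return f"hi {user.name}, age {user.age}"

# plain Python would only fail at run time; a static checker rejects both
# calls below before anything runs (messages paraphrased):
# greet("alice")   # incompatible type "str"; expected "User"
# parse_age(7)     # incompatible type "int"; expected "str"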
programming  worrydream  plt  hmm  comparison  pls  carmack  techtariat  types  engineering  productivity  pro-rata  input-output  correlation  best-practices  composition-decomposition  error  causation  confounding  devtools  jvm  scala  open-closed  cost-benefit  static-dynamic  design  system-design 
may 2019 by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.

...

And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), which outputs a 0 only when both inputs are 1. If the correct output is 0, both inputs should be 1. The input that isn't is in error, an error that is, itself, the output of a NAND gate where at least one input is 0 when it should be 1. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.

How To Debug Any Program: https://www.blinddata.com/blog/how-to-debug-any-program-9
May 8th 2019 by Saketh Are

Start by Questioning Everything

...

When a program is behaving unexpectedly, our attention tends to be drawn first to the most complex portions of the code. However, mistakes can come in all forms. I've personally been guilty of rushing to debug sophisticated portions of my code when the real bug was that I forgot to read in the input file. In the following section, we'll discuss how to reliably focus our attention on the portions of the program that need correction.

Then Question as Little as Possible

Suppose that we have a program and some input on which its behavior doesn’t match our expectations. The goal of debugging is to narrow our focus to as small a section of the program as possible. Once our area of interest is small enough, the value of the incorrect output that is being produced will typically tell us exactly what the bug is.

In order to catch the point at which our program diverges from expected behavior, we must inspect the intermediate state of the program. Suppose that we select some point during execution of the program and print out all values in memory. We can inspect the results manually and decide whether they match our expectations. If they don't, we know for a fact that we can focus on the first half of the program. It either contains a bug, or our expectations of what it should produce were misguided. If the intermediate state does match our expectations, we can focus on the second half of the program. It either contains a bug, or our understanding of what input it expects was incorrect.
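[ed.: the same halving idea, mechanized; a sketch that binary-searches a pipeline of state -> state steps for the first one whose output violates expectations, assuming a single fault (git bisect is this exact loop run over commit history):]

def first_bad_step(steps, initial, state_ok):
    def run(n):  # state after the first n steps
        state = initial
        for f in steps[:n]:
            state = f(state)
        return state
    lo, hi = 0, len(steps)  # invariant: run(lo) ok, run(hi) bad
    assert state_ok(run(lo)) and not state_ok(run(hi))
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if state_ok(run(mid)):
            lo = mid
        else:
            hi = mid
    return hi - 1  # index of the step that first corrupts the state

steps = [lambda xs: [x + 1 for x in xs],
         lambda xs: xs[:-1],             # the bug: silently drops an element
         lambda xs: [x * 2 for x in xs]]
print(first_bad_step(steps, [1, 2, 3], lambda xs: len(xs) == 3))  # -> 1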

Question Things Efficiently

For practical purposes, inspecting intermediate state usually doesn't involve a complete memory dump. We'll typically print a small number of variables and check whether they have the properties we expect of them. Verifying the behavior of a section of code involves:

1. Before it runs, inspecting all values in memory that may influence its behavior.
2. Reasoning about the expected behavior of the code.
3. After it runs, inspecting all values in memory that may be modified by the code.

Reasoning about expected behavior is typically the easiest step to perform even in the case of highly complex programs. Practically speaking, it's time-consuming and mentally strenuous to write debug output into your program and to read and decipher the resulting values. It is therefore advantageous to structure your code into functions and sections that pass a relatively small amount of information between themselves, minimizing the number of values you need to inspect.

...

Finding the Right Question to Ask

We’ve assumed so far that we have available a test case on which our program behaves unexpectedly. Sometimes, getting to that point can be half the battle. There are a few different approaches to finding a test case on which our program fails. It is reasonable to attempt them in the following order:

1. Verify correctness on the sample inputs.
2. Test additional small cases generated by hand.
3. Adversarially construct corner cases by hand.
4. Re-read the problem to verify understanding of input constraints.
5. Design large cases by hand and write a program to construct them.
6. Write a generator to construct large random cases and a brute force oracle to verify outputs.
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness  multi  oly  oly-programming  metabuch  neurons  problem-solving  wire-guided  marginal  strategy  tactics  methodology  simplification-normalization 
may 2019 by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.

edit: cf https://pinboard.in/u:nhaliday/b:0a6eb68166e6]
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure  data  objektbuch  speculation  accuracy  density  correctness  estimate  street-fighting  multi  quality  stylized-facts  methodology 
april 2019 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Dying and Rising Gods - Dictionary definition of Dying and Rising Gods | Encyclopedia.com: FREE online dictionary
https://en.wikipedia.org/wiki/Dying-and-rising_deity
While the concept of a "dying-and-rising god" has a longer history, it was significantly advocated by Frazer's Golden Bough (1906–1914). At first received very favourably, the idea was attacked by Roland de Vaux in 1933, and was the subject of controversial debate over the following decades.[31] One of the leading scholars in the deconstruction of Frazer's "dying-and-rising god" category was Jonathan Z. Smith, whose 1969 dissertation discusses Frazer's Golden Bough,[32] and who in Mircea Eliade's 1987 Encyclopedia of religion wrote the "Dying and rising gods" entry, where he dismisses the category as "largely a misnomer based on imaginative reconstructions and exceedingly late or highly ambiguous texts", suggesting a more detailed categorisation into "dying gods" and "disappearing gods", arguing that before Christianity, the two categories were distinct and gods who "died" did not return, and those who returned never truly "died".[33][34] Smith gave a more detailed account of his views specifically on the question of parallels to Christianity in Drudgery Divine (1990).[35] Smith's 1987 article was widely received, and during the 1990s, scholarly consensus seemed to shift towards his rejection of the concept as oversimplified, although it continued to be invoked by scholars writing about Ancient Near Eastern mythology.[36] As of 2009, the Encyclopedia of Psychology and Religion summarizes the current scholarly consensus as ambiguous, with some scholars rejecting Frazer's "broad universalist category", preferring to emphasize the differences between the various traditions, while others continue to view the category as applicable.[9] Gerald O'Collins states that surface-level application of analogous symbolism is a case of parallelomania which exaggerates the importance of trifling resemblances, long abandoned by mainstream scholars.[37]

Beginning with an overview of the Athenian ritual of growing and withering herb gardens at the Adonis festival, in his book The Gardens of Adonis Marcel Detienne suggests that rather than being a stand-in for crops in general (and therefore the cycle of death and rebirth), these herbs (and Adonis) were part of a complex of associations in the Greek mind that centered on spices.[38] These associations included seduction, trickery, gourmandizing, and the anxieties of childbirth.[39] From his point of view, Adonis's death is only one datum among the many that must be used to analyze the festival, the myth, and the god.[39][40]
wiki  reference  myth  ritual  religion  christianity  theos  conquest-empire  intricacy  contrarianism  error  gavisti  culture  europe  mediterranean  history  iron-age  the-classics  MENA  leadership  government  gender  sex  cycles  death  mystic  multi  sexuality  food  correlation  paganism 
june 2018 by nhaliday
Commentary: Predictions and the brain: how musical sounds become rewarding
https://twitter.com/AOEUPL_PHE/status/1004807377076604928
https://archive.is/FgNHG
did i just learn something big?

Prerecorded music has ABSOLUTELY NO SURVIVAL reward. Zero. It does not help with procreation (well, unless you're the one making the music, then you get endless sex) and it does not help with individual survival.
As such, one must seriously self-test (n=1) whether prerecorded music actually holds you back.
If you're reading this and you try no music for 2 weeks and fail, hit me up. I have some mind blowing stuff to show you in how you can control others with music.
study  psychology  cog-psych  yvain  ssc  models  speculation  music  art  aesthetics  evolution  evopsych  accuracy  meta:prediction  neuro  neuro-nitgrit  neurons  error  roots  intricacy  hmm  wire-guided  machiavelli  dark-arts  predictive-processing  reinforcement  multi  science-anxiety 
june 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo Naledi, a small-brained homonin identified from recently discovered fossils in South Africa, appears to have hung around way later that you’d expect (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people that suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticizes Nicholas Wade for saying that different races have different dispositions. Wade's book wasn't very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby's nose with a cloth: Chinese and Navajo babies quietly breathe through their mouths, European and African babies fuss and fight.

Then he attacks Watson for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it's a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle Ages – white-collar job specialization and a high degree of endogamy – were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after his having pointed out that race as colloquially used is pretty reasonable, there's no reason pops can't be different, people that said otherwise (like Lewontin, Gould, Montagu, etc.) were lying, Aryans conquered Europe and India, while we're tied to the train tracks with scary genetic results coming straight at us. I don't care: he's being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he's supposed to say, or just stop caring. Maybe he already has… I'm pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) – but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it was discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don't know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has traditionally been considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn't arise in that region (like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with homo naledi, they probably wouldn't have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

Meanwhile, work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations, a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers, farmers and herders from the Middle East settled North Africa, Egypt and northeast Africa, while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions back further in time, such as the origins of modern humans – we thought we knew, and now we know we don't. But that's progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes oppose him even looking at the question of ancestry (African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossina couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT (Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea 
march 2018 by nhaliday
Mistakes happen for a reason | Bloody shovel
Which leads me to this article by Scott Alexander. He elaborates on an idea by one of his ingroup about there being two ways of looking at things, “mistake theory” and “conflict theory”. Mistake theory claims that political opposition comes from a different understanding of issues: if people had the same amount of knowledge and proper theories to explain it, they would necessarily agree. Conflict theory states that people disagree because their interests conflict, the conflict is zero-sum so there’s no reason to agree, the only question is how to resolve the conflict.

I was speechless. I am quite used to Mr. Alexander and his crowd missing the point on purpose, but this was just too much. Mistake theory and Conflict theory are not parallel things. “Mistake theory” is just the natural, tribalist way of thinking. It assumes an ingroup, it assumes the ingroup has a codified way of thinking about things, and it interprets all disagreement as a lack of understanding of the obviously objective and universal truths of the ingroup religion. There is a reason why liberals call “ignorant” all those who disagree with them. Christians used to be rather more charitable on this front and asked for “faith”, which they also assumed was difficult to achieve.

Conflict theory is one of the great achievements of the human intellect; it is an objective, useful and predictively powerful way of analyzing human disagreement. There is a reason why Marxist historiography revolutionized the world and is still with us: Marx made a strong point that human history was based on conflict. Which is true. It is tautologically true. If you understand evolution it stands to reason that all social life is about conflict. The fight for genetical survival is ultimately zero-sum, and even in those short periods of abundance when it is not, the fight for mating supremacy is very much zero-sum, and we are all very much aware of that today. Marx focused on class struggle for political reasons, which is wrong, but his focus on conflict was a gust of fresh air for those who enjoy objective analysis.

Incidentally the early Chinese thinkers understood conflict theory very well, which is why Chinese civilization is still around, the oldest on earth. A proper understanding of conflict does not come without its drawbacks, though. Mistakes happen for a reason. Pat Buchanan actually does understand why USG opened the doors to trade with China. Yes, Whig history was part of it, but that’s just the rhetoric used to justify the idea. The actual motivation to trade with China was making money short term. Lots of money. Many in the Western elite have made huge amounts of money with the China trade. Money that conveniently was funneled into whichever political channels it had to in order to keep the China trade going. Even without Whig history, even without the clueless idea that China would never become a political great power, the short-term profits to be made were big enough to capture the political process in the West and push for it. Countries don’t have interests: people do.

That is true, and should be obvious, but there are dangers to the realization. There’s a reason why people dislike cynics. People don’t want to know the truth. It’s hard to coordinate around the truth, especially when the truth is that humans are selfish assholes constantly in conflict. Mistakes happen because people find it convenient to hide the truth; and “mistake theory” happens because policing the ingroup patterns of thought, limiting the capability of people of knowing too much, is politically useful. The early Chinese kingdoms developed a very sophisticated way of analyzing objective reality. The early kingdoms were also full of constant warfare, rebellions and elite betrayals; all of which went on until the introduction in the 13th century of a state ideology (neoconfucianism) based on complete humbug and a massively unrealistic theory of human nature. Roman literature is refreshingly objective and to the point. Romans were also murderous bastards who assassinated each other all the time. It took the massive pile of nonsense which we call the Christian canon to get Europeans to cooperate on a semi-stable basis.

But guess what? Conflict theory also exists for a reason. And the reason is to extricate oneself from the ingroup, to see things how they actually are, and to undermine the state religion from the outside. Marxists came up with conflict theory because they knew they had little to expect from fighting from within the system. Those low-status workers who still regarded mainstream society as their ingroup they very sharply called “alienated”, and by using conflict theory they showed what the ingroup ideology was actually made of. Pat Buchanan and his cuck friends should take the message and stop assuming that the elite is playing for the same team as they are. The global elite, of America and its vassals, is not mistaken. They are playing for themselves: to raise their status above yours, to drop their potential rivals into eternal misery and to rule forever over them. China, Syria, and everything else, is about that.

https://bloodyshovel.wordpress.com/2018/03/09/mistakes-happen-for-a-reason/#comment-18834
Heh heh. It’s a lost art. The Greeks and Romans were realists about it (except Cicero, that idealistic bastard). They knew language, being the birthright of man, was just another way (and a damn powerful one) to gain status, make war, and steal each other’s women. Better be good at wielding it.
gnon  right-wing  commentary  china  asia  current-events  politics  ideology  coalitions  government  statesmen  leviathan  law  axioms  authoritarianism  usa  democracy  antidemos  trade  nationalism-globalism  elite  error  whiggish-hegelian  left-wing  paleocon  history  mostly-modern  world-war  impetus  incentives  interests  self-interest  signaling  homo-hetero  hypocrisy  meta:rhetoric  debate  language  universalism-particularism  tribalism  us-them  zero-positive-sum  absolute-relative  class  class-warfare  communism  polanyi-marx  westminster  realness  cynicism-idealism  truth  coordination  cooperate-defect  medieval  confucian  iron-age  mediterranean  the-classics  literature  canon  europe  the-great-west-whale  occident  sinosphere  orient  nl-and-so-can-you  world  conquest-empire  malthus  status  egalitarianism-hierarchy  evolution  conceptual-vocab  christianity  society  anthropology  metabuch  hidden-motives  X-not-about-Y  dark-arts  illusion  martial  war  cohesion  military  correlation  causation  roots  japan  comparison  long-short-run  mul 
march 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
Reid Hofmann and Peter Thiel and technology and politics - Marginal REVOLUTION
econotariat  marginal-rev  links  video  interview  thiel  barons  randy-ayndy  cryptocurrency  ai  communism  individualism-collectivism  civil-liberty  sv  tech  automation  speedometer  stagnation  technology  politics  current-events  trends  democracy  usa  malthus  zero-positive-sum  china  asia  stanford  news  org:local  polarization  economics  cycles  growth-econ  zeitgeist  housing  urban-rural  california  the-west  decentralized  privacy  anonymity  inequality  multi  winner-take-all  realpolitik  machiavelli  error  order-disorder  leviathan  dirty-hands  the-world-is-just-atoms  heavy-industry  embodied  engineering  reflection  trump  2016-election  pessimism  definite-planning  optimism  left-wing  right-wing  steel-man  managerial-state  orwellian  vampire-squid  contrarianism  age-generation  econ-productivity  compensation  time-series  feudal  gnosis-logos 
february 2018 by nhaliday
Self-Serving Bias | Slate Star Codex
Since reading Tabarrok’s post, I’ve been trying to think of more examples of this sort of thing, especially in medicine. There are way too many discrepancies in approved medications between countries to discuss every one of them, but did you know melatonin is banned in most of Europe? (Europeans: did you know melatonin is sold like candy in the United States?) Did you know most European countries have no such thing as “medical school”, but just have college students major in medicine, and then become doctors once they graduate from college? (Europeans: did you know Americans have to major in some random subject in college, and then go to a separate place called “medical school” for four years to even start learning medicine?) Did you know that in Puerto Rico, you can just walk into a pharmacy and get any non-scheduled drug you want without a doctor’s prescription? (source: my father; I have never heard anyone else talk about this, and nobody else even seems to think it is interesting enough to be worth noting).

...

And then there’s the discussion from the recent discussion of Madness and Civilization about how 18th century doctors thought hot drinks would destroy masculinity and ruin society. Nothing that’s happened since has really disproved this – indeed, a graph of hot drink consumption, decline of masculinity, and ruinedness of society would probably show a pretty high correlation – it’s just somehow gotten tossed in the bin marked “ridiculous” instead of the bin marked “things we have to worry about”.
🤔🤔
ratty  yvain  ssc  commentary  econotariat  marginal-rev  economics  labor  regulation  civil-liberty  randy-ayndy  markets  usa  the-west  comparison  europe  EU  cost-disease  medicine  education  higher-ed  error  gender  rot  lol  aphorism  zeitgeist  rationality  biases  flux-stasis 
january 2018 by nhaliday
Behaving Discretely: Heuristic Thinking in the Emergency Department
I find compelling evidence of heuristic thinking in this setting: patients arriving in the emergency department just after their 40th birthday are roughly 10% more likely to be tested for and 20% more likely to be diagnosed with ischemic heart disease (IHD) than patients arriving just before this date, despite the fact that the incidence of heart disease increases smoothly with age.

Figure 1: Proportion of ED patients tested for heart attack
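The identification strategy is essentially a regression discontinuity in age. A minimal sketch of the threshold comparison, with hypothetical data and column names (the paper's microdata isn't reproduced here):

import pandas as pd

def discontinuity_at_40(df: pd.DataFrame, window_days: int = 60) -> float:
    """Jump in cardiac-testing rates at the 40th-birthday threshold.

    Expects an `age_days` column (age in days at ED arrival) and a binary
    `tested_ihd` column -- both hypothetical names, not from the paper.
    """
    cutoff = 40 * 365
    just_below = df[(df.age_days >= cutoff - window_days) & (df.age_days < cutoff)]
    just_above = df[(df.age_days >= cutoff) & (df.age_days < cutoff + window_days)]
    # If incidence rises smoothly with age, this difference should be ~0;
    # left-digit ("patient is in their 40s") heuristics show up as a jump.
    return just_above.tested_ihd.mean() - just_below.tested_ihd.mean()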
pdf  study  economics  behavioral-econ  field-study  biases  heuristic  error  healthcare  medicine  meta:medicine  age-generation  aging  cardio  bounded-cognition  shift  trivia  cocktail  pro-rata 
december 2017 by nhaliday
Biopolitics | West Hunter
I have said before that no currently popular ideology acknowledges well-established results of behavioral genetics, quantitative genetics, or psychometrics. Or evolutionary psychology.

What if some ideology or political tradition did? What could they do? What problems could they solve, what capabilities would they have?

Various past societies knew a few things along these lines. They knew that there were significant physical and behavioral differences between the sexes, which is forbidden knowledge in modern academia. Some knew that close inbreeding had negative consequences, which knowledge is on its way to the forbidden zone as I speak. Some cultures with wide enough geographical experience had realistic notions of average cognitive differences between populations. Some people had a rough idea about regression to the mean [in dynasties], and the Ottomans came up with a highly unpleasant solution – the law of fratricide. The Romans, during the Principate, dealt with the same problem through imperial adoption. The Chinese exam system is in part aimed at the same problem.

...

At least some past societies avoided the social patterns leading to the nasty dysgenic trends we are experiencing today, but for the most part that is due to the anthropic principle: if they’d done something else you wouldn’t be reading this. Also to between-group competition: if you fuck yourself up when others don’t, you may well be replaced. Which is still the case.

If you were designing an ideology from scratch you could make use of all of these facts – not that thinking about genetics and selection hands you the solution to every problem, but you’d have more strings to your bow. And, off the top of your head, you’d understand certain trends that are behind the mountains of Estcarp for our current ruling classes: invisible and unthinkable, That Which Must Not Be Named.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96613
“The closest… is the sort of libertarianism promulgated by Charles Murray”
Not very close.
A government that was fully aware of the implications and possibilities of human genetics, one that had the usual kind of state goals [like persistence and increased power], would not necessarily be particularly libertarian.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96797
And giving tax breaks to college-educated liberals to have babies wouldn’t appeal much to Trump voters, methinks.

It might be worth making a reasonably comprehensive list of the facts and preferences that a good liberal is supposed to embrace and seem to believe. You would have to be fairly quick about it, before it changes. Then you could evaluate the social impact of having more of them.

Rise and Fall: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/
Every society selects for something: generally it looks as if the direction of selection pressure is more or less an accident. Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this. I mean, they would have known how, if they’d wanted to, just as they knew how to select for destriers, coursers, and palfreys. It was still possible to know such things in the Middle Ages, because Harvard did not yet exist.

A rising empire needs quality human capital, which implies that, at minimum, the budding imperial society must not have been strongly dysgenic. At least not in the beginning. But winning changes many things, possibly including selective pressures. Imagine an empire with substantial urbanization, one in which talented guys routinely end up living in cities – cities that were demographic sinks. That might change things. Or try to imagine an empire in which survival challenges are greatly reduced, at least for elites, so that people have nothing to keep their minds off their minds and end up worshiping Magna Mater. Imagine an empire that conquers a rival with interesting local pathogens and brings some of them home. Or one that uses up a lot of its manpower conquering less-talented subjects and importing masses of those losers into the imperial heartland.

If any of those scenarios happened, they might eventually result in imperial decline – decline due to decreased biological capital.

Right now this is speculation. If we knew enough about the GWAS hits for intelligence, and had enough ancient DNA, we might be able to observe that rise and fall, just as we see dysgenic trends in contemporary populations. But that won’t happen for a long time. Say, a year.

hmm: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100350
“Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this.”

Maybe the Chinese imperial examination could effectively have been a selection for intelligence.
--
Nope. I’ve modelled it: the fraction of winners is far too small to have much effect, while there were likely fitness costs from the arduous preparation. Moreover, there’s a recent paper [Detecting polygenic adaptation in admixture graphs] that looks for indications of when selection for IQ hit northeast Asia: quite a while ago. Obvious, though, since Japan has similar scores without ever having had that kind of examination system.
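
The "far too small to have much effect" claim is just the breeder's equation with a minuscule selected fraction. A back-of-envelope sketch, with every number assumed purely for illustration (passing fraction, passers' IQ advantage, their fitness bonus, heritability):

frac_pass = 1e-4      # fraction of men who pass the exam (assumed)
passer_mean = 2.0     # passers' mean IQ in SD units (assumed)
fitness_bonus = 0.5   # passers leave 50% more offspring (assumed)
h2 = 0.5              # narrow-sense heritability of IQ (assumed)

# Selection differential S: fitness-weighted parental mean minus the
# population mean (non-passers' mean is ~0 SD at this tiny fraction).
mean_fitness = frac_pass * (1 + fitness_bonus) + (1 - frac_pass)
S = frac_pass * (1 + fitness_bonus) * passer_mean / mean_fitness
R = h2 * S  # breeder's equation: per-generation response
print(f"S = {S:.2e} SD, R = {R:.2e} SD/generation")
# R ~ 1.5e-4 SD/generation: roughly 450 generations -- over ten millennia --
# to shift the population mean by a single IQ point (1/15 SD).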

decline of British Empire and utility of different components: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100390
Once upon a time, India was a money maker for the British, mainly because they appropriated Bengali tax revenue, rather than trade. The rest of the Empire was not worth much: it didn’t materially boost British per-capita income or military potential. Silesia was worth more to Germany, conferred more war-making power, than Africa was to Britain.
--
If you get even a little local opposition, a colony won’t pay for itself. I seem to remember that there was some, in Palestine.
--
Angels from on high paid for the Boer War.

You know, someone in the 50’s asked for the numbers – how much various colonies cost and how much they paid.

Turned out that no one had ever asked. The Colonial Office had no idea.
west-hunter  scitariat  discussion  ideas  politics  polisci  sociology  anthropology  cultural-dynamics  social-structure  social-science  evopsych  agri-mindset  pop-diff  kinship  regression-to-mean  anthropic  selection  group-selection  impact  gender  gender-diff  conquest-empire  MENA  history  iron-age  mediterranean  the-classics  china  asia  sinosphere  technocracy  scifi-fantasy  aphorism  alt-inst  recruiting  applications  medieval  early-modern  institutions  broad-econ  biodet  behavioral-gen  gnon  civilization  tradition  leviathan  elite  competition  cocktail  🌞  insight  sapiens  arbitrage  paying-rent  realness  kumbaya-kult  war  slippery-slope  unintended-consequences  deep-materialism  inequality  malthus  dysgenics  multi  murray  poast  speculation  randy-ayndy  authoritarianism  time-preference  patience  long-short-run  leadership  coalitions  ideology  rant  westminster  truth  flux-stasis  new-religion  identity-politics  left-wing  counter-revolution  fertility  signaling  status  darwinian  orwellian  ability-competence  organizing 
october 2017 by nhaliday
Definite optimism as human capital | Dan Wang
I’ve come to the view that creativity and innovative capacity aren’t a fixed stock, coiled and waiting to be released by policy. Now, I know that a country will not do well if it has poor infrastructure, interest rate management, tax and regulation levels, and a whole host of other issues. But getting them right isn’t sufficient to promote innovation; past a certain margin, when they’re all at rational levels, we ought to focus on promoting creativity and drive as a means to propel growth.

...

When I say “positive” vision, I don’t mean that people must see the future as a cheerful one. Instead, I’m saying that people ought to have a vision at all: A clear sense of how the technological future will be different from today. To have a positive vision, people must first expand their imaginations. And I submit that an interest in science fiction, the material world, and proximity to industry all help to refine that optimism. I mean to promote imagination by direct injection.

...

If a state has lost most of its jobs for electrical engineers, or nuclear engineers, or mechanical engineers, then fewer young people in that state will study those practices, and technological development in related fields slows down a little further. When I bring up these thoughts on resisting industrial decline to economists, I’m unsatisfied with their responses. They tend to respond by tautology (“By definition, outsourcing improves on the status quo”) or arithmetic (see: gains from comparative advantage, Ricardo). These kinds of logical exercises are not enough. I would like for more economists to consider a human capital perspective for preserving manufacturing expertise (to some degree).

I wonder if the so-called developed countries should be careful of their own premature deindustrialization. The US industrial base has faltered, but there is still so much left to build. Until we’ve perfected asteroid mining and super-skyscrapers and fusion rockets and Jupiter colonies and matter compilers, we can’t be satisfied with innovation confined mostly to the digital world.

Those who don’t mind the decline of manufacturing employment like to say that people have moved on to higher-value work. But I’m not sure that this is usually the case. Even if there’s an endlessly capacious service sector to absorb job losses in manufacturing, it’s often the case that these new jobs feature lower productivity growth and involve greater rent-seeking. Not everyone is becoming a hedge fund manager or a machine learning engineer. According to BLS, the bulk of service jobs are in 1. government (22 million), 2. professional services (19m), 3. healthcare (18m), 4. retail (15m), and 5. leisure and hospitality (15m). In addition to often being low-paying but still competitive, a great many service-sector jobs tend to stress capacity for emotional labor over capacity for manual labor. And it’s the latter that tends to be more present in fields involving technological upgrading.

...

Here’s a bit more skepticism of service jobs. In an excellent essay on declining productivity growth, Adair Turner makes the point that many service jobs are essentially zero-sum. I’d like to emphasize and elaborate on that idea here.

...

Call me a romantic, but I’d like everyone to think more about industrial lubricants, gas turbines, thorium reactors, wire production, ball bearings, underwater cables, and all the things that power our material world. I abide by a strict rule never to post or tweet about current political stuff; instead I try to draw more attention to the world of materials. And I’d like to remind people that there are many things more edifying than following White House scandals.

...

First, we can all try to engage more actively with the material world, not merely the digital or natural world. Go ahead and pick an industrial phenomenon and learn more about it. Learn more about the history of aviation, and what it took to break the sound barrier; gaze at the container ships as they sail into port, and keep in mind that they carry 90 percent of the goods you see around you; read about what we mold plastics to do; meditate on the importance of steel in civilization; figure out what’s driving the decline in the cost of solar energy production, or how we draw electricity from nuclear fission, or what it takes to extract petroleum or natural gas from the ground.

...

Here’s one more point that I’d like to add on Girard at college: I wonder if to some extent current dynamics are the result of the liberal arts approach of “college teaches you how to think, not what to think.” I’ve never seen much data to support this wonderful claim that college is good at teaching critical thinking skills. Instead, students spend most of their energies focused on raising or lowering the status of the works they study or the people around them, giving rise to the Girardian terror that has gripped so many campuses.

College as an incubator of Girardian terror: http://danwang.co/college-girardian-terror/
It’s hard to construct a more perfect incubator for mimetic contagion than the American college campus. Most 18-year-olds are not super differentiated from each other. By construction, whatever distinctions any does have are usually earned through brutal, zero-sum competitions. These tournament-type distinctions include: SAT scores at or near perfection; being a top player on a sports team; gaining master status from chess matches; playing first instrument in state orchestra; earning high rankings in Math Olympiad; and so on, culminating in gaining admission to a particular college.

Once people enter college, they get socialized into group environments that usually continue to operate in zero-sum competitive dynamics. These include orchestras and sport teams; fraternities and sororities; and many types of clubs. The biggest source of mimetic pressures are the classes. Everyone starts out by taking the same intro classes; those seeking distinction throw themselves into the hardest classes, or seek tutelage from star professors, and try to earn the highest grades.

Mimesis Machines and Millennials: http://quillette.com/2017/11/02/mimesis-machines-millennials/
In 1956, a young Liverpudlian named John Winston Lennon heard the mournful notes of Elvis Presley’s Heartbreak Hotel, and was transformed. He would later recall, “nothing really affected me until I heard Elvis. If there hadn’t been an Elvis, there wouldn’t have been the Beatles.” It is an ancient human story. An inspiring model, an inspired imitator, and a changed world.

Mimesis is the phenomenon of human mimicry. Humans see, and they strive to become what they see. The prolific Franco-Californian philosopher René Girard described the human hunger for imitation as mimetic desire. According to Girard, mimetic desire is a mighty psychosocial force that drives human behavior. When attempted imitation fails (i.e. I want, but fail, to imitate my colleague’s promotion to VP of Business Development), mimetic rivalry arises. According to mimetic theory, periodic scapegoating—the ritualistic expelling of a member of the community—evolved as a way for archaic societies to diffuse rivalries and maintain the general peace.

As civilization matured, social institutions evolved to prevent conflict. To Girard, sacrificial religious ceremonies first arose as imitations of earlier scapegoating rituals. From the mimetic worldview, healthy social institutions perform two primary functions:

They satisfy mimetic desire and reduce mimetic rivalry by allowing imitation to take place.
They thereby reduce the need to diffuse mimetic rivalry through scapegoating.
Tranquil societies possess and value institutions that are mimesis tolerant. These institutions, such as religion and family, are Mimesis Machines. They enable millions to see, imitate, and become new versions of themselves. Mimesis Machines satiate the primal desire for imitation, and produce happy, contented people. Through Mimesis Machines, Elvis fans can become Beatles.

Volatile societies, on the other hand, possess and value mimesis resistant institutions that frustrate attempts at mimicry, and mass produce frustrated, resentful people. These institutions, such as capitalism and beauty hierarchies, are Mimesis Shredders. They stratify humanity, and block the ‘nots’ from imitating the ‘haves’.
techtariat  venture  commentary  reflection  innovation  definite-planning  thiel  barons  economics  growth-econ  optimism  creative  malaise  stagnation  higher-ed  status  error  the-world-is-just-atoms  heavy-industry  sv  zero-positive-sum  japan  flexibility  china  outcome-risk  uncertainty  long-short-run  debt  trump  entrepreneurialism  human-capital  flux-stasis  cjones-like  scifi-fantasy  labor  dirty-hands  engineering  usa  frontier  speedometer  rent-seeking  econ-productivity  government  healthcare  essay  rhetoric  contrarianism  nascent-state  unintended-consequences  volo-avolo  vitality  technology  tech  cs  cycles  energy-resources  biophysical-econ  trends  zeitgeist  rot  alt-inst  proposal  multi  news  org:mag  org:popup  philosophy  big-peeps  speculation  concept  religion  christianity  theos  buddhism  politics  polarization  identity-politics  egalitarianism-hierarchy  inequality  duplication  society  anthropology  culture-war  westminster  info-dynamics  tribalism  institutions  envy  age-generation  letters  noble-lie 
october 2017 by nhaliday
Two theories of home heat control - ScienceDirect
People routinely develop their own theories to explain the world around them. These theories can be useful even when they contradict conventional technical wisdom. Based on in-depth interviews about home heating and thermostat setting behavior, the present study presents two theories people use to understand and adjust their thermostats. The two theories are here called the feedback theory and the valve theory. The valve theory is inconsistent with engineering knowledge, but is estimated to be held by 25% to 50% of Americans. Predictions of each of the theories are compared with the operations normally performed in home heat control. This comparison suggests that the valve theory may be highly functional in normal day-to-day use. Further data is needed on the ways this theory guides behavior in natural environments.
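A toy simulation makes the two theories concrete (all constants assumed). Under the feedback theory a home furnace is a bang-bang controller with fixed output, so a higher setpoint changes where heating stops, not how fast the room warms – precisely the prediction the valve theory gets wrong:

def minutes_to_reach(target, setpoint, temp=15.0, furnace_rate=0.5, dt=1.0):
    """Minutes until room temperature reaches `target` (degrees C, assumed)."""
    t = 0.0
    while temp < target and temp < setpoint:
        temp += furnace_rate * dt  # furnace is simply ON: no partial "valve"
        t += dt
    return t

print(minutes_to_reach(20, setpoint=20))  # 10.0 minutes
print(minutes_to_reach(20, setpoint=30))  # 10.0 minutes -- no faster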
study  hci  ux  hardware  embodied  engineering  dirty-hands  models  thinking  trivia  cocktail  map-territory  realness  neurons  psychology  cog-psych  social-psych  error  usa  poll  descriptive  temperature  protocol-metadata  form-design 
september 2017 by nhaliday
PRRI: America’s Changing Religious Identity
https://www.washingtonpost.com/blogs/right-turn/wp/2017/09/06/the-demographic-change-fueling-the-angst-of-trumps-base/
https://gnxp.nofe.me/2017/09/08/as-many-americans-think-the-bible-is-a-book-of-fables-as-that-it-is-the-word-of-god/
America, that is, the United States of America, has long been a huge exception for the secularization model. Basically as a society develops and modernizes it becomes more secular. At least that’s the model.

...

Today everyone is talking about the Pew survey which shows the marginalization of the Anglo-Protestant America which I grew up in. This marginalization is due to secularization broadly, and non-Hispanic whites in particular. You don’t need Pew to tell you this.

...

Note: Robert Putnam’s American Grace is probably the best book which highlights the complex cultural forces which ushered in the second wave of secularization. The short answer is that the culture wars diminished Christianity in the eyes of liberals.

Explaining Why More Americans Have No Religious Preference: Political Backlash and Generational Succession, 1987-2012: https://www.sociologicalscience.com/articles-vol1-24-423/
the causal direction in the rise of the “Nones” likely runs from political identity as a liberal or conservative to religious identity

The Persistent and Exceptional Intensity of American Religion: A Response to Recent Research: https://osf.io/preprints/socarxiv/xd37b
But we show that rather than religion fading into irrelevance as the secularization thesis would suggest, intense religion—strong affiliation, very frequent practice, literalism, and evangelicalism—is persistent and, in fact, only moderate religion is on the decline in the United States.

https://twitter.com/avermeule/status/913823410609950721
https://archive.is/CiCok
As in the U.K., so now too in America: the left establishment is moving towards an open view that orthodox Christians are unfit for office.
https://twitter.com/avermeule/status/913880665011228673
https://archive.is/LZiyV

https://twitter.com/tcjfs/status/883764202539798529
https://archive.is/HvVrN
i've had the thought that it's a plausible future where traditional notions of theism become implicitly non-white

https://mereorthodoxy.com/bourgeois-christian-politics/

http://www.cnn.com/2015/05/12/living/pew-religion-study/index.html
http://coldcasechristianity.com/2017/are-young-people-really-leaving-christianity/
Some writers and Christian observers deny the flight of young people altogether, but the growing statistics should alarm us enough as Church leaders to do something about the dilemma. My hope in this post is to simply consolidate some of the research (many of the summaries are directly quoted) so you can decide for yourself. I’m going to organize the recent findings in a way that illuminates the problem:

'Christianity as default is gone': the rise of a non-Christian Europe: https://www.theguardian.com/world/2018/mar/21/christianity-non-christian-europe-young-people-survey-religion
In the UK, only 7% of young adults identify as Anglican, fewer than the 10% who categorise themselves as Catholic. Young Muslims, at 6%, are on the brink of overtaking those who consider themselves part of the country’s established church.

https://en.wikipedia.org/wiki/Postchristianity
Other scholars have disputed the global decline of Christianity, and instead hypothesized an evolution of Christianity which allows it to not only survive, but actively expand its influence in contemporary societies.

Philip Jenkins hypothesized a "Christian Revolution" in the Southern nations, such as Africa, Asia and Latin America, where instead of facing decline, Christianity is actively expanding. The relevance of Christian teachings in the global South will allow the Christian population in these areas to continually increase, and together with the shrinking of the Western Christian population, will form a "new Christendom" in which the majority of the world's Christian population can be found in the South.[9]
news  org:ngo  data  analysis  database  white-paper  usa  religion  christianity  theos  politics  polisci  coalitions  trends  zeitgeist  demographics  race  latin-america  within-group  northeast  the-south  the-west  asia  migration  gender  sex  sexuality  distribution  visualization  age-generation  diversity  maps  judaism  time-series  protestant-catholic  other-xtian  gender-diff  education  compensation  india  islam  multi  org:rec  pro-rata  gnxp  scitariat  huntington  prediction  track-record  error  big-peeps  statesmen  general-survey  poll  putnam-like  study  sociology  roots  impetus  history  mostly-modern  books  recommendations  summary  stylized-facts  values  twitter  social  discussion  journos-pundits  backup  tradition  gnon  unaffiliated  right-wing  identity-politics  eric-kaufmann  preprint  uniqueness  comparison  similarity  org:lite  video  links  list  survey  internet  life-history  fertility  social-capital  wiki  reference  org:anglo  world  developing-world  europe  EU  britain  rot  a 
september 2017 by nhaliday
Which industries are the most liberal and most conservative?
How Democratic or Republican is your job? This tool tells you: https://www.washingtonpost.com/news/the-fix/wp/2015/06/03/how-democratic-or-republican-is-your-job-this-tool-tells-you/?utm_term=.e19707abd9f1

http://verdantlabs.com/politics_of_professions/index.html

What you do and how you vote: http://www.pleeps.org/2017/01/07/what-you-do-and-how-you-vote/

trending blue across white-collar professions:
https://www.nytimes.com/2019/09/18/opinion/trump-fundraising-donors.html
https://twitter.com/adam_bonica/status/1174536380329803776
https://archive.is/r7YB6

https://twitter.com/whyvert/status/1174735746088996864
https://archive.is/Cwrih
This is partly because the meaning of left and right changed during that period. Left used to be about protecting workers. Now it's mainly about increasing the power of the elite class over the working class - thus their increased support.
--
yes, it is a different kind of left now

academia:
https://en.wikipedia.org/wiki/Political_views_of_American_academics

The Legal Academy's Ideological Uniformity: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2953087

Homogenous: The Political Affiliations of Elite Liberal Arts College Faculty: https://sci-hub.tw/10.1007/s12129-018-9700-x
includes crosstab by discipline

https://www.conservativecriminology.com/uploads/5/6/1/7/56173731/lounsbery_9-25.pdf#page=28
Neil Gross, Solon Simmons
THE SOCIAL AND POLITICAL VIEWS OF AMERICAN PROFESSORS

another crosstab
description of data sampling on page 21, meant to be representative of all undergraduate degree-granting institutions

Computer science 32.3 58.1 9.7

It’s finally out–The big review paper on the lack of political diversity in social psychology: https://heterodoxacademy.org/2015/09/14/bbs-paper-on-lack-of-political-diversity/
https://heterodoxacademy.org/2015/09/21/political-diversity-response-to-33-critiques/
http://righteousmind.com/viewpoint-diversity/
http://www.nationalaffairs.com/publications/detail/real-academic-diversity
http://quillette.com/2017/07/06/social-sciences-undergoing-purity-spiral/
What’s interesting about Haidt’s alternative interpretation of the liberal progress narrative is that he mentions two elements central to the narrative—private property and nations. And what has happened to a large extent is that as the failures of communism have become increasingly apparent many on the left—including social scientists—have shifted their activism away from opposing private property and towards other aspects, for example globalism.

But how do we know a similarly disastrous thing is not going to happen with globalism as happened with communism? What if some form of national and ethnic affiliation is a deep-seated part of human nature, and that trying to forcefully suppress it will eventually lead to a disastrous counter-reaction? What if nations don’t create conflict, but alleviate it? What if a decentralised structure is the best way for human society to function?
news  org:lite  data  study  summary  politics  polisci  ideology  correlation  economics  finance  law  academia  media  tech  sv  heavy-industry  energy-resources  biophysical-econ  agriculture  pharma  things  visualization  crosstab  phalanges  housing  scale  money  elite  charity  class-warfare  coalitions  demographics  business  distribution  polarization  database  multi  org:rec  dynamic  tools  calculator  list  top-n  labor  management  leadership  government  hari-seldon  gnosis-logos  career  planning  jobs  dirty-hands  long-term  scitariat  haidt  org:ngo  commentary  higher-ed  psychology  social-psych  social-science  westminster  institutions  roots  chart  discrimination  debate  critique  biases  diversity  homo-hetero  replication  org:mag  letters  org:popup  ethnocentrism  error  communism  universalism-particularism  whiggish-hegelian  us-them  tribalism  wonkish  org:data  analysis  general-survey  exploratory  stylized-facts  elections  race  education  twitter  social  backup  journos-pundits  gnon  aphorism  impetus  interests  self-interest 
september 2017 by nhaliday
Europa, Enceladus, Moon Miranda | West Hunter
A lot of ice moons seem to have interior oceans, warmed by tidal flexing and possibly radioactivity. But they’re lousy candidates for life, because you need free energy; and there’s very little in the interior oceans of such systems.

It is possible that NASA is institutionally poor at pointing this out.
west-hunter  scitariat  discussion  ideas  rant  speculation  prediction  government  dirty-hands  space  xenobio  oceans  fluid  thermo  phys-energy  temperature  no-go  volo-avolo  physics  equilibrium  street-fighting  nibble  error  track-record  usa  bio  eden  cybernetics  complex-systems 
september 2017 by nhaliday
Medicine as a pseudoscience | West Hunter
The idea that venesection was a good thing, or at least not so bad, on the grounds that one in a few hundred people have hemochromatosis (in Northern Europe) reminds me of the people who don’t wear a seatbelt, since it would keep them from being thrown out of their convertible into a waiting haystack, complete with nubile farmer’s daughter. Daughters. It could happen. But it’s not the way to bet.

Back in the good old days, Charles II, age 53, had a fit one Sunday evening, while fondling two of his mistresses.

Monday they bled him (cupping and scarifying) of eight ounces of blood. Followed by an antimony emetic, vitriol in peony water, purgative pills, and a clyster. Followed by another clyster after two hours. Then syrup of blackthorn, more antimony, and rock salt. Next, more laxatives, white hellebore root up the nostrils. Powdered cowslip flowers. More purgatives. Then Spanish Fly. They shaved his head and stuck blistering plasters all over it, plastered the soles of his feet with tar and pigeon-dung, then said good-night.

...

Friday. The king was worse. He tells them not to let poor Nelly starve. They try the Oriental Bezoar Stone, and more bleeding. Dies at noon.

Most people didn’t suffer this kind of problem with doctors, since they never saw one. Charles had six. Now Bach and Handel saw the same eye surgeon, John Taylor – who blinded both of them. Not everyone can put that on his resume!

You may wonder how medicine continued to exist, if it had a negative effect, on the whole. There’s always the placebo effect – at least there would be, if it existed. Any real placebo effect is very small: I’d guess exactly zero. But there is regression to the mean. You see the doctor when you’re feeling worse than average – and afterwards, if he doesn’t kill you outright, you’re likely to feel better. Which would have happened whether you’d seen him or not, but they didn’t often do RCTs back in the day – I think James Lind was the first (1747).
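
The regression-to-the-mean point is easy to check with a toy simulation (all numbers assumed): wellbeing is noise around a stable baseline, you "see the doctor" only on unusually bad days, and the treatment does literally nothing – yet patients improve on average:

import random
random.seed(0)

visit_threshold = -1.5  # how bad you must feel before calling the doctor (assumed)
before, after = [], []
for _ in range(100_000):
    today = random.gauss(0, 1)            # daily wellbeing, baseline 0
    if today < visit_threshold:           # bad enough to see the doctor
        before.append(today)
        after.append(random.gauss(0, 1))  # next day: zero treatment effect

print(sum(before) / len(before))  # ~ -1.9: how patients felt at the visit
print(sum(after) / len(after))    # ~  0.0: "recovered" by mean reversion alone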

Back in the late 19th century, Christian Scientists did better than others when sick, because they didn’t believe in medicine. For reasons I think mistaken, because Mary Baker Eddy rejected the reality of the entire material world, but hey, it worked. Parenthetically, what triggered all that New Age nonsense in 19th century New England? Hash?

This did not change until fairly recently. Sometime in the early 20th century, medicine – clinical medicine, what doctors do – hit break-even. Now we can’t do without it. I wonder if there are, or will be, other examples of such a pile of crap turning (mostly) into a real science.

good tweet: https://twitter.com/bowmanthebard/status/897146294191390720
The brilliant GP I've had for 35+ years has retired. How can I find another one who meets my requirements?

1 is overweight
2 drinks more than officially recommended amounts
3 has an amused, tolerant attitude to human failings
4 is well aware that we're all going to die anyway, & there are better or worse ways to die
5 has a healthy skeptical attitude to mainstream medical science
6 is wholly dismissive of "alternative" medicine
7 believes in evolution
8 thinks most diseases get better without intervention, & knows the dangers of false positives
9 understands the base rate fallacy
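
a worked Bayes example for points 8 and 9, with made-up illustrative numbers (a sketch, not from the thread): 1% prevalence, 90% sensitivity, 5% false-positive rate:

```python
# Base rate fallacy with made-up numbers: a disease with 1% prevalence,
# a test with 90% sensitivity and a 5% false-positive rate.
prevalence, sensitivity, false_pos = 0.01, 0.90, 0.05

p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_sick_given_positive = sensitivity * prevalence / p_positive
print(f"P(sick | positive test) = {p_sick_given_positive:.1%}")  # ~15.4%
# most positives are false: why indiscriminate testing and intervention
# on mostly-healthy people does net harm
```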

EconPapers: Was Civil War Surgery Effective?: http://econpapers.repec.org/paper/htrhcecon/444.htm
contra Greg Cochran:
To shed light on the subject, I analyze a data set created by Dr. Edmund Andrews, a Civil war surgeon with the 1st Illinois Light Artillery. Dr. Andrews’s data can be rendered into an observational data set on surgical intervention and recovery, with controls for wound location and severity. The data also admits instruments for the surgical decision. My analysis suggests that Civil War surgery was effective, and increased the probability of survival of the typical wounded soldier, with average treatment effect of 0.25-0.28.

Medical Prehistory: https://westhunt.wordpress.com/2016/03/14/medical-prehistory/
What ancient medical treatments worked?

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76878
In some very, very limited conditions, bleeding?
--
Bad for you 99% of the time.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76947
Colchicine – used to treat gout – discovered by the Ancient Greeks.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76973
Dracunculiasis (Guinea worm)
Wrap the emerging end of the worm around a stick and slowly pull it out.
(3,500 years later, this remains the standard treatment.)
https://en.wikipedia.org/wiki/Ebers_Papyrus

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76971
Some of the progress is from formal medicine, most is from civil engineering, better nutrition (ag science and physical chemistry), less crowded housing.

Nurses vs doctors: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/
Medicine, the things that doctors do, was an ineffective pseudoscience until fairly recently. Until 1800 or so, they were wrong about almost everything. Bleeding, cupping, purging, the four humors – useless. In the 1800s, some began to realize that they were wrong, and became medical nihilists that improved outcomes by doing less. Some patients themselves came to this realization, as when Civil War casualties hid from the surgeons and had better outcomes. Sometime in the early 20th century, MDs reached break-even, and became an increasingly positive influence on human health. As Lewis Thomas said, medicine is the youngest science.

Nursing, on the other hand, has always been useful. Just making sure that a patient is warm and nourished when too sick to take care of himself has helped many survive. In fact, some of the truly crushing epidemics have been greatly exacerbated when there were too few healthy people to take care of the sick.

Nursing must be old, but it can’t have existed forever. Whenever it came into existence, it must have changed the selective forces acting on the human immune system. Before nursing, being sufficiently incapacitated would have been uniformly fatal – afterwards, immune responses that involved a period of incapacitation (with eventual recovery) could have been selectively favored.

when MDs broke even: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/#comment-58981
I’d guess the 1930s. Lewis Thomas thought that he was living through big changes. They had a working serum therapy for lobar pneumonia (antibody-based). They had many new vaccines (diphtheria in 1923, whooping cough in 1926, BCG and tetanus in 1927, yellow fever in 1935, typhus in 1937). Vitamins had been mostly worked out. Insulin was discovered in 1921. Blood transfusions. The sulfa drugs, the first broad-spectrum antibiotics, showed up in 1935.

DALYs per doctor: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/
The disability-adjusted life year (DALY) is a measure of overall disease burden – the number of years lost. I’m wondering just how much harm premodern medicine did, per doctor. How many healthy years of life did a typical doctor destroy (net) in past times?

...

It looks as if the average doctor (in Western medicine) killed a bunch of people over his career (when contrasted with doing nothing). In the Charles Manson class.

Eventually the market saw through this illusion. Only took a couple of thousand years.
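
one way to run the Fermi estimate behind the Manson comparison; every number below is my own illustrative assumption, not from the post:

```python
# Fermi sketch of net DALYs destroyed per premodern doctor. Every number
# below is an assumption for illustration, not taken from the post.
career_years = 30
patients_per_year = 500       # consultations per year
harm_rate = 0.10              # fraction of visits leaving the patient worse
dalys_per_harm = 0.5          # healthy years lost per harmful visit
help_rate = 0.02              # bone-setting, guinea worm extraction, etc.
dalys_per_help = 0.5          # healthy years gained per helpful visit

visits = career_years * patients_per_year
net_dalys = visits * (harm_rate * dalys_per_harm - help_rate * dalys_per_help)
print(f"net healthy years destroyed per career: {net_dalys:,.0f}")  # ~600
# hundreds of life-years erased per doctor: the "Charles Manson class" claim
```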

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100741
That a very large part of healthcare spending is done for non-health reasons. He has a chapter on this in his new book, also check out his paper “Showing That You Care: The Evolution of Health Altruism” http://mason.gmu.edu/~rhanson/showcare.pdf
--
I ran into too much stupidity to finish the article. Hanson’s a loon. For example when he talks about the paradox of blacks being more sentenced on drug offenses than whites although they use drugs at similar rate. No paradox: guys go to the big house for dealing, not for using. Where does he live – Mars?

I had the same reaction when Hanson parroted some dipshit anthropologist arguing that the stupid things people do while drunk are due to social expectations, not really the alcohol.
Horseshit.

I don’t think that being totally unable to understand everybody around you necessarily leads to deep insights.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100744
What I’ve wondered is if there was anything that doctors did that actually was helpful and if perhaps that little bit of success helped them fool people into thinking the rest of it helped.
--
Setting bones. extracting arrows: spoon of Diocles. Colchicine for gout. Extracting the Guinea worm. Sometimes they got away with removing the stone. There must be others.
--
Quinine is relatively recent: post-1500. Obstetrical forceps also. Caesarean deliveries were almost always fatal to the mother until fairly recently.

Opium has been around for a long while: it works.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100839
If pre-modern medicine was indeed worse than useless – how do you explain no one noticing that patients who get expensive treatments are worse off than those who didn’t?
--
were worse off. People are kinda dumb – you’ve noticed?
--
My impression is that while people may be “kinda dumb”, ancient customs typically aren’t.
Even if we assume that all people who lived prior to the 19th century were too dumb to make the rational observation, wouldn’t you expect this ancient practice to be subject to selective pressure?
--
Your impression is wrong. Do you think that there is some slick reason for Carthaginians incinerating their first-born?

Theodoric of York, bloodletting: https://www.youtube.com/watch?v=yvff3TViXmY

details on blood-letting and hemochromatosis: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100746

Starting Over: https://westhunt.wordpress.com/2018/01/23/starting-over/
Looking back on it, human health would have … [more]
west-hunter  scitariat  discussion  ideas  medicine  meta:medicine  science  realness  cost-benefit  the-trenches  info-dynamics  europe  the-great-west-whale  history  iron-age  the-classics  mediterranean  medieval  early-modern  mostly-modern  🌞  harvard  aphorism  rant  healthcare  regression-to-mean  illusion  public-health  multi  usa  northeast  pre-ww2  checklists  twitter  social  albion  ability-competence  study  cliometrics  war  trivia  evidence-based  data  intervention  effect-size  revolution  speculation  sapiens  drugs  antiquity  lived-experience  list  survey  questions  housing  population  density  nutrition  wiki  embodied  immune  evolution  poast  chart  markets  civil-liberty  randy-ayndy  market-failure  impact  scale  pro-rata  estimate  street-fighting  fermi  marginal  truth  recruiting  alt-inst  academia  social-science  space  physics  interdisciplinary  ratty  lesswrong  autism  👽  subculture  hanson  people  track-record  crime  criminal-justice  criminology  race  ethanol  error  video  lol  comedy  tradition  institutions  iq  intelligence  MENA  impetus  legacy 
august 2017 by nhaliday
GALILEO'S STUDIES OF PROJECTILE MOTION
During the Renaissance, the focus, especially in the arts, was on representing as accurately as possible the real world, whether on a 2-dimensional surface or a solid such as marble or granite. This required two things. The first was new methods for drawing or painting, e.g., perspective. The second, relevant to this topic, was careful observation.

With the spread of cannon in warfare, the study of projectile motion had taken on greater importance, and now, with more careful observation and more accurate representation, came the realization that projectiles did not move the way Aristotle and his followers had said they did: the path of a projectile did not consist of two consecutive straight line components but was instead a smooth curve. [1]

Now someone needed to come up with a method to determine if there was a special curve a projectile followed. But measuring the path of a projectile was not easy.

Using an inclined plane, Galileo had performed experiments on uniformly accelerated motion, and he now used the same apparatus to study projectile motion. He placed an inclined plane on a table and provided it with a curved piece at the bottom which deflected an inked bronze ball into a horizontal direction. The ball thus accelerated rolled over the table-top with uniform motion and then fell off the edge of the table. Where it hit the floor, it left a small mark. The mark allowed the horizontal and vertical distances traveled by the ball to be measured. [2]

By varying the ball's horizontal velocity and vertical drop, Galileo was able to determine that the path of a projectile is parabolic.
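
modern restatement, as a quick check: uniform horizontal motion composed with uniformly accelerated fall gives constant y/x², i.e. a parabola:

```python
# Compose uniform horizontal motion with uniformly accelerated fall,
# as the ball leaving Galileo's table does; check that y/x^2 is constant.
g = 9.8      # m/s^2
v_x = 1.0    # horizontal speed off the table edge, m/s (arbitrary)

for t in [0.1, 0.2, 0.3, 0.4]:
    x = v_x * t            # uniform horizontal motion
    y = 0.5 * g * t ** 2   # distance fallen from rest
    print(f"t={t:.1f}s  x={x:.2f}  y={y:.3f}  y/x^2={y / x ** 2:.2f}")
# y/x^2 = g / (2 v_x^2) for every t, so the path y(x) is a parabola
```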

https://www.scientificamerican.com/author/stillman-drake/

Galileo's Discovery of the Parabolic Trajectory: http://www.jstor.org/stable/24949756

Galileo's Experimental Confirmation of Horizontal Inertia: Unpublished Manuscripts (Galileo Gleanings XXII): https://sci-hub.tw/https://www.jstor.org/stable/229718
- Stillman Drake

MORE THAN A DECADE HAS ELAPSED since Thomas Settle published a classic paper in which Galileo's well-known statements about his experiments on inclined planes were completely vindicated. [1] Settle's paper replied to an earlier attempt by Alexandre Koyre to show that Galileo could not have obtained the results he claimed in his Two New Sciences by actual observations using the equipment there described. The practical ineffectiveness of Settle's painstaking repetition of the experiments in altering the opinion of historians of science is only too evident. Koyre's paper was reprinted years later in book form without so much as a note by the editors concerning Settle's refutation of its thesis. [2] And the general literature continues to belittle the role of experiment in Galileo's physics.

More recently James MacLachlan has repeated and confirmed a different experiment reported by Galileo – one which has always seemed highly exaggerated and which was also rejected by Koyre with withering sarcasm. [3] In this case, however, it was accuracy of observation rather than precision of experimental data that was in question. Until now, nothing has been produced to demonstrate Galileo's skill in the design and the accurate execution of physical experiment in the modern sense.

Part of a page of Galileo's unpublished manuscript notes, written late in 1608, corroborating his inertial assumption and leading directly to his discovery of the parabolic trajectory. (Folio 116v, Vol. 72, MSS Galileiani; courtesy of the Biblioteca Nazionale di Firenze.)

...

(The same skeptical historians, however, believe that to show that Galileo could have used the medieval mean-speed theorem suffices to prove that he did use it, though it is found nowhere in his published or unpublished writings.)

...

Now, it happens that among Galileo's manuscript notes on motion there are many pages that were not published by Favaro, since they contained only calculations or diagrams without attendant propositions or explanations. Some pages that were published had first undergone considerable editing, making it difficult if not impossible to discern their full significance from their printed form. This unpublished material includes at least one group of notes which cannot satisfactorily be accounted for except as representing a series of experiments designed to test a fundamental assumption, which led to a new, important discovery. In these documents precise empirical data are given numerically, comparisons are made with calculated values derived from theory, a source of discrepancy from still another expected result is noted, a new experiment is designed to eliminate this, and further empirical data are recorded. The last-named data, although proving to be beyond Galileo's powers of mathematical analysis at the time, when subjected to modern analysis turn out to be remarkably precise. If this does not represent the experimental process in its fully modern sense, it is hard to imagine what standards historians require to be met.

The discovery of these notes confirms the opinion of earlier historians. They read only Galileo's published works, but did so without a preconceived notion of continuity in the history of ideas. The opinion of our more sophisticated colleagues has its sole support in philosophical interpretations that fit with preconceived views of orderly long-term scientific development. To find manuscript evidence that Galileo was at home in the physics laboratory hardly surprises me. I should find it much more astonishing if, by reasoning alone, working only from fourteenth-century theories and conclusions, he had continued along lines so different from those followed by profound philosophers in earlier centuries. It is to be hoped that, warned by these examples, historians will begin to restore the old cautionary clauses in analogous instances in which scholarly opinions are revised without new evidence, simply to fit historical theories.

In what follows, the newly discovered documents are presented in the context of a hypothetical reconstruction of Galileo's thought.

...

As early as 1590, if we are correct in ascribing Galileo's juvenile De motu to that date, it was his belief that an ideal body resting on an ideal horizontal plane could be set in motion by a force smaller than any previously assigned force, however small. By "horizontal plane" he meant a surface concentric with the earth but which for reasonable distances would be indistinguishable from a level plane. Galileo noted at the time that experiment did not confirm this belief that the body could be set in motion by a vanishingly small force, and he attributed the failure to friction, pressure, the imperfection of material surfaces and spheres, and the departure of level planes from concentricity with the earth. [5]

It followed from this belief that under ideal conditions the motion so induced would also be perpetual and uniform. Galileo did not mention these consequences until much later, and it is impossible to say just when he perceived them. They are, however, so evident that it is safe to assume that he saw them almost from the start. They constitute a trivial case of the proposition he seems to have been teaching before 1607 – that a mover is required to start motion, but that absence of resistance is then sufficient to account for its continuation. [6]

In mid-1604, following some investigations of motions along circular arcs and motions of pendulums, Galileo hit upon the law that in free fall the times elapsed from rest are as the smaller distance is to the mean proportional between two distances fallen. [7] This gave him the times-squared law as well as the rule of odd numbers for successive distances and speeds in free fall. During the next few years he worked out a large number of theorems relating to motion along inclined planes, later published in the Two New Sciences. He also arrived at the rule that the speed terminating free fall from rest was double the speed of the fall itself. These theorems survive in manuscript notes of the period 1604-1609. (Work during these years can be identified with virtual certainty by the watermarks in the paper used, as I have explained elsewhere. [8])
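
the mean-proportional rule in modern notation (my restatement):

```latex
% Galileo's rule: times from rest are as the smaller distance is to the
% mean proportional (geometric mean) of the two distances. For s_1 < s_2:
\[
  \frac{t_1}{t_2} \;=\; \frac{s_1}{\sqrt{s_1 s_2}} \;=\; \sqrt{\frac{s_1}{s_2}}
  \qquad\Longleftrightarrow\qquad s \propto t^2 .
\]
% Distances covered in successive equal intervals then go as
% n^2 - (n-1)^2 = 2n - 1, the odd numbers 1, 3, 5, 7, ...; and since
% v_terminal = g t while s = g t^2 / 2, the terminal speed is twice the
% mean speed of the fall: v_terminal = 2 s / t, the "doubling" rule.
```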

In the autumn of 1608, after a summer at Florence, Galileo seems to have interested himself in the question whether the actual slowing of a body moving horizontally followed any particular rule. On folio 117i of the manuscripts just mentioned, the numbers 196, 155, 121, 100 are noted along the horizontal line near the middle of the page (see Fig. 1). I believe that this was the first entry on this leaf, for reasons that will appear later, and that Galileo placed his grooved plane in the level position and recorded distances traversed in equal times along it. Using a metronome, and rolling a light wooden ball about 4 3/4 inches in diameter along a plane with a groove 1 3/4 inches wide, I obtained similar relations over a distance of 6 feet. The figures obtained vary greatly for balls of different materials and weights and for greatly different initial speeds. [9] But it suffices for my present purposes that Galileo could have obtained the figures noted by observing the actual deceleration of a ball along a level plane. It should be noted that the watermark on this leaf is like that on folio 116, to which we shall come presently, and it will be seen later that the two sheets are closely connected in time in other ways as well.

The relatively rapid deceleration is obviously related to the contact of ball and groove. Were the ball to roll right off the end of the plane, all resistance to horizontal motion would be virtually removed. If, then, there were any way to have a given ball leave the plane at different speeds of which the ratios were known, Galileo's old idea that horizontal motion would continue uniformly in the absence of resistance could be put to test. His law of free fall made this possible. The ratios of speeds could be controlled by allowing the ball to fall vertically through known heights, at the ends of which it would be deflected horizontally. Falls through given heights … [more]
nibble  org:junk  org:edu  physics  mechanics  gravity  giants  the-trenches  discovery  history  early-modern  europe  mediterranean  the-great-west-whale  frontier  science  empirical  experiment  arms  technology  lived-experience  time  measurement  dirty-hands  iron-age  the-classics  medieval  sequential  wire-guided  error  wiki  reference  people  quantitative-qualitative  multi  pdf  piracy  study  essay  letters  discrete  news  org:mag  org:sci  popsci 
august 2017 by nhaliday
Human Self as Information Agent: Functioning in a Social Environment Based on Shared Meanings — Experts@Minnesota
https://twitter.com/DegenRolf/status/874624254951776256
A neglected aspect of human selfhood is that people are information agents .... We initially assumed that accuracy would be the paramount concern for the information agent... But there are other considerations. Groups benefit from collective action, and so consensual agreement may be a high priority. Consensus may be needed in many situations when the means to verify information’s accuracy are beyond reach... Even if dissenters turn out to have more accurate information, disobedience is punished... Why might evolution have made people willing to sacrifice accuracy in favor of consensus, at least sometimes? Here we speculate that desire for consensus may derive from an innate social motive, whereas accuracy is an epistemic motive that would need to be acquired, and is therefore less deeply rooted and perhaps weaker. There may not be an innate motive to evaluate the truth value of assertions or to appreciate the meaningful difference between truth and falsehood. Hence it may be necessary to learn from experience that accuracy is an informational virtue that confers benefits, whereas consensus may be more closely tied to innate motivations .... The human mind discovers early in life that other minds have different information, which is something most other animals never discover. The desire to share attention and thoughts with others could thus be innate (or innately prepared) whereas the desire to sort truth from fiction may only come along later... The group first builds consensus and only after that is done seeks novel, idiosyncratic input that might increase accuracy. In an important sense, information shared by the group is valued more and perceived as more accurate than unshared information

When shared information coalesces into a collective worldview that includes values, it often has sociopolitical implications. Many groups are committed to particular ideologies or agenda, and information that impugns shared beliefs could be especially unwelcome. Political and religious ideologies have often sustained their power by asserting and enforcing views of questionable truthfulness. Hence individuals and groups may seek to exert control over the shared reality so as to benefit themselves. Thus many individuals will find it more important to get the group to agree with their favored view than to help it reach an objectively correct view. One fascinating question about official falsehoods is whether the ruling elites who propagate such views believe them or not... As an example close to home, psychology today is dominated by a political viewpoint that is progressively liberal, but it seems unlikely that many researchers knowingly assert falsehoods as scientific facts. They do however make publication of some findings much easier than others. The selective critique enables them to believe that the field’s body of knowledge supports their political views more than it does, because contrary facts and findings are suppressed.

Assessing relationships between conformity and meta-traits in an Asch-like paradigm: http://www.tandfonline.com/doi/abs/10.1080/15534510.2017.1371639
https://twitter.com/DegenRolf/status/902511106823999490
Replication of unflattering psychology classic: People bow to conformity pressure, mostly independent of personality

Smart Conformists: Children and Adolescents Associate Conformity With Intelligence Across Cultures: http://onlinelibrary.wiley.com/doi/10.1111/cdev.12935/abstract
https://twitter.com/DegenRolf/status/902398709228609536
Across cultures, children and adolescents viewed high conformity as a sign of intelligence and good behavior.
study  psychology  social-psych  cog-psych  network-structure  social-norms  preference-falsification  is-ought  truth  info-dynamics  pdf  piracy  westminster  multi  twitter  social  commentary  scitariat  quotes  metabuch  stylized-facts  realness  hidden-motives  impetus  neurons  rationality  epistemic  biases  anthropology  local-global  social-science  error  evopsych  EEA  🌞  tribalism  decision-making  spreading  replication  homo-hetero  flux-stasis  reason  noble-lie  reinforcement  memetics 
august 2017 by nhaliday
The Gulf Stream Myth
1. Fifty percent of the winter temperature difference across the North Atlantic is caused by the eastward atmospheric transport of heat released by the ocean that was absorbed and stored in the summer.
2. Fifty percent is caused by the stationary waves of the atmospheric flow.
3. The ocean heat transport contributes a small warming across the basin.

Is the Gulf Stream responsible for Europe’s mild winters?: http://ocp.ldeo.columbia.edu/res/div/ocp/gs/pubs/Seager_etal_QJ_2002.pdf
org:junk  environment  temperature  climate-change  usa  europe  comparison  hmm  regularizer  trivia  cocktail  error  oceans  chart  atmosphere  multi  pdf  study  earth  geography 
august 2017 by nhaliday
Low-Hanging Fruit: Nyekulturny | West Hunter
The methodology is what’s really interesting.  Kim Lewis and Slava Epstein sorted individual soil bacteria into chambers of a device they call the iChip, which is then buried in the ground – the point being that something like 98% of soil bacteria cannot be cultured in standard media, while in this approach, key compounds (whatever they are) can diffuse in from the soil, allowing something like 50% of soil bacteria species to grow.  They then tested the bacterial colonies (10,000 of them) to see if any slammed S. aureus – and some did.

...

I could be wrong, but I wonder if part of the explanation is that microbiology – the subject – is in relative decline, suffering because of funding and status competition with molecular biology and genomics (sexier and less useful than microbiology). That and the fact that big pharma is not enthusiastic about biological products.
west-hunter  scitariat  discussion  ideas  speculation  bio  science  medicine  meta:medicine  low-hanging  error  stagnation  disease  parasites-microbiome  pharma  innovation  info-dynamics  the-world-is-just-atoms  discovery  the-trenches  alt-inst  dirty-hands  fashun  pragmatic  impact  cost-benefit  trends  ubiquity  prioritizing 
july 2017 by nhaliday
The Rise and Fall of Cognitive Control - Behavioral Scientist
The results highlight the downsides of controlled processing. Within a population, controlled processing may—rather than ensuring undeterred progress—usher in short-sighted, irrational, and detrimental behavior, ultimately leading to population collapse. This is because the innovations produced by controlled processing benefit everyone, even those who do not act with control. Thus, by making non-controlled agents better off, these innovations erode the initial advantage of controlled behavior. This results in the demise of control and the rise of lack-of-control. In turn, this eventually leads to a return to poor decision making and the breakdown of the welfare-enhancing innovations, possibly accelerated and exacerbated by the presence of the enabling technologies themselves. Our models therefore help to explain societal cycles whereby periods of rationality and forethought are followed by plunges back into irrationality and short-sightedness.

https://static1.squarespace.com/static/51ed234ae4b0867e2385d879/t/595fac998419c208a6d99796/1499442499093/Cyclical-Population-Dynamics.pdf
Psychologists, neuroscientists, and economists often conceptualize decisions as arising from processes that lie along a continuum from automatic (i.e., “hardwired” or overlearned, but relatively inflexible) to controlled (less efficient and effortful, but more flexible). Control is central to human cognition, and plays a key role in our ability to modify the world to suit our needs. Given its advantages, reliance on controlled processing may seem predestined to increase within the population over time. Here, we examine whether this is so by introducing an evolutionary game theoretic model of agents that vary in their use of automatic versus controlled processes, and in which cognitive processing modifies the environment in which the agents interact. We find that, under a wide range of parameters and model assumptions, cycles emerge in which the prevalence of each type of processing in the population oscillates between 2 extremes. Rather than inexorably increasing, the emergence of control often creates conditions that lead to its own demise by allowing automaticity to also flourish, thereby undermining the progress made by the initial emergence of controlled processing. We speculate that this observation may have relevance for understanding similar cycles across human history, and may lend insight into some of the circumstances and challenges currently faced by our species.
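
a toy replicator-dynamics caricature of the mechanism (my own simplification, not the authors' actual equations): controlled agents pay a cost but raise an environment quality E that benefits everyone, eroding their own edge:

```python
# Toy replicator dynamics (my own caricature, not the paper's model).
# Controlled agents pay cost c but innovate; innovation raises an
# environment quality E that benefits *everyone*, eroding control's edge.
a, c = 2.0, 1.0       # benefit of control in a poor environment; its cost
k, dt = 0.02, 0.1     # how fast E tracks the population; Euler step size
p, E = 0.05, 0.0      # initial share of controlled agents; environment

for step in range(4001):
    if step % 400 == 0:
        print(f"t={step * dt:5.0f}  controlled={p:.2f}  environment={E:.2f}")
    advantage = a * (1.0 - E) - c           # control pays only while E is low
    p += p * (1.0 - p) * advantage * dt     # replicator equation (Euler step)
    E += k * (p - E) * dt                   # environment lags the population
    p = min(max(p, 1e-3), 1.0 - 1e-3)       # keep the share strictly interior
# p overshoots, crashes once E catches up, and recovers after E decays:
# damped boom-bust cycles around the equilibrium p = E = 1 - c/a
```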
econotariat  economics  political-econ  policy  decision-making  behavioral-econ  psychology  cog-psych  cycles  oscillation  unintended-consequences  anthropology  broad-econ  cultural-dynamics  tradeoffs  cost-benefit  rot  dysgenics  study  summary  multi  EGT  dynamical  volo-avolo  self-control  discipline  the-monster  pdf  error  rationality  info-dynamics  bounded-cognition  hive-mind  iq  intelligence  order-disorder  risk  microfoundations  science-anxiety  big-picture  hari-seldon  cybernetics 
july 2017 by nhaliday
How accurate are population forecasts?
2 The Accuracy of Past Projections: https://www.nap.edu/read/9828/chapter/4
good ebook:
Beyond Six Billion: Forecasting the World's Population (2000)
https://www.nap.edu/read/9828/chapter/2
Appendix A: Computer Software Packages for Projecting Population
https://www.nap.edu/read/9828/chapter/12
PDE Population Projections looks most relevant for my interests but it's also *ancient*
https://applieddemogtoolbox.github.io/Toolbox/
This Applied Demography Toolbox is a collection of applied demography computer programs, scripts, spreadsheets, databases and texts.
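
the core of all these packages is cohort-component bookkeeping; a minimal sketch with illustrative numbers (three 20-year age groups, single sex, no migration):

```python
# Survive each cohort forward one step, then add a new birth cohort.
# All numbers are illustrative, not calibrated to anything.
pop = [100.0, 90.0, 80.0]       # millions aged 0-19, 20-39, 40+
survival = [0.99, 0.97, 0.45]   # chance of living into the next group
fertility = 0.9                 # children per person aged 20-39 per step

def step(pop):
    young, adult, old = pop
    return [
        adult * fertility,                        # births over the 20 years
        young * survival[0],                      # 0-19 grows into 20-39
        adult * survival[1] + old * survival[2],  # 20-39 joins surviving 40+
    ]

for year in range(2020, 2121, 20):
    print(year, [round(x, 1) for x in pop])
    pop = step(pop)
```

the real packages add 5-year groups, both sexes, migration, and time-varying rates; the hard part is guessing future fertility and mortality schedules, not the arithmetic.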

How Accurate Are the United Nations World Population Projections?: http://pages.stern.nyu.edu/~dbackus/BCH/demography/Keilman_JDR_98.pdf

cf. Razib on this: https://pinboard.in/u:nhaliday/b:d63e6df859e8
news  org:lite  prediction  meta:prediction  tetlock  demographics  population  demographic-transition  fertility  islam  world  developing-world  africa  europe  multi  track-record  accuracy  org:ngo  pdf  study  sociology  measurement  volo-avolo  methodology  estimate  data-science  error  wire-guided  priors-posteriors  books  guide  howto  software  tools  recommendations  libraries  gnxp  scitariat 
july 2017 by nhaliday
The Greatest Generation | West Hunter
But when you consider that people must have had 48 chromosomes back then, rather than the current measly 46, much is explained.

Theophilus Painter, a prominent cytologist, had investigated human chromosome number in 1923. He thought that there were 24 in sperm cells, resulting in a count of 48, which is entirely reasonable. That is definitely the case for all our closest relatives (chimpanzees, bonobos, gorillas, and orangutans).

The authorities say that Painter made a mistake, and that humans always had 46 chromosomes. But then, for 30 years after Painter’s work, the authorities said that people had 48. Textbooks in genetics continued to say that Man has 48 chromosomes up until the mid 1950s. Many cytologists and geneticists studied human chromosomes during that period, but they knew that there were 48, and that’s what they saw. Now they know that there are 46, and that’s what every student sees.

Either the authorities are fallible and most people are sheep, or human chromosome number actually changed sometime after World War II. No one could believe the first alternative: it would hurt our feelings, and therefore cannot be true. No, we have a fascinating result: people today are fundamentally different from the Greatest Generation, biologically different: we’re two chromosomes shy of a load. So it’s not our fault!

http://blogs.discovermagazine.com/loom/2012/07/19/the-mystery-of-the-missing-chromosome-with-a-special-guest-appearance-from-facebook-creationists/

funny comment: https://westhunt.wordpress.com/2014/11/19/the-greatest-generation/#comment-62920
“some social environments are better than others at extracting the best from its people”

That’s very true – we certainly don’t seem to be doing a very good job of it. It’s a minor matter, but threatening brilliant engineers with death or professional ruin because of their sexist sartorial choices probably isn’t helping…

I used to do some engineering, and if someone had tried that on me, I’d have told him to go fuck himself. Is that a lost art?

https://www.theguardian.com/science/2014/nov/14/rosetta-comet-dr-matt-taylor-apology-sexist-shirt
west-hunter  scitariat  stories  history  mostly-modern  usa  world-war  pre-ww2  science  bounded-cognition  error  being-right  info-dynamics  genetics  genomics  bio  troll  multi  news  org:sci  popsci  nature  evolution  the-trenches  poast  rant  aphorism  gender  org:lite  alt-inst  tip-of-tongue  org:anglo 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has to do with the fact that academia is a peer-monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, and no one really has an incentive to monitor them seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. Thus I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?∗: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
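
a stripped-down version of their placebo exercise (my own toy parameterization, not their specification): AR(1) state outcomes, a fake law hitting half the states at mid-sample, naive standard errors that assume independent state-years:

```python
import random, statistics

random.seed(1)

# Placebo diff-in-diff a la Bertrand, Duflo & Mullainathan: serially
# correlated state outcomes, a fake "law" hitting half the states at
# mid-sample, and naive SEs that pretend all state-years are independent.
STATES, YEARS, RHO, REPS = 50, 20, 0.8, 1000
false_pos = 0

for _ in range(REPS):
    # AR(1) outcome series for each state, no true treatment effect
    panel = []
    for _ in range(STATES):
        y, series = 0.0, []
        for _ in range(YEARS):
            y = RHO * y + random.gauss(0, 1)
            series.append(y)
        panel.append(series)

    treated = set(random.sample(range(STATES), STATES // 2))
    post_minus_pre = [statistics.mean(s[YEARS // 2:])
                      - statistics.mean(s[:YEARS // 2]) for s in panel]
    did = (statistics.mean(post_minus_pre[i] for i in treated)
           - statistics.mean(post_minus_pre[i] for i in range(STATES)
                             if i not in treated))

    # naive SE for a difference of four cell means, all obs assumed iid
    var = statistics.variance([v for row in panel for v in row])
    se = (4 * var / ((STATES // 2) * (YEARS // 2))) ** 0.5
    false_pos += abs(did / se) > 1.96

print(f"placebo 'laws' significant at the 5% level: {false_pos / REPS:.0%}")
# with rho = 0.8 this comes out far above the nominal 5%
```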

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing simply one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI excludes the OLS estimate.
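
a toy weak-IV simulation illustrating points 3, 4, and 6 (my own setup, true effect zero; statistics.covariance needs Python 3.10+):

```python
import random, statistics  # statistics.covariance requires Python 3.10+

random.seed(2)

# True effect of x on y is zero; x is endogenous (shares the unobserved
# confounder u with y); z is a valid but weak instrument (first stage 0.1).
N, PI, REPS = 500, 0.1, 1000
iv_est, ols_est = [], []

for _ in range(REPS):
    zs, xs, ys = [], [], []
    for _ in range(N):
        z = random.gauss(0, 1)
        u = random.gauss(0, 1)                      # confounder
        zs.append(z)
        xs.append(PI * z + u)                       # weak first stage
        ys.append(0.8 * u + random.gauss(0, 1))     # beta = 0, endogeneity via u
    iv_est.append(statistics.covariance(zs, ys)
                  / statistics.covariance(zs, xs))  # just-identified 2SLS
    ols_est.append(statistics.covariance(xs, ys) / statistics.variance(xs))

mse = lambda ests: statistics.mean(e * e for e in ests)  # true beta is 0
print(f"OLS : median={statistics.median(ols_est):+.2f}  MSE={mse(ols_est):.2f}")
print(f"2SLS: median={statistics.median(iv_est):+.2f}  MSE={mse(iv_est):.2f}")
# OLS is biased (about +0.8) but tight; 2SLS is pulled toward OLS and its
# near-zero denominator produces wild outliers and a far larger MSE
```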

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.

https://twitter.com/wwwojtekk/status/1190731344336293889
https://archive.is/EZu0h
Great (not completely new but still good to have it in one place) discussion of RCTs and inference in economics by Deaton, my favorite sentences (more general than just about RCT) below
Randomization in the tropics revisited: a theme and eleven variations: https://scholar.princeton.edu/sites/default/files/deaton/files/deaton_randomization_revisited_v3_2019.pdf
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated  garett-jones 
june 2017 by nhaliday