nhaliday + evidence-based   139

Linus's Law - Wikipedia
Linus's Law is a claim about software development, named in honor of Linus Torvalds and formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999).[1][2] The law states that "given enough eyeballs, all bugs are shallow".

--

In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4] While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5][6]

Although detection of even deliberately inserted flaws[7][8] can be attributed to Raymond's claim, the persistence of the Heartbleed security bug in a critical piece of code for two years has been considered a refutation of Raymond's dictum.[9][10][11][12] Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain.[12] In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[11] Large-scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.
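[ed.: The diminishing returns Glass describes can be sketched with a toy independent-detection model (my illustration, not from any of the cited studies): if each reviewer independently spots a given bug with probability p, the chance it is found at all is 1 − (1 − p)^n, which flattens quickly past a handful of reviewers. The p = 0.4 below is an arbitrary assumption.]

```python
# Toy model: each of n reviewers independently finds a given bug
# with probability p; the bug is "shallow" if at least one finds it.
def p_found(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

# The marginal value of each extra reviewer drops geometrically,
# so most of the gain is realized by the first few reviewers.
for n in range(1, 9):
    print(n, round(p_found(n, 0.4), 3))
```

Under this (strong) independence assumption, four reviewers at p = 0.4 already find ~87% of bugs, and the eighth reviewer adds only a couple of points; the quoted 2-to-4-reviewer ceiling is consistent with that shape.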

Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs: https://academic.oup.com/cybersecurity/article/3/2/81/4524054

https://hbfs.wordpress.com/2009/03/31/how-many-eyeballs-to-make-a-bug-shallow/
wiki  reference  aphorism  ideas  stylized-facts  programming  engineering  linux  worse-is-better/the-right-thing  correctness  debugging  checking  best-practices  security  error  scale  ubiquity  collaboration  oss  realness  empirical  evidence-based  multi  study  info-econ  economics  intricacy  plots  manifolds  techtariat  cracker-prog  os  systems  magnitude  quantitative-qualitative  number  threat-modeling 
9 weeks ago by nhaliday
Extreme inbreeding in a European ancestry sample from the contemporary UK population | Nature Communications
Visscher et al.

In most human societies, there are taboos and laws banning mating between first- and second-degree relatives, but actual prevalence and effects on health and fitness are poorly quantified. Here, we leverage a large observational study of ~450,000 participants of European ancestry from the UK Biobank (UKB) to quantify extreme inbreeding (EI) and its consequences. We use genotyped SNPs to detect large runs of homozygosity (ROH) and call EI when >10% of an individual’s genome comprise ROHs. We estimate a prevalence of EI of ~0.03%, i.e., ~1/3652. EI cases have phenotypic means between 0.3 and 0.7 standard deviation below the population mean for 7 traits, including stature and cognitive ability, consistent with inbreeding depression estimated from individuals with low levels of inbreeding.
study  org:nat  bio  genetics  genomics  kinship  britain  pro-rata  distribution  embodied  iq  effect-size  tails  gwern  evidence-based  empirical 
9 weeks ago by nhaliday
The returns to speaking a second language
Does speaking a foreign language have an impact on earnings? The authors use a variety of empirical strategies to address this issue for a representative sample of U.S. college graduates. OLS regressions with a complete set of controls to minimize concerns about omitted variable biases, propensity score methods, and panel data techniques all lead to similar conclusions. The hourly earnings of those who speak a foreign language are more than 2 percent higher than the earnings of those who do not. The authors obtain higher and more imprecise point estimates using state high school graduation and college entry and graduation requirements as instrumental variables.

...

We find that college graduates who speak a second language earn, on average, wages that are 2 percent higher than those who don’t. We include a complete set of controls for general ability using information on grades and college admission tests and reduce the concern that selection drives the results controlling for the academic major chosen by the student. We obtain similar results with simple regression methods if we use nonparametric methods based on the propensity score and if we exploit the temporal variation in the knowledge of a second language. The estimates, thus, are not driven by observable differences in the composition of the pools of bilinguals and monolinguals, by the linear functional form that we impose in OLS regressions, or by constant unobserved heterogeneity. To reduce the concern that omitted variables bias our estimates, we make use of several instrumental variables (IVs). Using high school and college graduation requirements as instruments, we estimate more substantial returns to learning a second language, on the order of 14 to 30 percent. These results have high standard errors, but they suggest that OLS estimates may actually be biased downward.

...

In separate (unreported) regressions, we explore the labor market returns to speaking specific languages. We estimate OLS regressions following the previous specifications but allow the coefficient to vary by language spoken. In our sample, German is the language that obtains the highest rewards in the labor market. The returns to speaking German are 3.8 percent, while they are 2.3 percent for speaking French and 1.5 percent for speaking Spanish. In fact, only the returns to speaking German remain statistically significant in this regression. The results indicate that those who speak languages known by a smaller number of people obtain higher rewards in the labor market.14

The Relative Importance of the European Languages: https://ideas.repec.org/p/kud/kuiedp/0623.html
study  economics  labor  cost-benefit  hmm  language  foreign-lang  usa  empirical  evidence-based  education  human-capital  compensation  correlation  endogenous-exogenous  natural-experiment  policy  wonkish  🎩  french  germanic  latin-america  multi  spanish  china  asia  japan 
july 2019 by nhaliday
An Eye Tracking Study on camelCase and under_score Identifier Styles - IEEE Conference Publication
One main difference is that subjects were trained mainly in the underscore style and were all programmers. While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly.

To CamelCase or Under_score: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.158.9499
An empirical study of 135 programmers and non-programmers was conducted to better understand the impact of identifier style on code readability. The experiment builds on past work of others who study how readers of natural language perform such tasks. Results indicate that camel casing leads to higher accuracy among all subjects regardless of training, and those trained in camel casing are able to recognize identifiers in the camel case style faster than identifiers in the underscore style.

https://en.wikipedia.org/wiki/Camel_case#Readability_studies
A 2009 study comparing snake case to camel case found that camel case identifiers could be recognised with higher accuracy among both programmers and non-programmers, and that programmers already trained in camel case were able to recognise those identifiers faster than underscored snake-case identifiers.[35]

A 2010 follow-up study, under the same conditions but using an improved measurement method with use of eye-tracking equipment, indicates: "While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly."[36]
study  psychology  cog-psych  hci  programming  best-practices  stylized-facts  null-result  multi  wiki  reference  concept  empirical  evidence-based  efficiency  accuracy  time  code-organizing  grokkability  protocol-metadata  form-design  grokkability-clarity 
july 2019 by nhaliday
history - Why are UNIX/POSIX system call namings so illegible? - Unix & Linux Stack Exchange
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names.

http://neverworkintheory.org/2017/11/26/abbreviated-full-names.html
We carried out a family of controlled experiments to investigate whether the use of abbreviated identifier names, with respect to full-word identifier names, affects fault fixing in C and Java source code. This family consists of an original (or baseline) controlled experiment and three replications. We involved 100 participants with different backgrounds and experiences in total. Overall results suggested that there is no difference in terms of effort, effectiveness, and efficiency to fix faults, when source code contains either only abbreviated or only full-word identifier names. We also conducted a qualitative study to understand the values, beliefs, and assumptions that inform and shape fault fixing when identifier names are either abbreviated or full-word. We involved in this qualitative study six professional developers with 1--3 years of work experience. A number of insights emerged from this qualitative study and can be considered a useful complement to the quantitative results from our family of experiments. One of the most interesting insights is that developers, when working on source code with abbreviated identifier names, adopt a more methodical approach to identify and fix faults by extending their focus point and only in a few cases do they expand abbreviated identifiers.
q-n-a  stackex  trivia  programming  os  systems  legacy  legibility  ux  libraries  unix  linux  hacker  cracker-prog  multi  evidence-based  empirical  expert-experience  engineering  study  best-practices  comparison  quality  debugging  efficiency  time  code-organizing  grokkability  grokkability-clarity 
july 2019 by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.

...

However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.

...

A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.

...

Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however the combination of a simple, semantic plain-text representation with a flexible and professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed repeating it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

https://news.ycombinator.com/item?id=20191348
study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
june 2019 by nhaliday
When to use C over C++, and C++ over C? - Software Engineering Stack Exchange
You pick C when
- you need portable assembler (which is what C is, really) for whatever reason,
- your platform doesn't provide C++ (a C compiler is much easier to implement),
- you need to interact with other languages that can only interact with C (usually the lowest common denominator on any platform) and your code consists of little more than the interface, not making it worthwhile to lay a C interface over C++ code,
- you hack in an Open Source project (many of which, for various reasons, stick to C),
- you don't know C++.
In all other cases you should pick C++.

--

At the same time, I have to say that @Toll's answers (for one obvious example) have things just about backwards in most respects. Reasonably written C++ will generally be at least as fast as C, and often at least a little faster. Readability is generally much better, if only because you don't get buried in an avalanche of all the code for even the most trivial algorithms and data structures, all the error handling, etc.

...

As it happens, C and C++ are fairly frequently used together on the same projects, maintained by the same people. This allows something that's otherwise quite rare: a study that directly, objectively compares the maintainability of code written in the two languages by people who are equally competent overall (i.e., the exact same people). At least in the linked study, one conclusion was clear and unambiguous: "We found that using C++ instead of C results in improved software quality and reduced maintenance effort..."

--

(Side-note: Check out Linus Torvalds' rant on why he prefers C to C++. I don't necessarily agree with his points, but it gives you insight into why people might choose C over C++. Rather, people that agree with him might choose C for these reasons.)

http://harmful.cat-v.org/software/c++/linus

Why would anybody use C over C++? [closed]: https://stackoverflow.com/questions/497786/why-would-anybody-use-c-over-c
Joel's answer is good for reasons you might have to use C, though there are a few others:
- You must meet industry guidelines, which are easier to prove and test for in C.
- You have tools to work with C, but not C++ (think not just about the compiler, but all the support tools, coverage, analysis, etc)
- Your target developers are C gurus
- You're writing drivers, kernels, or other low level code
- You know the C++ compiler isn't good at optimizing the kind of code you need to write
- Your app not only doesn't lend itself to be object oriented, but would be harder to write in that form

In some cases, though, you might want to use C rather than C++:
- You want the performance of assembler without the trouble of coding in assembler (C++ is, in theory, capable of 'perfect' performance, but the compilers aren't as good at seeing optimizations a good C programmer will see)
- The software you're writing is trivial, or nearly so - whip out the tiny C compiler, write a few lines of code, compile and you're all set - no need to open a huge editor with helpers, no need to write practically empty and useless classes, deal with namespaces, etc. You can do nearly the same thing with a C++ compiler and simply use the C subset, but the C++ compiler is slower, even for tiny programs.
- You need extreme performance or small code size, and know the C++ compiler will actually make it harder to accomplish due to the size and performance of the libraries
- You contend that you could just use the C subset and compile with a C++ compiler, but you'll find that if you do that you'll get slightly different results depending on the compiler.

Regardless, if you're doing that, you're using C. Is your question really "Why don't C programmers use C++ compilers?" If it is, then you either don't understand the language differences, or you don't understand compiler theory.

--

- Because they already know C
- Because they're building an embedded app for a platform that only has a C compiler
- Because they're maintaining legacy software written in C
- You're writing something on the level of an operating system, a relational database engine, or a retail 3D video game engine.
q-n-a  stackex  programming  engineering  pls  best-practices  impetus  checklists  c(pp)  systems  assembly  compilers  hardware  embedded  oss  links  study  evidence-based  devtools  performance  rant  expert-experience  types  blowhards  linux  git  vcs  debate  rhetoric  worse-is-better/the-right-thing  cracker-prog  multi  metal-to-virtual  interface-compatibility 
may 2019 by nhaliday
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%. Finally, their Information Rates differ only by 4%.

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
--
[some comments on a different answer:]
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
...
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.

--

A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.
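[ed.: The characters-vs-bytes accounting is easy to check mechanically. The parallel strings below are my own toy example, chosen only for illustration:]

```python
import zlib

english = "information"
chinese = "信息"  # "information": fewer characters, but 3 UTF-8 bytes each

for text in (english, chinese):
    raw = text.encode("utf-8")
    print(len(text), "chars ->", len(raw), "UTF-8 bytes,",
          len(zlib.compress(raw)), "bytes zlib-compressed")
```

For strings this short the compressed sizes are dominated by zlib's fixed overhead; the compression comparison the Language Log post makes only becomes meaningful over long parallel corpora.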

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail  org:edu 
february 2019 by nhaliday
Randomizing Religion: The Impact of Protestant Evangelism on Economic Outcomes
To test the causal impact of religiosity, we conducted a randomized evaluation of an evangelical Protestant Christian values and theology education program that consisted of 15 weekly half-hour sessions. We analyze outcomes for 6,276 ultra-poor Filipino households six months after the program ended. We find _significant increases in religiosity and income_, no significant changes in total labor supply, assets, consumption, food security, or _life satisfaction, and a significant decrease in perceived relative economic status_. Exploratory analysis suggests the program may have improved hygienic practices and increased household discord, and that _the income treatment effect may operate through increasing grit_.

https://marginalrevolution.com/marginalrevolution/2018/02/randomizing-religion-impact-protestant-evangelism-economic-outcomes.html

Social Cohesion, Religious Beliefs, and the Effect of Protestantism on Suicide: https://www.mitpressjournals.org/doi/abs/10.1162/REST_a_00708
In an economic theory of suicide, we model social cohesion of the religious community and religious beliefs about afterlife as two mechanisms by which Protestantism increases suicide propensity. We build a unique micro-regional dataset of 452 Prussian counties in 1816-21 and 1869-71, when religiousness was still pervasive. Exploiting the concentric dispersion of Protestantism around Wittenberg, our instrumental-variable model finds that Protestantism had a substantial positive effect on suicide. Results are corroborated in first-difference models. Tests relating to the two mechanisms based on historical church-attendance data and modern suicide data suggest that the sociological channel plays the more important role.

this is also mentioned in the survey of reformation effects (under "dark" effects)
study  field-study  sociology  wonkish  intervention  religion  theos  branches  evidence-based  christianity  protestant-catholic  asia  developing-world  economics  compensation  money  labor  human-capital  emotion  s-factor  discipline  multi  social-structure  death  individualism-collectivism  n-factor  cohesion  causation  endogenous-exogenous  history  early-modern  europe  germanic  geography  within-group  urban-rural  marginal-rev  econotariat  commentary  class  personality  social-psych 
february 2018 by nhaliday
The Gelman View – spottedtoad
I have read Andrew Gelman’s blog for about five years, and gradually, I’ve decided that among his many blog posts and hundreds of academic articles, he is advancing a philosophy not just of statistics but of quantitative social science in general. Though not a statistician myself, here is how I would articulate the Gelman View:

A. Purposes

1. The purpose of social statistics is to describe and understand variation in the world. The world is a complicated place, and we shouldn’t expect things to be simple.
2. The purpose of scientific publication is to allow for communication, dialogue, and critique, not to “certify” a specific finding as absolute truth.
3. The incentive structure of science needs to reward attempts to independently investigate, reproduce, and refute existing claims and observed patterns, not just to advance new hypotheses or support a particular research agenda.

B. Approach

1. Because the world is complicated, the most valuable statistical models for the world will generally be complicated. The result of statistical investigations will only rarely be to give a stamp of truth on a specific effect or causal claim, but will generally show variation in effects and outcomes.
2. Whenever possible, the data, analytic approach, and methods should be made as transparent and replicable as possible, and should be fair game for anyone to examine, critique, or amend.
3. Social scientists should look to build upon a broad shared body of knowledge, not to “own” a particular intervention, theoretic framework, or technique. Such ownership creates incentive problems when the intervention, framework, or technique fails and the scientist is left trying to support a flawed structure.

C. Components

1. Measurement. How and what we measure is the first question, well before we decide on what the effects are or what is making that measurement change.
2. Sampling. Who we talk to or collect information from always matters, because we should always expect effects to depend on context.
3. Inference. While models should usually be complex, our inferential framework should be simple enough for anyone to follow along. And no p values.

He might disagree with all of this, or how it reflects his understanding of his own work. But I think it is a valuable guide to empirical work.
ratty  unaffiliated  summary  gelman  scitariat  philosophy  lens  stats  hypothesis-testing  science  meta:science  social-science  institutions  truth  is-ought  best-practices  data-science  info-dynamics  alt-inst  academia  empirical  evidence-based  checklists  strategy  epistemic 
november 2017 by nhaliday
Stretching and injury prevention: an obscure relationship. - PubMed - NCBI
Sports involving bouncing and jumping activities with a high intensity of stretch-shortening cycles (SSCs) [e.g. soccer and football] require a muscle-tendon unit that is compliant enough to store and release the high amount of elastic energy that benefits performance in such sports. If the participants of these sports have an insufficiently compliant muscle-tendon unit, the demands in energy absorption and release may rapidly exceed the capacity of the muscle-tendon unit. This may lead to an increased risk for injury of this structure. Consequently, the rationale for injury prevention in these sports is to increase the compliance of the muscle-tendon unit. Recent studies have shown that stretching programmes can significantly influence the viscosity of the tendon and make it significantly more compliant, and when a sport demands SSCs of high intensity, stretching may be important for injury prevention. This conjecture is in agreement with the available scientific clinical evidence from these types of sports activities. In contrast, when the type of sports activity contains low-intensity, or limited SSCs (e.g. jogging, cycling and swimming) there is no need for a very compliant muscle-tendon unit since most of its power generation is a consequence of active (contractile) muscle work that needs to be directly transferred (by the tendon) to the articular system to generate motion. Therefore, stretching (and thus making the tendon more compliant) may not be advantageous. This conjecture is supported by the literature, where strong evidence exists that stretching has no beneficial effect on injury prevention in these sports.
study  survey  health  embodied  fitness  fitsci  biomechanics  sports  soccer  running  endurance  evidence-based  null-result  realness  contrarianism  homo-hetero  comparison  embodied-pack 
november 2017 by nhaliday
Injury prevention in runners - "skimpy research" | RunningPhysio
Wherever possible RunningPhysio tries to be evidence based but in many cases there is a lack of high quality research. Extensive advice exists on injury prevention in runners and yet the research underpinning that advice is very limited, so limited in fact that one recent study described it as “skimpy”! So we decided we'd examine this “skimpy research”.
org:health  health  fitness  fitsci  evidence-based  running  embodied  analysis  survey  endurance 
october 2017 by nhaliday
The Constitutional Economics of Autocratic Succession on JSTOR
Abstract. The paper extends and empirically tests Gordon Tullock’s public choice theory of the nature of autocracy. A simple model of the relationship between constitutional rules governing succession in autocratic regimes and the occurrence of coups against autocrats is sketched. The model is applied to a case study of coups against monarchs in Denmark in the period ca. 935–1849. A clear connection is found between the specific constitutional rules governing succession and the frequency of coups. Specifically, the introduction of automatic hereditary succession in an autocracy provides stability and limits the number of coups conducted by contenders.

Table 2. General constitutional rules of succession, Denmark ca. 935–1849

To see this, the data may be divided into three categories of constitutional rules of succession: one of open succession (for the periods 935–1165 and 1326–40), one of appointed succession combined with election (for the periods 1165–1326 and 1340–1536), and one of more or less formalized hereditary succession (1536–1849). On the basis of this categorization, the data have been summarized in Table 3.

validity of empirics is a little sketchy

https://twitter.com/GarettJones/status/922103073257824257
https://archive.is/NXbdQ
The graphic novel it is based on is insightful and illustrates Tullock's game-theoretic, asymmetric-information views on autocracy.

Conclusions from Gordon Tullock's book Autocracy, pp. 211–215: https://astro.temple.edu/~bstavis/courses/tulluck.htm
study  polisci  political-econ  economics  cracker-econ  big-peeps  GT-101  info-econ  authoritarianism  antidemos  government  micro  leviathan  elite  power  institutions  garett-jones  multi  econotariat  twitter  social  commentary  backup  art  film  comics  fiction  competition  europe  nordic  empirical  evidence-based  incentives  legacy  peace-violence  order-disorder  🎩  organizing  info-dynamics  history  medieval  law  axioms  stylized-facts  early-modern  data  longitudinal  flux-stasis  shift  revolution  correlation  org:junk  org:edu  summary  military  war  top-n  hi-order-bits  feudal  democracy  sulla  leadership  nascent-state  protocol-metadata 
october 2017 by nhaliday
Frontier Culture: The Roots and Persistence of “Rugged Individualism” in the United States∗
In a classic 1893 essay, Frederick Jackson Turner argued that the American frontier promoted individualism. We revisit the Frontier Thesis and examine its relevance at the subnational level. Using Census data and GIS techniques, we track the frontier throughout the 1790–1890 period and construct a novel, county-level measure of historical frontier experience. We document the distinctive demographics of frontier locations during this period—disproportionately male, prime-age adult, foreign-born, and illiterate—as well as their higher levels of individualism, proxied by the share of infrequent names among children. Many decades after the closing of the frontier, counties with longer historical frontier experience exhibit more prevalent individualism and opposition to redistribution and regulation. We take several steps towards a causal interpretation, including an instrumental variables approach that exploits variation in the speed of westward expansion induced by prior national immigration inflows. Using linked historical Census data, we identify mechanisms giving rise to a persistent frontier culture. Greater individualism on the frontier was not driven solely by selective migration, suggesting that frontier conditions may have shaped behavior and values. We provide evidence suggesting that rugged individualism may be rooted in its adaptive advantage on the frontier and the opportunities for upward mobility through effort.
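The instrumental-variables strategy these settlement papers rely on is easy to see in miniature. A toy two-stage least squares on synthetic data (every number and variable name here is invented for illustration; nothing is from the paper itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic setup (illustrative only): U is an unobserved confounder,
# Z is an instrument that shifts X but affects Y only through X.
U = rng.normal(size=n)
Z = rng.normal(size=n)
X = 0.8 * Z + U + rng.normal(size=n)
Y = 0.5 * X + U + rng.normal(size=n)  # true causal effect of X on Y is 0.5

def ols_slope(x, y):
    """Slope of a univariate OLS regression of y on x (with intercept)."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

naive = ols_slope(X, Y)        # biased upward by the confounder U
X_hat = ols_slope(Z, X) * Z    # first stage: fitted X from the instrument
iv = ols_slope(X_hat, Y)       # second stage; equals cov(Z, Y) / cov(Z, X)

print(f"naive OLS: {naive:.2f}, 2SLS/IV: {iv:.2f}")
```

The naive regression picks up the confounder; the instrument recovers the true coefficient. Real IV work of the kind in these papers adds controls, standard errors, and arguments for instrument validity on top of this skeleton.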

https://twitter.com/whyvert/status/921900860224897024
https://archive.is/jTzSe

The Origins of Cultural Divergence: Evidence from a Developing Country.: http://economics.handels.gu.se/digitalAssets/1643/1643769_37.-hoang-anh-ho-ncde-2017-june.pdf
Cultural norms diverge substantially across societies, often even within the same country. In this paper, we test the voluntary settlement hypothesis, proposing that individualistic people tend to self-select into migrating out of reach from collectivist states towards the periphery and that such patterns of historical migration are reflected even in the contemporary distribution of norms. For more than one thousand years during the first millennium CE, northern Vietnam was under an exogenously imposed Chinese rule. From the eleventh to the eighteenth centuries, ancient Vietnam gradually expanded its territory through various waves of southward conquest. We demonstrate that areas being annexed earlier into ancient Vietnam are nowadays more (less) prone to collectivist (individualist) culture. We argue that the southward out-migration of individualist people was the main mechanism behind this finding. The result is consistent across various measures obtained from an extensive household survey and robust to various control variables as well as to different empirical specifications, including an instrumental variable estimation. A lab-in-the-field experiment also confirms the finding.
pdf  study  economics  broad-econ  cliometrics  path-dependence  evidence-based  empirical  stylized-facts  values  culture  cultural-dynamics  anthropology  usa  frontier  allodium  the-west  correlation  individualism-collectivism  measurement  politics  ideology  expression-survival  redistribution  regulation  political-econ  government  migration  history  early-modern  pre-ww2  things  phalanges  🎩  selection  polisci  roots  multi  twitter  social  commentary  scitariat  backup  gnon  growth-econ  medieval  china  asia  developing-world  shift  natural-experiment  endo-exo  endogenous-exogenous  hari-seldon 
october 2017 by nhaliday
Dressed for Success? The Effect of School Uniforms on Student Achievement and Behavior
Each school in the district determines adoption independently, providing variation over schools and time. By including student and school fixed-effects we find evidence that uniform adoption improves attendance in secondary grades, while in elementary schools they generate large increases in teacher retention.
study  economics  sociology  econometrics  natural-experiment  endo-exo  usa  the-south  social-norms  intervention  policy  wonkish  education  human-capital  management  industrial-org  organizing  input-output  evidence-based  endogenous-exogenous 
october 2017 by nhaliday
Tax Evasion and Inequality
This paper attempts to estimate the size and distribution of tax evasion in rich countries. We combine stratified random audits—the key source used to study tax evasion so far—with new micro-data leaked from two large offshore financial institutions, HSBC Switzerland (“Swiss leaks”) and Mossack Fonseca (“Panama Papers”). We match these data to population-wide wealth records in Norway, Sweden, and Denmark. We find that tax evasion rises sharply with wealth, a phenomenon that random audits fail to capture. On average about 3% of personal taxes are evaded in Scandinavia, but this figure rises to about 30% in the top 0.01% of the wealth distribution, a group that includes households with more than $40 million in net wealth. A simple model of the supply of tax evasion services can explain why evasion rises steeply with wealth. Taking tax evasion into account increases the rise in inequality seen in tax data since the 1970s markedly, highlighting the need to move beyond tax data to capture income and wealth at the top, even in countries where tax compliance is generally high. We also find that after reducing tax evasion—by using tax amnesties—tax evaders do not legally avoid taxes more. This result suggests that fighting tax evasion can be an effective way to collect more tax revenue from the ultra-wealthy.

Figure 1

America’s unreported economy: measuring the size, growth and determinants of income tax evasion in the U.S.: https://link.springer.com/article/10.1007/s10611-011-9346-x
This study empirically investigates the extent of noncompliance with the tax code and examines the determinants of federal income tax evasion in the U.S. Employing a refined version of Feige’s (Staff Papers, International Monetary Fund 33(4):768–881, 1986, 1989) General Currency Ratio (GCR) model to estimate a time series of unreported income as our measure of tax evasion, we find that 18–23% of total reportable income may not properly be reported to the IRS. This gives rise to a 2009 “tax gap” in the range of $390–$540 billion. As regards the determinants of tax noncompliance, we find that federal income tax evasion is an increasing function of the average effective federal income tax rate, the unemployment rate, the nominal interest rate, and per capita real GDP, and a decreasing function of the IRS audit rate. Despite important refinements of the traditional currency ratio approach for estimating the aggregate size and growth of unreported economies, we conclude that the sensitivity of the results to different benchmarks, imperfect data sources and alternative specifying assumptions precludes obtaining results of sufficient accuracy and reliability to serve as effective policy guides.
pdf  study  economics  micro  evidence-based  data  europe  nordic  scale  class  compensation  money  monetary-fiscal  political-econ  redistribution  taxes  madisonian  inequality  history  mostly-modern  natural-experiment  empirical  🎩  cocktail  correlation  models  supply-demand  GT-101  crooked  elite  vampire-squid  nationalism-globalism  multi  pro-rata  usa  time-series  trends  world-war  cold-war  government  todo  planning  long-term  trivia  law  crime  criminology  estimate  speculation  measurement  labor  macro  econ-metrics  wealth  stock-flow  time  density  criminal-justice  frequency  dark-arts  traces  evidence 
october 2017 by nhaliday
Evidence-based | West Hunter
The central notion of evidence-based medicine is that our understanding of human biology is imperfect. Some of the ideas we come up with for treating and preventing disease are effective, but most are not, indeed worse than useless. So we need careful, rigorous statistical studies before implementing those ideas on a wide scale. A good example of doing this the wrong way was when doctors started recommending having babies sleep prone, which roughly doubled the incidence of sudden infant death syndrome for the next several decades.

It seems to me that our understanding of psychology, sociology, economics, political science, and education is at least as imperfect as our understanding of biomedicine.

https://westhunt.wordpress.com/2015/01/24/evidence-based/#comment-65904
“Measure twice, cut once” – can’t get much more elitist than that!

Carefully testing innovations on a small scale before widely implementing them is pretty much the opposite of what self-appointed elites have done. Are you deef or something?

https://westhunt.wordpress.com/2015/01/24/evidence-based/#comment-66035
To the extent that they diverge from accepted best practice, physicians, on average, add negative value. I’ve seen this in action, and statistical studies back it up. In other words, Gregory House is a fictional character.
west-hunter  scitariat  discussion  truth  westminster  social-science  academia  psychology  social-psych  sociology  economics  polisci  education  medicine  meta:medicine  evidence-based  empirical  elite  technocracy  cochrane  best-practices  marginal  multi  poast  vampire-squid  humility  reason  ability-competence  the-watchers 
september 2017 by nhaliday
Medicine as a pseudoscience | West Hunter
The idea that venesection was a good thing, or at least not so bad, on the grounds that one in a few hundred people have hemochromatosis (in Northern Europe) reminds me of the people who don’t wear a seatbelt, since it would keep them from being thrown out of their convertible into a waiting haystack, complete with nubile farmer’s daughter. Daughters. It could happen. But it’s not the way to bet.

Back in the good old days, Charles II, age 53, had a fit one Sunday evening, while fondling two of his mistresses.

Monday they bled him (cupping and scarifying) of eight ounces of blood. Followed by an antimony emetic, vitriol in peony water, purgative pills, and a clyster. Followed by another clyster after two hours. Then syrup of blackthorn, more antimony, and rock salt. Next, more laxatives, white hellebore root up the nostrils. Powdered cowslip flowers. More purgatives. Then Spanish Fly. They shaved his head and stuck blistering plasters all over it, plastered the soles of his feet with tar and pigeon-dung, then said good-night.

...

Friday. The king was worse. He tells them not to let poor Nelly starve. They try the Oriental Bezoar Stone, and more bleeding. Dies at noon.

Most people didn’t suffer this kind of problem with doctors, since they never saw one. Charles had six. Now Bach and Handel saw the same eye surgeon, John Taylor – who blinded both of them. Not everyone can put that on his resume!

You may wonder how medicine continued to exist, if it had a negative effect, on the whole. There’s always the placebo effect – at least there would be, if it existed. Any real placebo effect is very small: I’d guess exactly zero. But there is regression to the mean. You see the doctor when you’re feeling worse than average – and afterwards, if he doesn’t kill you outright, you’re likely to feel better. Which would have happened whether you’d seen him or not, but they didn’t often do RCTs back in the day – I think James Lind was the first (1747).
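The regression-to-the-mean point can be demonstrated directly. A toy simulation (all numbers arbitrary): "felt severity" on two days is just a stable personal baseline plus daily noise, no treatment occurs anywhere, and we look only at the people who felt bad enough to "see the doctor" on day 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative model: each person's felt severity = stable baseline + daily noise.
baseline = rng.normal(0, 1, size=n)
day1 = baseline + rng.normal(0, 1, size=n)
day2 = baseline + rng.normal(0, 1, size=n)  # no treatment occurs at all

# People "see the doctor" only when they feel unusually bad on day 1.
sick = day1 > 1.5
improvement = day1[sick].mean() - day2[sick].mean()
print(f"mean severity drop among the 'treated', with zero treatment: {improvement:.2f}")
```

Those selected for feeling unusually bad improve substantially on average with zero intervention, which is the whole illusion: the doctor gets credit for the noise washing out.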

Back in the late 19th century, Christian Scientists did better than others when sick, because they didn’t believe in medicine. For reasons I think mistaken, because Mary Baker Eddy rejected the reality of the entire material world, but hey, it worked. Parenthetically, what triggered all that New Age nonsense in 19th century New England? Hash?

This did not change until fairly recently. Sometime in the early 20th medicine, clinical medicine, what doctors do, hit break-even. Now we can’t do without it. I wonder if there are, or will be, other examples of such a pile of crap turning (mostly) into a real science.

good tweet: https://twitter.com/bowmanthebard/status/897146294191390720
The brilliant GP I've had for 35+ years has retired. How can I find another one who meets my requirements?

1 is overweight
2 drinks more than officially recommended amounts
3 has an amused, tolerant attitude to human failings
4 is well aware that we're all going to die anyway, & there are better or worse ways to die
5 has a healthy skeptical attitude to mainstream medical science
6 is wholly dismissive of "alternative" medicine
7 believes in evolution
8 thinks most diseases get better without intervention, & knows the dangers of false positives
9 understands the base rate fallacy

EconPapers: Was Civil War Surgery Effective?: http://econpapers.repec.org/paper/htrhcecon/444.htm
contra Greg Cochran:
To shed light on the subject, I analyze a data set created by Dr. Edmund Andrews, a Civil war surgeon with the 1st Illinois Light Artillery. Dr. Andrews’s data can be rendered into an observational data set on surgical intervention and recovery, with controls for wound location and severity. The data also admits instruments for the surgical decision. My analysis suggests that Civil War surgery was effective, and increased the probability of survival of the typical wounded soldier, with average treatment effect of 0.25-0.28.

Medical Prehistory: https://westhunt.wordpress.com/2016/03/14/medical-prehistory/
What ancient medical treatments worked?

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76878
In some very, very limited conditions, bleeding?
--
Bad for you 99% of the time.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76947
Colchicine – used to treat gout – discovered by the Ancient Greeks.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76973
Dracunculiasis (Guinea worm)
Wrap the emerging end of the worm around a stick and slowly pull it out.
(3,500 years later, this remains the standard treatment.)
https://en.wikipedia.org/wiki/Ebers_Papyrus

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76971
Some of the progress is from formal medicine, most is from civil engineering, better nutrition ( ag science and physical chemistry), less crowded housing.

Nurses vs doctors: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/
Medicine, the things that doctors do, was an ineffective pseudoscience until fairly recently. Until 1800 or so, they were wrong about almost everything. Bleeding, cupping, purging, the four humors – useless. In the 1800s, some began to realize that they were wrong, and became medical nihilists that improved outcomes by doing less. Some patients themselves came to this realization, as when Civil War casualties hid from the surgeons and had better outcomes. Sometime in the early 20th century, MDs reached break-even, and became an increasingly positive influence on human health. As Lewis Thomas said, medicine is the youngest science.

Nursing, on the other hand, has always been useful. Just making sure that a patient is warm and nourished when too sick to take care of himself has helped many survive. In fact, some of the truly crushing epidemics have been greatly exacerbated when there were too few healthy people to take care of the sick.

Nursing must be old, but it can’t have existed forever. Whenever it came into existence, it must have changed the selective forces acting on the human immune system. Before nursing, being sufficiently incapacitated would have been uniformly fatal – afterwards, immune responses that involved a period of incapacitation (with eventual recovery) could have been selectively favored.

when MDs broke even: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/#comment-58981
I’d guess the 1930s. Lewis Thomas thought that he was living through big changes. They had a working serum therapy for lobar pneumonia (antibody-based). They had many new vaccines (diphtheria in 1923, whooping cough in 1926, BCG and tetanus in 1927, yellow fever in 1935, typhus in 1937). Vitamins had been mostly worked out. Insulin was discovered in 1929. Blood transfusions. The sulfa drugs, first broad-spectrum antibiotics, showed up in 1935.

DALYs per doctor: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/
The disability-adjusted life year (DALY) is a measure of overall disease burden – the number of years lost. I’m wondering just how much harm premodern medicine did, per doctor. How many healthy years of life did a typical doctor destroy (net) in past times?

...

It looks as if the average doctor (in Western medicine) killed a bunch of people over his career (when contrasted with doing nothing). In the Charles Manson class.

Eventually the market saw through this illusion. Only took a couple of thousand years.
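The DALYs-per-doctor question invites a Fermi estimate. A toy version of the arithmetic, where every input is an invented placeholder rather than a number from the post:

```python
# Back-of-envelope sketch of the DALYs-per-doctor question.
# Every number here is an invented placeholder, not from the post:
# the point is only the shape of the arithmetic.
career_years = 40          # length of a premodern medical career (assumed)
patients_per_year = 500    # consultations per year (assumed)
p_net_harm = 0.01          # fraction of encounters causing net serious harm (assumed)
dalys_per_harm = 10        # healthy years lost per such harm (assumed)

net_dalys = career_years * patients_per_year * p_net_harm * dalys_per_harm
print(f"net healthy life-years destroyed per doctor: {net_dalys:,.0f}")  # 2,000 under these guesses
```

Swap in your own guesses; the structure (encounters × harm rate × harm size) is what makes the "Charles Manson class" comparison checkable rather than rhetorical.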

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100741
That a very large part of healthcare spending is done for non-health reasons. He has a chapter on this in his new book, also check out his paper “Showing That You Care: The Evolution of Health Altruism” http://mason.gmu.edu/~rhanson/showcare.pdf
--
I ran into too much stupidity to finish the article. Hanson’s a loon. For example when he talks about the paradox of blacks being more sentenced on drug offenses than whites although they use drugs at similar rate. No paradox: guys go to the big house for dealing, not for using. Where does he live – Mars?

I had the same reaction when Hanson parroted some dipshit anthropologist arguing that the stupid things people do while drunk are due to social expectations, not really the alcohol.
Horseshit.

I don’t think that being totally unable to understand everybody around you necessarily leads to deep insights.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100744
What I’ve wondered is if there was anything that doctors did that actually was helpful and if perhaps that little bit of success helped them fool people into thinking the rest of it helped.
--
Setting bones. extracting arrows: spoon of Diocles. Colchicine for gout. Extracting the Guinea worm. Sometimes they got away with removing the stone. There must be others.
--
Quinine is relatively recent: post-1500. Obstetrical forceps also. Caesarean deliveries were almost always fatal to the mother until fairly recently.

Opium has been around for a long while : it works.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100839
If pre-modern medicine was indeed worse than useless – how do you explain no one noticing that patients who get expensive treatments are worse off than those who didn’t?
--
were worse off. People are kinda dumb – you’ve noticed?
--
My impression is that while people may be “kinda dumb”, ancient customs typically aren’t.
Even if we assume that all people who lived prior to the 19th century were too dumb to make the rational observation, wouldn’t you expect this ancient practice to be subject to selective pressure?
--
Your impression is wrong. Do you think that there is some slick reason for Carthaginians incinerating their first-born?

Theodoric of York, bloodletting: https://www.youtube.com/watch?v=yvff3TViXmY

details on blood-letting and hemochromatosis: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100746

Starting Over: https://westhunt.wordpress.com/2018/01/23/starting-over/
Looking back on it, human health would have … [more]
west-hunter  scitariat  discussion  ideas  medicine  meta:medicine  science  realness  cost-benefit  the-trenches  info-dynamics  europe  the-great-west-whale  history  iron-age  the-classics  mediterranean  medieval  early-modern  mostly-modern  🌞  harvard  aphorism  rant  healthcare  regression-to-mean  illusion  public-health  multi  usa  northeast  pre-ww2  checklists  twitter  social  albion  ability-competence  study  cliometrics  war  trivia  evidence-based  data  intervention  effect-size  revolution  speculation  sapiens  drugs  antiquity  lived-experience  list  survey  questions  housing  population  density  nutrition  wiki  embodied  immune  evolution  poast  chart  markets  civil-liberty  randy-ayndy  market-failure  impact  scale  pro-rata  estimate  street-fighting  fermi  marginal  truth  recruiting  alt-inst  academia  social-science  space  physics  interdisciplinary  ratty  lesswrong  autism  👽  subculture  hanson  people  track-record  crime  criminal-justice  criminology  race  ethanol  error  video  lol  comedy  tradition  institutions  iq  intelligence  MENA  impetus  legacy 
august 2017 by nhaliday
Human conversational behavior | SpringerLink
Dunbar et al

Observational studies of human conversations in relaxed social settings suggest that these consist predominantly of exchanges of social information (mostly concerning personal relationships and experiences). Most of these exchanges involve information about the speaker or third parties, and very few involve critical comments or the soliciting or giving of advice. Although a policing function may still be important (e.g., for controlling social cheats), it seems that this does not often involve overt criticism of other individuals’ behavior. The few significant differences between the sexes in the proportion of conversation time devoted to particular topics are interpreted as reflecting females’ concerns with networking and males’ concerns with self-display in what amount to a conventional mating lek.

What Shall We Talk about in Farsi?: https://link.springer.com/article/10.1007/s12110-017-9300-4
How Men And Women Differ: Gender Differences in Communication Styles, Influence Tactics, and Leadership Styles: http://scholarship.claremont.edu/cgi/viewcontent.cgi?article=1521&context=cmc_theses
Gender differences in conversation topics, 1922–1990: https://link.springer.com/article/10.1007/BF00289744
study  sociology  anthropology  psychology  social-psych  language  speaking  communication  pdf  piracy  gender  gender-diff  impro  distribution  evopsych  multi  leadership  iran  comparison  culture  society  ethnography  stylized-facts  evidence-based  history  mostly-modern  org:mag  org:ngo  letters  theory-of-mind 
august 2017 by nhaliday
The “Hearts and Minds” Fallacy: Violence, Coercion, and Success in Counterinsurgency Warfare | International Security | MIT Press Journals
The U.S. prescription for success has had two main elements: to support liberalizing, democratizing reforms to reduce popular grievances; and to pursue a military strategy that carefully targets insurgents while avoiding harming civilians. An analysis of contemporaneous documents and interviews with participants in three cases held up as models of the governance approach—Malaya, Dhofar, and El Salvador—shows that counterinsurgency success is the result of a violent process of state building in which elites contest for power, popular interests matter little, and the government benefits from uses of force against civilians.

https://twitter.com/foxyforecaster/status/893049155337244672
https://archive.is/zhOXD
this is why liberal states mostly fail in counterinsurgency wars

http://www.cbsnews.com/news/commentary-why-are-we-still-in-afghanistan/

contrary study:
Nation Building Through Foreign Intervention: Evidence from Discontinuities in Military Strategies: https://academic.oup.com/qje/advance-article/doi/10.1093/qje/qjx037/4110419
This study uses discontinuities in U.S. strategies employed during the Vietnam War to estimate their causal impacts. It identifies the effects of bombing by exploiting rounding thresholds in an algorithm used to target air strikes. Bombing increased the military and political activities of the communist insurgency, weakened local governance, and reduced noncommunist civic engagement. The study also exploits a spatial discontinuity across neighboring military regions that pursued different counterinsurgency strategies. A strategy emphasizing overwhelming firepower plausibly increased insurgent attacks and worsened attitudes toward the U.S. and South Vietnamese government, relative to a more hearts-and-minds-oriented approach. JEL Codes: F35, F51, F52
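The bombing-threshold identification is a regression discontinuity design. A minimal sharp-RD sketch on synthetic data (all numbers invented, not from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Synthetic sharp regression discontinuity (all numbers invented):
# a continuous "score" is rounded, and units at or above the cutoff are treated.
score = rng.uniform(-1, 1, size=n)
treated = score >= 0.0
outcome = 2.0 * score + 1.5 * treated + rng.normal(0, 1, size=n)  # true jump = 1.5

# Local comparison of means just above vs. just below the cutoff,
# in a narrow bandwidth where the smooth trend contributes little.
h = 0.05
above = outcome[(score >= 0) & (score < h)].mean()
below = outcome[(score < 0) & (score >= -h)].mean()
print(f"estimated discontinuity: {above - below:.2f}")  # near the true jump of 1.5
```

Real designs, like the one in the QJE paper, use local polynomial fits and data-driven bandwidths rather than raw means, but the identifying idea is the same: units just above and just below the rounding threshold are otherwise comparable, so the jump in outcomes estimates the causal effect.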

anecdote:
Military Adventurer Raymond Westerling On How To Defeat An Insurgency: http://www.socialmatter.net/2018/03/12/military-adventurer-raymond-westerling-on-how-to-defeat-an-insurgency/
study  war  meta:war  military  defense  terrorism  MENA  strategy  tactics  cynicism-idealism  civil-liberty  kumbaya-kult  foreign-policy  realpolitik  usa  the-great-west-whale  occident  democracy  antidemos  institutions  leviathan  government  elite  realness  multi  twitter  social  commentary  stylized-facts  evidence-based  objektbuch  attaq  chart  contrarianism  scitariat  authoritarianism  nl-and-so-can-you  westminster  iraq-syria  polisci  🎩  conquest-empire  news  org:lite  power  backup  martial  nietzschean  pdf  piracy  britain  asia  developing-world  track-record  expansionism  peace-violence  interests  china  race  putnam-like  anglosphere  latin-america  volo-avolo  cold-war  endogenous-exogenous  shift  natural-experiment  rounding  gnon  org:popup  europe  germanic  japan  history  mostly-modern  world-war  examples  death  nihil  dominant-minority  tribalism  ethnocentrism  us-them  letters 
august 2017 by nhaliday
CURRENT CONCEPTS IN MUSCLE STRETCHING FOR EXERCISE AND REHABILITATION
Three muscle stretching techniques are frequently described in the literature: Static, Dynamic, and Pre-Contraction stretches (Figure 2).

Static stretching is effective at increasing ROM.

Unfortunately, however, static stretching as part of a warm-up immediately prior to exercise has been shown to be detrimental to dynamometer-measured muscle strength19–29 and performance in running and jumping.30–39 The loss of strength resulting from acute static stretching has been termed “stretch-induced strength loss.”3 The specific causes of this type of stretch-induced loss in strength are not clear; some suggest neural factors,31,40 while others suggest mechanical factors.19,23

In general, it appears that static stretching is most beneficial for athletes requiring flexibility for their sports (e.g. gymnastics, dance, etc.). Dynamic stretching may be better suited for athletes requiring running or jumping performance30 during their sport such as basketball players or sprinters.

Stretching has not been shown to be effective at reducing the incidence of overall injuries.88 While there is some evidence of stretching reducing musculotendinous injuries,88 more evidence is needed to determine if stretching programs alone can reduce muscular injuries.3
study  health  fitness  fitsci  evidence-based  running  embodied  sports  survey  summary  biomechanics  endurance  embodied-pack 
august 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Alzheimers | West Hunter
Some disease syndromes almost have to be caused by pathogens – for example, any with a fitness impact (prevalence x fitness reduction) > 2% or so, too big to be caused by mutational pressure. I don’t think that this is the case for AD: it hits so late in life that the fitness impact is minimal. However, that hardly means that it can’t be caused by a pathogen or pathogens – a big fraction of all disease syndromes are, including many that strike in old age. That possibility is always worth checking out, not least because infectious diseases are generally easier to prevent and/or treat.

There is new work that strongly suggests that pathogens are the root cause. It appears that the amyloid is an antimicrobial peptide. Amyloid-beta binds to invading microbes and then surrounds and entraps them. ‘When researchers injected Salmonella into mice’s hippocampi, a brain area damaged in Alzheimer’s, A-beta quickly sprang into action. It swarmed the bugs and formed aggregates called fibrils and plaques. “Overnight you see the plaques throughout the hippocampus where the bugs were, and then in each single plaque is a single bacterium,” Tanzi says. ‘

obesity and pathogens: https://westhunt.wordpress.com/2016/05/29/alzheimers/#comment-79757
not sure about this guy, but interesting: https://westhunt.wordpress.com/2016/05/29/alzheimers/#comment-79748
http://perfecthealthdiet.com/2010/06/is-alzheimer%E2%80%99s-caused-by-a-bacterial-infection-of-the-brain/

https://westhunt.wordpress.com/2016/12/13/the-twelfth-battle-of-the-isonzo/
All too often we see large, long-lasting research efforts that never produce, never achieve their goal.

For example, the amyloid hypothesis [accumulation of amyloid-beta oligomers is the cause of Alzheimers] has been dominant for more than 20 years, and has driven development of something like 15 drugs. None of them have worked. At the same time the well-known increased risk from APOe4 has been almost entirely ignored, even though it ought to be a clue to the cause.

In general, when a research effort has been spinning its wheels for a generation or more, shouldn’t we try something different? We could at least try putting a fraction of those research dollars into alternative approaches that have not yet failed repeatedly.

Mostly this applies to research efforts that at least wish they were science. ‘educational research’ is in a special class, and I hardly know what to recommend. Most of the remedial actions that occur to me violate one or more of the Geneva conventions.

APOe4 related to lymphatic system: https://en.wikipedia.org/wiki/Apolipoprotein_E

https://westhunt.wordpress.com/2012/03/06/spontaneous-generation/#comment-2236
Look, if I could find out the sort of places that I usually misplace my keys – if I did, which I don’t – I could find the keys more easily the next time I lose them. If you find out that practitioners of a given field are not very competent, it marks that field as a likely place to look for relatively easy discovery. Thus medicine is a promising field, because on the whole doctors are not terribly good investigators. For example, none of the drugs developed for Alzheimers have worked at all, which suggests that our ideas on the causation of Alzheimers are likely wrong. Which suggests that it may (repeat may) be possible to make good progress on Alzheimers, either by an entirely empirical approach, which is way underrated nowadays, or by dumping the current explanation, finding a better one, and applying it.

You could start by looking at basic notions of field X and asking yourself: How do we really know that? Is there serious statistical evidence? Does that notion even accord with basic theory? This sort of checking is entirely possible. In most of the social sciences, we don’t, there isn’t, and it doesn’t.

Hygiene and the world distribution of Alzheimer’s disease: Epidemiological evidence for a relationship between microbial environment and age-adjusted disease burden: https://academic.oup.com/emph/article/2013/1/173/1861845/Hygiene-and-the-world-distribution-of-Alzheimer-s

Amyloid-β peptide protects against microbial infection in mouse and worm models of Alzheimer’s disease: http://stm.sciencemag.org/content/8/340/340ra72

Fungus, the bogeyman: http://www.economist.com/news/science-and-technology/21676754-curious-result-hints-possibility-dementia-caused-fungal
Fungus and dementia
paper: http://www.nature.com/articles/srep15015

Porphyromonas gingivalis in Alzheimer’s disease brains: Evidence for disease causation and treatment with small-molecule inhibitors: https://advances.sciencemag.org/content/5/1/eaau3333
west-hunter  scitariat  disease  parasites-microbiome  medicine  dementia  neuro  speculation  ideas  low-hanging  todo  immune  roots  the-bones  big-surf  red-queen  multi  🌞  poast  obesity  strategy  info-foraging  info-dynamics  institutions  meta:medicine  social-science  curiosity  🔬  science  meta:science  meta:research  wiki  epidemiology  public-health  study  arbitrage  alt-inst  correlation  cliometrics  path-dependence  street-fighting  methodology  nibble  population-genetics  org:nat  health  embodied  longevity  aging  org:rec  org:biz  org:anglo  news  neuro-nitgrit  candidate-gene  nutrition  diet  org:health  explanans  fashun  empirical  theory-practice  ability-competence  dirty-hands  education  aphorism  truth  westminster  innovation  evidence-based  religion  prudence  track-record  problem-solving  dental  being-right  prioritizing 
july 2017 by nhaliday
Language, Religion, and Ethnic Civil War | Journal of Conflict Resolution - Nils-Christian Bormann, Lars-Erik Cederman, Manuel Vogt, 2017
Our findings indicate that intrastate conflict is more likely within linguistic dyads than among religious ones. Moreover, we find no support for the thesis that Muslim groups are particularly conflict-prone.
study  foreign-policy  realpolitik  polisci  war  meta:war  peace-violence  diversity  race  ethnocentrism  us-them  world  evidence-based  language  religion  putnam-like 
june 2017 by nhaliday
On the effects of inequality on economic growth | Nintil
After the discussion above, what should one think about the relationship between inequality and growth?

For starters, that the consensus of the literature points to our lack of knowledge, and the need to be very careful when studying these phenomena. As of today there is no solid consensus on the effects of inequality on growth. Tentatively, on the grounds of Neves et al.’s meta-analysis, we can conclude that the impact of inequality on developed countries is economically insignificant. This means that one can claim that inequality is good, bad, or neutral for growth as long as the effects claimed are small and one talks about developed countries. For developing countries, the relationships are more negative.

http://squid314.livejournal.com/320672.html
I recently finished The Spirit Level, subtitled "Why More Equal Societies Almost Always Do Better", although "Five Million Different Scatter Plot Graphs Plus Associated Commentary" would also have worked. It was a pretty thorough manifesto for the best kind of leftism: the type that forgoes ideology and a priori arguments in exchange for a truckload of statistics showing that their proposed social remedies really work.

Inequality: some people know what they want to find: https://www.adamsmith.org/blog/economics/inequality-some-people-know-what-they-want-to-find

Inequality doesn’t matter: a primer: https://www.adamsmith.org/blog/inequality-doesnt-matter-a-primer

Inequality and visibility of wealth in experimental social networks: https://www.nature.com/articles/nature15392
- Akihiro Nishi, Hirokazu Shirado, David G. Rand & Nicholas A. Christakis

We show that wealth visibility facilitates the downstream consequences of initial inequality—in initially more unequal situations, wealth visibility leads to greater inequality than when wealth is invisible. This result reflects a heterogeneous response to visibility in richer versus poorer subjects. We also find that making wealth visible has adverse welfare consequences, yielding lower levels of overall cooperation, inter-connectedness, and wealth. High initial levels of economic inequality alone, however, have relatively few deleterious welfare effects.

https://twitter.com/NAChristakis/status/952315243572719617
https://archive.is/DpyAx
Our own work has shown that the *visibility* of inequality, more than the inequality per se, may be especially corrosive to the social fabric. https://www.nature.com/articles/nature15392 … I wonder if @WalterScheidel historical data sheds light on this idea? end 5/
ratty  unaffiliated  commentary  article  inequality  egalitarianism-hierarchy  economics  macro  growth-econ  causation  meta-analysis  study  summary  links  albion  econotariat  org:ngo  randy-ayndy  nl-and-so-can-you  survey  policy  wonkish  spock  nitty-gritty  evidence-based  s:*  🤖  🎩  world  developing-world  group-level  econ-metrics  chart  gray-econ  endo-exo  multi  yvain  ssc  books  review  critique  contrarianism  sociology  polisci  politics  left-wing  correlation  null-result  race  culture  society  anglosphere  protestant-catholic  regional-scatter-plots  big-picture  compensation  meaningness  cost-benefit  class  mobility  wealth  org:anglo  rhetoric  ideology  envy  money  endogenous-exogenous  org:nat  journos-pundits  anthropology  stylized-facts  open-closed  branches  walter-scheidel  broad-econ  twitter  social  discussion  backup  public-goodish  humility  charity 
june 2017 by nhaliday
Electroconvulsive therapy: a crude, controversial out-of-favor treatme – Coyne of the Realm
various evidence that ECT works

I will soon be offering e-books providing skeptical looks at mindfulness and positive psychology, as well as scientific writing courses on the web as I have been doing face-to-face for almost a decade.

https://www.coyneoftherealm.com/collections/frontpage

Coyne of the Realm Takes a Skeptical Look at Mindfulness — Table of Contents: https://www.coyneoftherealm.com/pages/coyne-of-the-realm-takes-a-skeptical-look-at-mindfulness-table-of-contents

Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation: http://journals.sagepub.com/doi/10.1177/1745691617709589
Where's the Proof That Mindfulness Meditation Works?: https://www.scientificamerican.com/article/wheres-the-proof-that-mindfulness-meditation-works1/
scitariat  psychology  cog-psych  psychiatry  medicine  evidence-based  mindful  the-monster  announcement  attention  regularizer  contrarianism  meta-analysis  multi  critique  books  attaq  replication  realness  study  news  org:mag  org:sci  popsci  absolute-relative  backup  intervention  psycho-atoms 
june 2017 by nhaliday
Educational Romanticism & Economic Development | pseudoerasmus
https://twitter.com/GarettJones/status/852339296358940672
deleted

https://twitter.com/GarettJones/status/943238170312929280
https://archive.is/p5hRA

Did Nations that Boosted Education Grow Faster?: http://econlog.econlib.org/archives/2012/10/did_nations_tha.html
On average, no relationship. The trendline points down slightly, but for the time being let's just call it a draw. It's a well-known fact that countries that started the 1960's with high education levels grew faster (example), but this graph is about something different. This graph shows that countries that increased their education levels did not grow faster.

Where has all the education gone?: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1016.2704&rep=rep1&type=pdf

https://twitter.com/GarettJones/status/948052794681966593
https://archive.is/kjxqp

https://twitter.com/GarettJones/status/950952412503822337
https://archive.is/3YPic

https://twitter.com/pseudoerasmus/status/862961420065001472
http://hanushek.stanford.edu/publications/schooling-educational-achievement-and-latin-american-growth-puzzle

The Case Against Education: What's Taking So Long, Bryan Caplan: http://econlog.econlib.org/archives/2015/03/the_case_agains_9.html

The World Might Be Better Off Without College for Everyone: https://www.theatlantic.com/magazine/archive/2018/01/whats-college-good-for/546590/
Students don't seem to be getting much out of higher education.
- Bryan Caplan

College: Capital or Signal?: http://www.economicmanblog.com/2017/02/25/college-capital-or-signal/
After his review of the literature, Caplan concludes that roughly 80% of the earnings effect from college comes from signalling, with only 20% the result of skill building. Put this together with his earlier observations about the private returns to college education, along with its exploding cost, and Caplan thinks that the social returns are negative. The policy implications of this will come as very bitter medicine for friends of Bernie Sanders.

Doubting the Null Hypothesis: http://www.arnoldkling.com/blog/doubting-the-null-hypothesis/

Is higher education/college in the US more about skill-building or about signaling?: https://www.quora.com/Is-higher-education-college-in-the-US-more-about-skill-building-or-about-signaling
ballpark: 50% signaling, 30% selection, 20% addition to human capital
more signaling in art history, more human capital in engineering, more selection in philosophy

Econ Duel! Is Education Signaling or Skill Building?: http://marginalrevolution.com/marginalrevolution/2016/03/econ-duel-is-education-signaling-or-skill-building.html
Marginal Revolution University has a brand new feature, Econ Duel! Our first Econ Duel features Tyler and me debating the question, Is education more about signaling or skill building?

Against Tulip Subsidies: https://slatestarcodex.com/2015/06/06/against-tulip-subsidies/

https://www.overcomingbias.com/2018/01/read-the-case-against-education.html

https://nintil.com/2018/02/05/notes-on-the-case-against-education/

https://www.nationalreview.com/magazine/2018-02-19-0000/bryan-caplan-case-against-education-review

https://spottedtoad.wordpress.com/2018/02/12/the-case-against-education/
Most American public school kids are low-income; about half are non-white; most are fairly low skilled academically. For most American kids, the majority of the waking hours they spend not engaged with electronic media are at school; the majority of their in-person relationships are at school; the most important relationship they have with an adult who is not their parent is with their teacher. For their parents, the most important in-person source of community is also their kids’ school. Young people need adult mirrors, models, mentors, and in an earlier era these might have been provided by extended families, but in our own era this all falls upon schools.

Caplan gestures towards work and earlier labor force participation as alternatives to school for many if not all kids. And I empathize: the years that I would point to as making me who I am were ones where I was working, not studying. But they were years spent working in schools, as a teacher or assistant. If schools did not exist, is there an alternative that we genuinely believe would arise to draw young people into the life of their community?

...

It is not an accident that the state that spends the least on education is Utah, where the LDS church can take up some of the slack for schools, while next door Wyoming spends almost the most of any state at $16,000 per student. Education is now the one surviving binding principle of the society as a whole, the one black box everyone will agree to, and so while you can press for less subsidization of education by government, and for privatization of costs, as Caplan does, there’s really nothing people can substitute for it. This is partially about signaling, sure, but it’s also because outside of schools and a few religious enclaves our society is but a darkling plain beset by winds.

This doesn’t mean that we should leave Caplan’s critique on the shelf. Much of education is focused on an insane, zero-sum race for finite rewards. Much of schooling does push kids, parents, schools, and school systems towards a solution ad absurdum, where anything less than 100 percent of kids headed to a doctorate and the big coding job in the sky is a sign of failure of everyone concerned.

But let’s approach this with an eye towards the limits of the possible and the reality of diminishing returns.

https://westhunt.wordpress.com/2018/01/27/poison-ivy-halls/
https://westhunt.wordpress.com/2018/01/27/poison-ivy-halls/#comment-101293
The real reason the left would support Moander: the usual reason. because he’s an enemy.

https://westhunt.wordpress.com/2018/02/01/bright-college-days-part-i/
I have a problem in thinking about education, since my preferences and personal educational experience are atypical, so I can’t just gut it out. On the other hand, knowing that puts me ahead of a lot of people that seem convinced that all real people, including all Arab cabdrivers, think and feel just as they do.

One important fact, relevant to this review. I don’t like Caplan. I think he doesn’t understand – can’t understand – human nature, and although that sometimes confers a different and interesting perspective, it’s not a royal road to truth. Nor would I want to share a foxhole with him: I don’t trust him. So if I say that I agree with some parts of this book, you should believe me.

...

Caplan doesn’t talk about possible ways of improving knowledge acquisition and retention. Maybe he thinks that’s impossible, and he may be right, at least within a conventional universe of possibilities. That’s a bit outside of his thesis, anyhow. Me it interests.

He dismisses objections from educational psychologists who claim that studying a subject improves you in subtle ways even after you forget all of it. I too find that hard to believe. On the other hand, it looks to me as if poorly-digested fragments of information picked up in college have some effect on public policy later in life: it is no coincidence that most prominent people in public life (at a given moment) share a lot of the same ideas. People are vaguely remembering the same crap from the same sources, or related sources. It’s correlated crap, which has a much stronger effect than random crap.

These widespread new ideas are usually wrong. They come from somewhere – in part, from higher education. Along this line, Caplan thinks that college has only a weak ideological effect on students. I don’t believe he is correct. In part, this is because most people use a shifting standard: what’s liberal or conservative gets redefined over time. At any given time a population is roughly half left and half right – but the content of those labels changes a lot. There’s a shift.

https://westhunt.wordpress.com/2018/02/01/bright-college-days-part-i/#comment-101492
I put it this way, a while ago: “When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”
--
You just explained the Credo quia absurdum doctrine. I always wondered if it was nonsense. It is not.
--
Someone on twitter caught it first – got all the way to “sliding down the razor blade of life”. Which I explained is now called “transitioning”

What Catholics believe: https://theweek.com/articles/781925/what-catholics-believe
We believe all of these things, fantastical as they may sound, and we believe them for what we consider good reasons, well attested by history, consistent with the most exacting standards of logic. We will profess them in this place of wrath and tears until the extraordinary event referenced above, for which men and women have hoped and prayed for nearly 2,000 years, comes to pass.

https://westhunt.wordpress.com/2018/02/05/bright-college-days-part-ii/
According to Caplan, employers are looking for conformity, conscientiousness, and intelligence. They use completion of high school, or completion of college as a sign of conformity and conscientiousness. College certainly looks as if it’s mostly signaling, and it’s hugely expensive signaling, in terms of college costs and foregone earnings.

But inserting conformity into the merit function is tricky: things become important signals… because they’re important signals. Otherwise useful actions are contraindicated because they’re “not done”. For example, test scores convey useful information. They could help show that an applicant is smart even though he attended a mediocre school – the same role they play in college admissions. But employers seldom request test scores, and although applicants may provide them, few do. Caplan says “The word on the street… [more]
econotariat  pseudoE  broad-econ  economics  econometrics  growth-econ  education  human-capital  labor  correlation  null-result  world  developing-world  commentary  spearhead  garett-jones  twitter  social  pic  discussion  econ-metrics  rindermann-thompson  causation  endo-exo  biodet  data  chart  knowledge  article  wealth-of-nations  latin-america  study  path-dependence  divergence  🎩  curvature  microfoundations  multi  convexity-curvature  nonlinearity  hanushek  volo-avolo  endogenous-exogenous  backup  pdf  people  policy  monetary-fiscal  wonkish  cracker-econ  news  org:mag  local-global  higher-ed  impetus  signaling  rhetoric  contrarianism  domestication  propaganda  ratty  hanson  books  review  recommendations  distribution  externalities  cost-benefit  summary  natural-experiment  critique  rent-seeking  mobility  supply-demand  intervention  shift  social-choice  government  incentives  interests  q-n-a  street-fighting  objektbuch  X-not-about-Y  marginal-rev  c:***  qra  info-econ  info-dynamics  org:econlib  yvain  ssc  politics  medicine  stories 
april 2017 by nhaliday
The Causal Impact of Human Capital on R&D and Productivity: Evidence from the United States
We instrument our measures of schooling by using the variation in compulsory schooling laws and differences in mobilization rates in WWII, which we relate to the education benefits provided by the GI Bill Act (1944). This novel instrument provides a clean source of variation in the costs of attending college. Two-stage least squares regressions find no effect of the share of population with secondary schooling on outcomes such as R&D per worker or TFP growth. On the other hand, the share of population with tertiary education has a significant effect on both R&D per worker and TFP growth.
pdf  study  economics  growth-econ  education  higher-ed  human-capital  causation  endo-exo  natural-experiment  usa  history  mostly-modern  econ-productivity  labor  econometrics  intervention  stylized-facts  world-war  supply-demand  policy  wonkish  innovation  input-output  evidence-based  endogenous-exogenous 
april 2017 by nhaliday
An updated meta-analysis of the ego depletion effect | SpringerLink
The results suggest that attention video should be an ineffective depleting task, whereas emotion video should be the most effective one. Future studies are needed to confirm the effectiveness of each depletion task revealed by the current meta-analysis.
study  psychology  cog-psych  replication  meta-analysis  intervention  hmm  attention  emotion  the-monster  stamina  ego-depletion  discipline  self-control  evidence-based  solid-study 
april 2017 by nhaliday
New studies show the cost of student laptop use in lecture classes - Daniel Willingham
In-lecture media use and academic performance: Does subject area matter?: http://www.sciencedirect.com/science/article/pii/S0747563217304983
The study found that while a significant negative correlation exists between in-lecture media use and academic performance for students in the Arts and Social Sciences, the same pattern is not observable for students in the faculties of Engineering, Economic and Management Sciences, and Medical and Health Sciences.

hmm

Why you should take notes by hand — not on a laptop: https://www.vox.com/2014/6/4/5776804/note-taking-by-hand-versus-laptop
Presumably, they're using the computers to take notes, so they better remember the course material. But new research shows that if learning is their goal, using a laptop during class is a terrible idea.

It's not just because internet-connected laptops are so distracting. It's because even if students aren't distracted, the act of taking notes on a computer actually seems to interfere with their ability to remember information.

Pam Mueller and Daniel Oppenheimer, the psychologists who conducted the new research, believe it's because students on laptops usually just mindlessly type everything a professor says. Those taking notes by hand, though, have to actively listen and decide what's important — because they generally can't write fast enough to get everything down — which ultimately helps them learn.

The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking: https://linguistics.ucla.edu/people/hayes/Teaching/papers/MuellerAndOppenheimer2014OnTakingNotesByHand.pdf
scitariat  education  higher-ed  learning  data  study  summary  intervention  internet  attention  field-study  effect-size  studying  regularizer  aversion  the-monster  multi  cost-benefit  notetaking  evidence-based  news  org:lite  org:data  hi-order-bits  synthesis  spreading  contiguity-proximity 
april 2017 by nhaliday
Destined for War: Can China and the United States Escape Thucydides’s Trap? - The Atlantic
The defining question about global order for this generation is whether China and the United States can escape Thucydides’s Trap. The Greek historian’s metaphor reminds us of the attendant dangers when a rising power rivals a ruling power—as Athens challenged Sparta in ancient Greece, or as Germany did Britain a century ago. Most such contests have ended badly, often for both nations, a team of mine at the Harvard Belfer Center for Science and International Affairs has concluded after analyzing the historical record. In 12 of 16 cases over the past 500 years, the result was war. When the parties avoided war, it required huge, painful adjustments in attitudes and actions on the part not just of the challenger but also the challenged.

http://foreignpolicy.com/2017/06/09/the-thucydides-trap/
http://marginalrevolution.com/marginalrevolution/2017/06/no-thucydides-trap.html
news  org:mag  foreign-policy  realpolitik  the-classics  china  asia  usa  prediction  war  world  expansionism  current-events  history  early-modern  mostly-modern  track-record  iron-age  mediterranean  europe  competition  lee-kuan-yew  polis  sinosphere  polisci  wonkish  economics  longform  let-me-see  scale  definite-planning  chart  evidence-based  defense  nihil  the-bones  zeitgeist  great-powers  statesmen  ranking  kumbaya-kult  peace-violence  pre-ww2  multi  org:foreign  nuclear  deterrence  strategy  whiggish-hegelian  econotariat  marginal-rev  commentary  moloch  thucydides 
march 2017 by nhaliday
PsycARTICLES - Is education associated with improvements in general cognitive ability, or in specific skills?
Results indicated that the association of education with improved cognitive test scores is not mediated by g, but consists of direct effects on specific cognitive skills. These results suggest a decoupling of educational gains from increases in general intellectual capacity.

look at Model C for the coefficients

How much does education improve intelligence? A meta-analysis: https://psyarxiv.com/kymhp
Intelligence test scores and educational duration are positively correlated. This correlation can be interpreted in two ways: students with greater propensity for intelligence go on to complete more education, or a longer education increases intelligence. We meta-analysed three categories of quasi-experimental studies of educational effects on intelligence: those estimating education-intelligence associations after controlling for earlier intelligence, those using compulsory schooling policy changes as instrumental variables, and those using regression-discontinuity designs on school-entry age cutoffs. Across 142 effect sizes from 42 datasets involving over 600,000 participants, we found consistent evidence for beneficial effects of education on cognitive abilities, of approximately 1 to 5 IQ points for an additional year of education. Moderator analyses indicated that the effects persisted across the lifespan, and were present on all broad categories of cognitive ability studied. Education appears to be the most consistent, robust, and durable method yet to be identified for raising intelligence.

three study designs: control for prior IQ, exogenous policy change, and school age cutoff regression discontinuity
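The school-entry-cutoff design above can be sketched with a small simulation (a hypothetical setup, not the meta-analysis's data): birthdays are uniform over the year, children born on or before an arbitrary cutoff day get one extra year of schooling, and we compare mean test scores in a narrow window on either side of the cutoff. The 3-point effect and all parameters here are illustrative assumptions.

```python
import random

random.seed(2)

def rd_estimate(n=50_000, bandwidth=30, effect=3.0):
    # Hypothetical setup: birthdays uniform over the year; children born
    # on or before the cutoff (day 0) start school a year earlier, so by
    # test time they have one extra year of education worth `effect` points.
    older, younger = [], []
    for _ in range(n):
        day = random.randint(-182, 182)          # birthday relative to cutoff
        score = random.gauss(100, 15) + (effect if day <= 0 else 0.0)
        if -bandwidth < day <= 0:
            older.append(score)
        elif 0 < day <= bandwidth:
            younger.append(score)
    return sum(older) / len(older) - sum(younger) / len(younger)

est = rd_estimate()
print(round(est, 2))
```

Comparing only children within a month of the cutoff should recover roughly the assumed 3-point-per-year effect, since which side of the cutoff a birthday falls on is as good as random.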

https://westhunt.wordpress.com/2017/11/07/skoptsys/#comment-97601
It’s surprising that there isn’t much of a fadeout (p11) – half of the effect size is still there by age 70 (?!). That wasn’t what I expected. Maybe they’re being pulled upwards by smaller outlier studies – most of the bigger ones tend towards the lower end.

https://twitter.com/gwern/status/928308706370052098
https://archive.is/v98bd
These gains are hollow, as they acknowledge in the discussion. Examples:
albion  spearhead  scitariat  study  psychology  cog-psych  iq  large-factor  education  intervention  null-result  longitudinal  britain  anglo  psychometrics  psych-architecture  graphs  graphical-models  causation  neuro-nitgrit  effect-size  stylized-facts  direct-indirect  flexibility  input-output  evidence-based  preprint  multi  optimism  meta-analysis  west-hunter  poast  commentary  aging  marginal  europe  nordic  shift  twitter  social  backup  ratty  gwern  links  flynn  environmental-effects  debate  roots 
march 2017 by nhaliday
Peacekeeping Force: Effects of Providing Tactical Equipment to Local Law Enforcement - American Economic Association
For causal identification, we exploit exogenous variation in equipment availability and cost-shifting institutional aspects of the 1033 Program. Our results indicate that these items have generally positive effects: reduced citizen complaints, reduced assaults on officers, increased drug crime arrests, and no increases in offender deaths.

https://www.aeaweb.org/articles?id=10.1257/pol.20150478
We use temporal variations in US military expenditure and between-counties variation in the odds of receiving a positive amount of military aid to identify the causal effect of militarized policing on crime. We find that (i) military aid reduces street-level crime; (ii) the program is cost-effective; and (iii) there is evidence in favor of a deterrence mechanism.

https://www.aeaweb.org/articles?id=10.1257/pol.20150525
https://www.usna.edu/EconDept/RePEc/usn/wp/usnawp56.pdf
contradictory: http://journals.sagepub.com.sci-hub.tw/doi/abs/10.1177/2053168017712885

http://econ.tulane.edu/RePEc/pdf/tul1708.pdf
I find that the implementation of Stand Your Ground policies leads to an average of 2.75 additional black Alleged Perpetrators of Crimes being killed each month, 2.39 of whom are killed by black citizens. Additionally, I find 0.5 additional white Alleged Perpetrators are killed each month, 0.49 of whom are killed by white citizens.

A Big Test of Police Body Cameras Defies Expectations: https://www.nytimes.com/2017/10/20/upshot/a-big-test-of-police-body-cameras-defies-expectations.html
But what happens when the cameras are on the chests of police officers? The results of the largest, most rigorous study of police body cameras in the United States came out Friday morning, and they are surprising both police officers and researchers.

For seven months, just over a thousand Washington, D.C., police officers were randomly assigned cameras — and another thousand were not. Researchers tracked use-of-force incidents, civilian complaints, charging decisions and other outcomes to see if the cameras changed behavior. But on every metric, the effects were too small to be statistically significant. Officers with cameras used force and faced civilian complaints at about the same rates as officers without cameras.
study  ssc  economics  policy  politics  culture-war  military  crime  criminal-justice  contrarianism  government  natural-experiment  sociology  endo-exo  attaq  current-events  criminology  arms  multi  pdf  piracy  intervention  econometrics  🎩  law  civil-liberty  chart  order-disorder  authoritarianism  wonkish  evidence-based  news  org:rec  org:data  open-closed  virginia-DC  field-study  endogenous-exogenous 
march 2017 by nhaliday
Placebo interventions for all clinical conditions. - PubMed - NCBI
We did not find that placebo interventions have important clinical effects in general. However, in certain settings placebo interventions can influence patient-reported outcomes, especially pain and nausea, though it is difficult to distinguish patient-reported effects of placebo from biased reporting. The effect on pain varied, even among trials with low risk of bias, from negligible to clinically important. Variations in the effect of placebo were partly explained by variations in how trials were conducted and how patients were informed.

How much of the placebo 'effect' is really statistical regression?: https://www.ncbi.nlm.nih.gov/pubmed/6369471
Statistical regression to the mean predicts that patients selected for abnormalcy will, on the average, tend to improve. We argue that most improvements attributed to the placebo effect are actually instances of statistical regression. First, whereas older clinical trials susceptible to regression resulted in a marked improvement in placebo-treated patients, in a modern series of clinical trials whose design tended to protect against regression, we found no significant improvement (median change 0.3 per cent, p greater than 0.05) in placebo-treated patients.

Placebo effects are weak: regression to the mean is the main reason ineffective treatments appear to work: http://www.dcscience.net/2015/12/11/placebo-effects-are-weak-regression-to-the-mean-is-the-main-reason-ineffective-treatments-appear-to-work/
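The regression-to-the-mean mechanism described above is easy to demonstrate with a toy simulation (all parameters here are illustrative assumptions): patients are enrolled only if their noisy baseline score is extreme, nobody receives any treatment, and follow-up scores still "improve".

```python
import random

random.seed(0)

def placebo_free_trial(n=100_000, cutoff=1.5):
    # Each patient has a stable true severity plus independent measurement
    # noise at enrollment and at follow-up; nobody receives any treatment.
    baselines, followups = [], []
    for _ in range(n):
        true_severity = random.gauss(0, 1)
        baseline = true_severity + random.gauss(0, 1)
        if baseline > cutoff:                    # enroll only extreme scorers
            baselines.append(baseline)
            followups.append(true_severity + random.gauss(0, 1))
    return sum(baselines) / len(baselines), sum(followups) / len(followups)

baseline_mean, followup_mean = placebo_free_trial()
# Follow-up scores drop with no intervention at all: regression to the mean.
print(round(baseline_mean, 2), round(followup_mean, 2))
```

Because enrollment selects on baseline score (true severity plus noise), the enrolled group's noise is biased upward at baseline but not at follow-up, so the apparent improvement needs no placebo effect at all.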

A radical new hypothesis in medicine: give patients drugs they know don’t work: https://www.vox.com/science-and-health/2017/6/1/15711814/open-label-placebo-kaptchuk
People on no treatment got about 30 percent better. And people who were given an open-label placebo got 60 percent improvement in the adequate relief of their irritable bowel syndrome.

Surgery Is One Hell Of A Placebo: https://fivethirtyeight.com/features/surgery-is-one-hell-of-a-placebo/
study  psychology  social-psych  medicine  meta:medicine  contrarianism  evidence-based  embodied-cognition  intervention  illusion  realness  meta-analysis  multi  science  stats  replication  gelman  regularizer  thinking  regression-to-mean  methodology  insight  hmm  news  org:data  org:lite  interview  tricks  drugs  cost-benefit  health  ability-competence  chart 
march 2017 by nhaliday
Information Processing: Epistasis vs additivity
On epistasis: why it is unimportant in polygenic directional selection: http://rstb.royalsocietypublishing.org/content/365/1544/1241.short
- James F. Crow

The Evolution of Multilocus Systems Under Weak Selection: http://www.genetics.org/content/genetics/134/2/627.full.pdf
- Thomas Nagylaki

Data and Theory Point to Mainly Additive Genetic Variance for Complex Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1000008
The relative proportion of additive and non-additive variation for complex traits is important in evolutionary biology, medicine, and agriculture. We address a long-standing controversy and paradox about the contribution of non-additive genetic variation, namely that knowledge about biological pathways and gene networks imply that epistasis is important. Yet empirical data across a range of traits and species imply that most genetic variance is additive. We evaluate the evidence from empirical studies of genetic variance components and find that additive variance typically accounts for over half, and often close to 100%, of the total genetic variance. We present new theoretical results, based upon the distribution of allele frequencies under neutral and other population genetic models, that show why this is the case even if there are non-additive effects at the level of gene action. We conclude that interactions at the level of genes are not likely to generate much interaction at the level of variance.
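The allele-frequency argument can be illustrated with a minimal single-locus sketch (my construction for illustration, not the paper's model): even with completely dominant gene action, which is non-additive at the level of gene effects, most of the genetic variance is captured by the best linear fit on allele count when the allele frequency is away from 1/2.

```python
import random

random.seed(4)

def additive_fraction(p=0.1, n=200_000):
    # Complete dominance at one locus: phenotype is 1 whenever at least
    # one copy of the allele (frequency p) is present, so gene action is
    # entirely non-additive.
    counts = [(random.random() < p) + (random.random() < p) for _ in range(n)]
    pheno = [1.0 if a > 0 else 0.0 for a in counts]
    mean_a = sum(counts) / n
    mean_z = sum(pheno) / n
    cov = sum((a - mean_a) * (z - mean_z) for a, z in zip(counts, pheno)) / n
    var_a = sum((a - mean_a) ** 2 for a in counts) / n
    var_z = sum((z - mean_z) ** 2 for z in pheno) / n
    additive_var = cov ** 2 / var_a      # variance of the best linear predictor
    return additive_var / var_z

frac = additive_fraction()
# Most of the genetic variance is additive despite purely dominant action.
print(round(frac, 2))
```

At p = 0.1 roughly 95% of the variance is additive; the paper's theoretical point is that under neutral models allele frequencies tend to be extreme, so this regime is the typical one.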
hsu  scitariat  commentary  links  study  list  evolution  population-genetics  genetics  methodology  linearity  nonlinearity  comparison  scaling-up  nibble  lens  bounded-cognition  ideas  bio  occam  parsimony  🌞  summary  quotes  multi  org:nat  QTL  stylized-facts  article  explanans  sapiens  biodet  selection  variance-components  metabuch  thinking  models  data  deep-materialism  chart  behavioral-gen  evidence-based  empirical  mutation  spearhead  model-organism  bioinformatics  linear-models  math  magnitude  limits  physics  interdisciplinary  stat-mech 
february 2017 by nhaliday
The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses - IOANNIDIS - 2016 - The Milbank Quarterly - Wiley Online Library
Currently, _probably more systematic reviews of trials than new randomized trials are published annually_. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; same-topic meta-analyses may exceed 20 sometimes. Some fields produce massive numbers of meta-analyses; for example, 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. These meta-analyses are often produced either by industry employees or by authors with industry ties and results are aligned with sponsor interests. _China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses_. The most massive presence of Chinese meta-analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from mostly abandoned era of candidate genes. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta-analyses, many of which probably remain unpublished. Many other meta-analyses have serious flaws. Of the remaining, most have weak or insufficient evidence to inform decision making. Few systematic reviews and meta-analyses are both non-misleading and useful.
study  ioannidis  science  medicine  replication  methodology  meta:science  critique  evidence-based  meta-analysis  china  asia  genetics  anomie  cochrane  candidate-gene  info-dynamics  sinosphere 
january 2017 by nhaliday
So apparently this is why we have positive psychology but not evidence-based psychological treatment
APA presidents are supposed to have an initiative and… I thought mine could be “evidence-based treatment and prevention.” So I went to my friend, Steve Hyman, the director of [National Institute of Mental Health]. He was thrilled and told me he would chip in $40 million dollars if I could get APA working on evidence-based treatment.

So I told CAPP [which owns the APA] about my plan and about NIMH’s willingness. I felt the room get chillier and chillier. I rattled on. Finally, the chair of CAPP memorably said, “What if the evidence doesn’t come out in our favor?”

…I limped my way to [my friend’s] office for some fatherly advice.

“Marty,” he opined, “you are trying to be a transactional president. But you cannot out-transact these people…”

And so I proposed that Psychology turn its… attention away from pathology and victimology and more toward what makes life worth living: positive emotion, positive character, and positive institutions. I never looked back and this became my mission for the next fifteen years. The endeavor… caught on.
ratty  quotes  stories  psychology  social-psych  replication  social-science  lol  core-rats  evidence-based 
january 2017 by nhaliday
Bisphenol A (BPA)
Alternatives to BPA containers not easy for U.S. foodmakers to find: http://www.washingtonpost.com/wp-dyn/content/article/2010/02/22/AR2010022204830.html

Food is main source of BPA for consumers, thermal paper also potentially significant: https://www.efsa.europa.eu/en/press/news/130725
New data resulting from an EFSA call for data led to a considerable refinement of exposure estimates compared to 2006. For infants and toddlers (aged 6 months-3 years) average exposure from the diet is estimated to amount to 375 nanograms per kilogram of body weight per day (ng/kg bw/day) whereas for the population above 18 years of age (including women of child-bearing age) the figure is up to 132 ng/kg bw/day. By comparison, these estimates are less than 1% of the current Tolerable Daily Intake (TDI) for BPA (0.05 milligrams/kg bw/day) established by EFSA in 2006.

For all population groups above three years of age thermal paper was the second most important source of BPA after the diet (potentially accounting for up to 15% of total exposure in some population groups).

Among other key findings, scientists found dietary exposure to BPA to be the highest among children aged three to ten (explainable by their higher food consumption on a body weight basis). Canned food and non-canned meat and meat products were identified as major contributors to dietary BPA exposure for all age groups.

Tips for Avoiding BPA in Canned Food: http://www.breastcancerfund.org/reduce-your-risk/tips/avoid-bpa.html

Holding Thermal Receipt Paper and Eating Food after Using Hand Sanitizer Results in High Serum Bioactive and Urine Total Levels of Bisphenol A (BPA): http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0110509

Bisphenol S Disrupts Estradiol-Induced Nongenomic Signaling in a Rat Pituitary Cell Line: Effects on Cell Functions: http://ehp.niehs.nih.gov/1205826/
common substitute for BPA

http://wellnessmama.com/54748/hidden-sources-of-bpa/

Effect of probiotics, Bifidobacterium breve and Lactobacillus casei, on bisphenol A exposure in rats: https://www.ncbi.nlm.nih.gov/pubmed/18540113

What are the sources of exposure to eight frequently used phthalic acid esters in Europeans?: https://www.ncbi.nlm.nih.gov/pubmed/16834635
Food is a main source of DiBP, DnBP, and DEHP in consumers. In this case, consumers have very few possibilities to effectively reduce their exposure.

Are endocrine disrupting compounds a health risk in drinking water?: https://www.ncbi.nlm.nih.gov/pubmed/16823090

How to Avoid Phthalates (Even Though You Can't Avoid Phthalates): http://www.huffingtonpost.com/maia-james/phthalates-health_b_2464248.html
data  org:gov  hypochondria  endocrine  embodied-street-fighting  public-health  news  org:rec  business  tradeoffs  food  multi  study  summary  diet  top-n  org:euro  org:health  nitty-gritty  human-bean  checklists  cooking  embodied  human-study  science-anxiety  sanctity-degradation  intervention  epidemiology  bio  🐸  model-organism  list  health  hmm  idk  parasites-microbiome  street-fighting  evidence-based  objektbuch  embodied-pack  chart  roots  h2o  advice  org:lite  biodet  fluid  left-wing 
january 2017 by nhaliday
The Experts | West Hunter
It seems to me that not all people called experts actually are. In fact, there are whole fields in which none of the experts are experts. But let’s try to define terms.

...

Along these lines, I’ve read Tetlock’s book, Expert Political Judgment. A funny, funny, book. I will have more to say on that later.

USSR: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60760
iraq war:
https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60653
Of course it is how Bush sold the war. Selling the war involving statements to the press, leaks, etc, not a Congressional resolution, which is the product of that selling job. Leaks to that lying slut at the New York Times, Judith Miller, for example.

Actively seeking a nuclear weapons capacity would have meant making fissionables, or building facilities to make fissionables. That hadn’t happened, and it was impossible for Iraq to have done so, given that any such effort had to be undetectable (because we hadn’t detected it with our ‘national technical means’, spy satellites and such) and given their limited resources in men, money, and materiel. Iraq had done nothing along these lines. Absolutely nothing.

https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60674
You don’t even know what yellow cake is. It is true that Saddam had had a nuclear program before the Gulf War, although it had not come too close to a weapon – but that program had been destroyed, and could not be rebuilt, A. in a way invisible to our spy satellites, and B. with no money, because of sanctions.

The 550 tons of uranium oxide- unenriched uranium oxide – was a leftover from the earlier program. Under UN seal, and those seals had not been broken. Without enrichment, and without a means of enrichment, it was useless.

What’s the point of pushing this nonsense? somebody paying you?

The President was a moron, the Government of the United States proved itself a pack of fools,as did the New York Times, the Washington Post, Congress, virtually all of the pundits, etc. etc. And undoubtedly you were a fool as well: you might as well deal with it, because the truth is not going to go away.

interesting discussion of battle fatigue and desertion: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60709
Actually, I don’t know how Freudian those Army psychologists were in 1944: they may have been useless in some other way. The gist is that in the European theater, for example in the Normandy campaign, the US had a much higher rate of psychological casualties than the Germans. “Both British and American psychiatrists were struck by the ‘apparently few cases of psychoneurosis’ among German prisoners of war. ” They were lower in the Red Army, as well.

In the Pacific theater, combat fatigue was even worse for US soldiers, but rare among the Japanese.

...

The infantry took most of the casualties – it was a very dangerous, unpleasant job. People didn’t like being in the infantry. In the American Army, and to a lesser extent, the British Army, getting into medical evacuation channels was a way to avoid getting killed. Not so much in the German Army: suspected malingerers were shot. In the American Army, they weren’t. That’s the most important difference between the Germans and Americans affecting the ‘combat fatigue’ rate – the Germans didn’t put up with it. They did have some procedures, but they all ended up putting the guy back in combat fairly rapidly.

Even for desertion, only ONE American soldier was executed. In the German Army, 20,000. It makes a difference. We ran a soft war: since we ended up with whole divisions out of the fight, we probably would have done better (won faster, lost fewer guys) if we had been harsher on malingerers and deserters.

more on emdees: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60697
As for your idea that doctors improve with age, I doubt it. So do some other people: for example, in this article in Annals of Internal Medicine (Systematic review: the relationship between clinical experience and quality of health care), they say “Overall, 32 of the 62 (52%) evaluations reported decreasing performance with increasing years in practice for all outcomes assessed; 13 (21%) reported decreasing performance with increasing experience for some outcomes but no association for others; 2 (3%) reported that performance initially increased with increasing experience, peaked, and then decreased (concave relationship); 13 (21%) reported no association; 1 (2%) reported increasing performance with increasing years in practice for some outcomes but no association for others; and 1 (2%) reported increasing performance with increasing years in practice for all outcomes. Results did not change substantially when the analysis was restricted to studies that used the most objective outcome measures.”

I don’t know how well that 25-year-old doctor with an IQ of 160 would do, never having met anyone like that. I do know a mathematician who has an IQ around 160 and was married to a doctor, but she* dumped him after he put her through med school and came down with lymphoma.

And that libertarian friend I mentioned, who said that although quarantine would have worked against AIDS, better that we didn’t, despite the extra hundreds of thousands of deaths that resulted – why, he’s a doctor.

*all the other fifth-years in her program also dumped their spouses. Catching?

climate change: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60787
I think that predicting climate is difficult, considering the complex feedback loops, but I know that almost every right-wing thing said about it that I have checked out turned out to be false.
west-hunter  rant  discussion  social-science  error  history  psychology  military  war  multi  mostly-modern  bounded-cognition  martial  crooked  meta:war  realness  being-right  emotion  scitariat  info-dynamics  poast  world-war  truth  tetlock  alt-inst  expert-experience  epidemiology  public-health  spreading  disease  sex  sexuality  iraq-syria  gender  gender-diff  parenting  usa  europe  germanic  psychiatry  courage  medicine  meta:medicine  age-generation  aging  climate-change  track-record  russia  communism  economics  correlation  nuclear  arms  randy-ayndy  study  evidence-based  data  time  reason  ability-competence  complex-systems  politics  ideology  roots  government  elite  impetus 
january 2017 by nhaliday
WHAT'S TO KNOW ABOUT THE CREDIBILITY OF EMPIRICAL ECONOMICS? - Ioannidis - 2013 - Journal of Economic Surveys - Wiley Online Library
Abstract. The scientific credibility of economics is itself a scientific question that can be addressed with both theoretical speculations and empirical data. In this review, we examine the major parameters that are expected to affect the credibility of empirical economics: sample size, magnitude of pursued effects, number and pre-selection of tested relationships, flexibility and lack of standardization in designs, definitions, outcomes and analyses, financial and other interests and prejudices, and the multiplicity and fragmentation of efforts. We summarize and discuss the empirical evidence on the lack of a robust reproducibility culture in economics and business research, the prevalence of potential publication and other selective reporting biases, and other failures and biases in the market of scientific information. Overall, the credibility of the economics literature is likely to be modest or even low.

The Power of Bias in Economics Research: http://onlinelibrary.wiley.com/doi/10.1111/ecoj.12461/full
We investigate two critical dimensions of the credibility of empirical economics research: statistical power and bias. We survey 159 empirical economics literatures that draw upon 64,076 estimates of economic parameters reported in more than 6,700 empirical studies. Half of the research areas have nearly 90% of their results under-powered. The median statistical power is 18%, or less. A simple weighted average of those reported results that are adequately powered (power ≥ 80%) reveals that nearly 80% of the reported effects in these empirical economics literatures are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more.
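The exaggeration mechanism behind those numbers is the "winner's curse" of significance filtering, which a short simulation makes concrete (the true effect of 0.2 and sample size here are illustrative assumptions, not the survey's parameters): when a study is badly underpowered, only estimates that happen to be large clear the significance threshold, so the published estimates systematically overshoot the truth.

```python
import random
import statistics

random.seed(1)

def mean_significant_estimate(true_effect=0.2, n_per_arm=25, sims=20_000):
    # Two-arm comparison of means with sd = 1 in each arm; the standard
    # error of the estimated difference is sqrt(2 / n_per_arm).
    se = (2 / n_per_arm) ** 0.5
    significant = []
    for _ in range(sims):
        estimate = random.gauss(true_effect, se)
        if abs(estimate) / se > 1.96:            # survives the p < .05 filter
            significant.append(abs(estimate))
    return statistics.mean(significant)

m = mean_significant_estimate()
# Significant estimates average several times the assumed true effect of 0.2.
print(round(m, 2))
```

With these numbers significance requires an estimate of about 0.55, nearly three times the true effect, so conditioning on significance alone produces the factor-of-two-or-more inflation the survey reports.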

Economics isn't a bogus science — we just don't use it correctly: http://www.latimes.com/opinion/op-ed/la-oe-ioannidis-economics-is-a-science-20171114-story.html
https://archive.is/AU7Xm
study  ioannidis  social-science  meta:science  economics  methodology  critique  replication  bounded-cognition  error  stat-power  🎩  🔬  info-dynamics  piracy  empirical  biases  econometrics  effect-size  network-structure  realness  paying-rent  incentives  academia  multi  evidence-based  news  org:rec  rhetoric  contrarianism  backup  cycles  finance  huge-data-the-biggest  org:local 
january 2017 by nhaliday
threat-modeling  thucydides  time  time-preference  time-series  time-use  tip-of-tongue  todo  toolkit  tools  top-n  traces  track-record  tradecraft  tradeoffs  tradition  transportation  trees  trends  tribalism  tricks  trivia  troll  trump  truth  tutoring  twin-study  twitter  types  ubiquity  unaffiliated  uncertainty  unintended-consequences  unit  unix  urban  urban-rural  us-them  usa  ux  values  vampire-squid  variance-components  vcs  venture  video  virginia-DC  virtu  visual-understanding  visualization  visuo  volo-avolo  vulgar  walter-scheidel  war  water  wealth  wealth-of-nations  web  weightlifting  welfare-state  west-hunter  westminster  whiggish-hegelian  white-paper  wiki  wire-guided  wisdom  within-group  within-without  wonkish  workflow  working-stiff  world  world-war  worrydream  worse-is-better/the-right-thing  writing  X-not-about-Y  yak-shaving  yvain  zeitgeist  🌞  🎩  🐸  👽  🔬  🖥  🤖  🦀  🦉 
