nhaliday + meta:science   125

The Gelman View – spottedtoad
I have read Andrew Gelman’s blog for about five years, and gradually, I’ve decided that among his many blog posts and hundreds of academic articles, he is advancing a philosophy not just of statistics but of quantitative social science in general. I am not a statistician myself, but here is how I would articulate the Gelman View:

A. Purposes

1. The purpose of social statistics is to describe and understand variation in the world. The world is a complicated place, and we shouldn’t expect things to be simple.
2. The purpose of scientific publication is to allow for communication, dialogue, and critique, not to “certify” a specific finding as absolute truth.
3. The incentive structure of science needs to reward attempts to independently investigate, reproduce, and refute existing claims and observed patterns, not just to advance new hypotheses or support a particular research agenda.

B. Approach

1. Because the world is complicated, the most valuable statistical models for the world will generally be complicated. The result of statistical investigations will only rarely be to give a stamp of truth on a specific effect or causal claim, but will generally show variation in effects and outcomes.
2. Whenever possible, the data, analytic approach, and methods should be made as transparent and replicable as possible, and should be fair game for anyone to examine, critique, or amend.
3. Social scientists should look to build upon a broad shared body of knowledge, not to “own” a particular intervention, theoretic framework, or technique. Such ownership creates incentive problems when the intervention, framework, or technique fails and the scientist is left trying to support a flawed structure.

C. Components

1. Measurement. How and what we measure is the first question, well before we decide on what the effects are or what is making that measurement change.
2. Sampling. Who we talk to or collect information from always matters, because we should always expect effects to depend on context.
3. Inference. While models should usually be complex, our inferential framework should be simple enough for anyone to follow along. And no p-values.

He might disagree with all of this, or how it reflects his understanding of his own work. But I think it is a valuable guide to empirical work.
ratty  unaffiliated  summary  gelman  scitariat  philosophy  lens  stats  hypothesis-testing  science  meta:science  social-science  institutions  truth  is-ought  best-practices  data-science  info-dynamics  alt-inst  academia  empirical  evidence-based  checklists  strategy  epistemic 
november 2017 by nhaliday
Peer review is younger than you think - Marginal REVOLUTION
I’d like to see a detailed look at actual journal practices, but my personal sense is that editorial review was the norm until fairly recently, not review by a team of outside referees.  In 1956, for instance, the American Historical Review asked for only one submission copy, and it seems the same was true as late as 1970.  I doubt they made the photocopies themselves. Schmidt seems to suggest that the practices of government funders nudged the academic professions into more formal peer review with multiple referee reports.
econotariat  marginal-rev  commentary  data  gbooks  trends  anglo  language  zeitgeist  search  history  mostly-modern  science  meta:science  institutions  academia  publishing  trivia  cocktail  links 
september 2017 by nhaliday
No, science’s reproducibility problem is not limited to psychology - The Washington Post
But now then: Are psychology experiments more likely than, say, chemistry experiments or physics experiments to have issues with reproducibility? Ioannidis told me yes, probably so.

“I think on average physics and chemistry would do better. I don’t know how much better," he said.

Maybe someone should try to constrain the differences between the physical sciences and the social sciences. Perhaps physics and chemistry will do their own version of the reproducibility study?
news  org:rec  ioannidis  replication  science  meta:science  social-science  psychology  social-psych 
september 2017 by nhaliday
Of mice and men: why animal trial results don’t always translate to humans
It showed that of the most-cited animal studies in prestigious scientific journals, such as Nature and Cell, only 37% were replicated in subsequent human randomised trials and 18% were contradicted in human trials. It is safe to assume that less-cited animal studies in lesser journals would have an even lower strike rate.
news  org:mag  org:edu  science  meta:science  medicine  meta:medicine  model-organism  human-study  homo-hetero  data  pro-rata  org:nat  replication  methodology 
september 2017 by nhaliday
All models are wrong - Wikipedia
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
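Box's gas-law example can be checked numerically: the "wrong" model's error shrinks exactly where its physical assumptions hold. A minimal sketch (the van der Waals constants for CO2 are standard textbook values I've supplied, not part of the excerpt above):

```python
# Box's example: PV = RT is exactly true for no real gas, but is a
# useful approximation. Compare it against a van der Waals correction,
# using textbook constants for CO2 (illustrative numbers, not Box's).
R = 0.08206          # gas constant, L*atm/(mol*K)
A, B = 3.59, 0.0427  # van der Waals a, b for CO2

def p_ideal(v, t):
    """Ideal-gas pressure (atm) at molar volume v (L/mol), temperature t (K)."""
    return R * t / v

def p_vdw(v, t):
    """Van der Waals pressure: corrects for finite molecular volume (B)
    and intermolecular attraction (A)."""
    return R * t / (v - B) - A / v ** 2

for v in (1.0, 10.0):
    err = abs(p_ideal(v, 300) - p_vdw(v, 300)) / p_vdw(v, 300)
    print(f"V = {v:4.1f} L/mol: ideal-gas relative error {err:.1%}")
# ~11% off for a dense gas, ~1% for a dilute one: "wrong", still useful
```

The point survives any choice of gas or constants: the parsimonious model is most "illuminating and useful" in exactly the regime its physical picture was built for.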
thinking  metabuch  metameta  map-territory  models  accuracy  wire-guided  truth  philosophy  stats  data-science  methodology  lens  wiki  reference  complex-systems  occam  parsimony  science  nibble  hi-order-bits  info-dynamics  the-trenches  meta:science  physics  fluid  thermo  stat-mech  applicability-prereqs  theory-practice  elegance 
august 2017 by nhaliday
Fear and Loathing in Psychology - The Unz Review
Warne and Astle looked at 29 best-selling undergraduate textbooks, which is where psychology students learn about intelligence, because less than 10% of graduate courses offer an intelligence option.

3.3% of textbook space is dedicated to intelligence. Given its influence, this is not very much.

The most common topics start well, with IQ and Spearman’s g, but do not go on to the best validated, evidence-led Cattell-Horn-Carroll meta-analytic summary, but a side-stream, speculative triarchic theory from Sternberg; and a highly speculative and non-specific sketch of an idea about multiple intelligences from Gardner. The last is a particular puzzle, since it really is a whimsical notion that motor skill is no different from analytical problem solving. All must have prizes.
Commonly, environmental influences are discussed, genetic ones rarely.

What Do Undergraduates Learn About Human Intelligence? An Analysis of Introductory Psychology Textbooks: https://drive.google.com/file/d/0B3c4TxciNeJZOTl3clpiX0JKckk/view

Education or Indoctrination? The Accuracy of Introductory Psychology Textbooks in Covering Controversial Topics and Urban Legends About Psychology: http://sci-hub.tw/https://link.springer.com/article/10.1007/s12144-016-9539-7

Twenty-four leading introductory psychology textbooks were surveyed for their coverage of a number of controversial topics (e.g., media violence, narcissism epidemic, multiple intelligences) and scientific urban legends (e.g., Kitty Genovese, Mozart Effect) for their factual accuracy. Results indicated numerous errors of factual reporting across textbooks, particularly related to failing to inform students of the controversial nature of some research fields and repeating some scientific urban legends as if true. Recommendations are made for improving the accuracy of introductory textbooks.

Mapping the scale of the narcissism epidemic: Increases in narcissism 2002–2007 within ethnic groups: https://www.sciencedirect.com/science/article/pii/S0092656608000949

The increasing numbers of Asian-Americans at the UCs over time may have masked changes in narcissism, as Asian-Americans score lower on the NPI. When examined within ethnic groups, Trzesniewski et al.’s data show that NPI scores increased significantly between 2002 and 2007 at twice the rate of the yearly change found over 24 years in Twenge et al. (2008a). The overall means also show a significant increase 2002–2007. Thus the available evidence suggests that college students are endorsing progressively more narcissistic personality traits over the generations.

Birth Cohort Increases in Narcissistic Personality Traits Among American College Students, 1982–2009: http://journals.sagepub.com/doi/abs/10.1177/1948550609355719

Both studies demonstrate significant increases in narcissism over time (Study 1 d = .37, 1982–2008, when campus is controlled; Study 2 d = .37, 1994–2009). These results support a generational differences model of individual personality traits reflecting changes in culture.

could this just be a selection effect (more people attending)?
albion  scitariat  education  higher-ed  academia  social-science  westminster  info-dynamics  psychology  cog-psych  psychometrics  iq  intelligence  realness  biases  commentary  study  summary  meta:science  pinker  multi  pdf  survey  is-ought  truth  culture-war  toxoplasmosis  replication  social-psych  propaganda  madisonian  identity-politics  init  personality  psychiatry  disease  trends  epidemiology  public-health  psych-architecture  dimensionality  confounding  control  age-generation  demographics  race  christopher-lasch  humility  usa  the-west  california  berkeley  asia 
july 2017 by nhaliday
National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track
Here we report five hiring experiments in which faculty evaluated hypothetical female and male applicants, using systematically varied profiles disguising identical scholarship, for assistant professorships in biology, engineering, economics, and psychology. Contrary to prevailing assumptions, men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference. Comparing different lifestyles revealed that women preferred divorced mothers to married fathers and that men preferred mothers who took parental leaves to mothers who did not.

Double-blind review favours increased representation of female authors: http://www.sciencedirect.com/science/article/pii/S0169534707002704
Double-blind peer review, in which neither author nor reviewer identity are revealed, is rarely practised in ecology or evolution journals. However, in 2001, double-blind review was introduced by the journal Behavioral Ecology. Following this policy change, there was a significant increase in female first-authored papers, a pattern not observed in a very similar journal that provides reviewers with author information. No negative effects could be identified, suggesting that double-blind review should be considered by other journals.

Teaching accreditation exams reveal grading biases favor women in male-dominated disciplines in France: http://science.sciencemag.org/content/353/6298/474
This bias turns from 3 to 5 percentile ranks for men in literature and foreign languages to about 10 percentile ranks for women in math, physics, or philosophy.
study  org:nat  science  meta:science  gender  discrimination  career  progression  planning  long-term  values  academia  field-study  null-result  effect-size  🎓  multi  publishing  intervention  biases 
july 2017 by nhaliday
Alzheimers | West Hunter
Some disease syndromes almost have to be caused by pathogens – for example, any with a fitness impact (prevalence x fitness reduction) > 2% or so, too big to be caused by mutational pressure. I don’t think that this is the case for AD: it hits so late in life that the fitness impact is minimal. However, that hardly means that it can’t be caused by a pathogen or pathogens – a big fraction of all disease syndromes are, including many that strike in old age. That possibility is always worth checking out, not least because infectious diseases are generally easier to prevent and/or treat.

There is new work that strongly suggests that pathogens are the root cause. It appears that the amyloid is an antimicrobial peptide. amyloid-beta binds to invading microbes and then surrounds and entraps them. ‘When researchers injected Salmonella into mice’s hippocampi, a brain area damaged in Alzheimer’s, A-beta quickly sprang into action. It swarmed the bugs and formed aggregates called fibrils and plaques. “Overnight you see the plaques throughout the hippocampus where the bugs were, and then in each single plaque is a single bacterium,” Tanzi says. ‘

obesity and pathogens: https://westhunt.wordpress.com/2016/05/29/alzheimers/#comment-79757
not sure about this guy, but interesting: https://westhunt.wordpress.com/2016/05/29/alzheimers/#comment-79748
http://perfecthealthdiet.com/2010/06/is-alzheimer%E2%80%99s-caused-by-a-bacterial-infection-of-the-brain/

https://westhunt.wordpress.com/2016/12/13/the-twelfth-battle-of-the-isonzo/
All too often we see large, long-lasting research efforts that never produce, never achieve their goal.

For example, the amyloid hypothesis [accumulation of amyloid-beta oligomers is the cause of Alzheimers] has been dominant for more than 20 years, and has driven development of something like 15 drugs. None of them have worked. At the same time the well-known increased risk from APOe4 has been almost entirely ignored, even though it ought to be a clue to the cause.

In general, when a research effort has been spinning its wheels for a generation or more, shouldn’t we try something different? We could at least try putting a fraction of those research dollars into alternative approaches that have not yet failed repeatedly.

Mostly this applies to research efforts that at least wish they were science. ‘educational research’ is in a special class, and I hardly know what to recommend. Most of the remedial actions that occur to me violate one or more of the Geneva conventions.

APOe4 related to lymphatic system: https://en.wikipedia.org/wiki/Apolipoprotein_E

https://westhunt.wordpress.com/2012/03/06/spontaneous-generation/#comment-2236
Look, if I could find out the sort of places that I usually misplace my keys – if I did, which I don’t – I could find the keys more easily the next time I lose them. If you find out that practitioners of a given field are not very competent, it marks that field as a likely place to look for relatively easy discovery. Thus medicine is a promising field, because on the whole doctors are not terribly good investigators. For example, none of the drugs developed for Alzheimers have worked at all, which suggests that our ideas on the causation of Alzheimers are likely wrong. Which suggests that it may (repeat may) be possible to make good progress on Alzheimers, either by an entirely empirical approach, which is way underrated nowadays, or by dumping the current explanation, finding a better one, and applying it.

You could start by looking at basic notions of field X and asking yourself: How do we really know that? Is there serious statistical evidence? Does that notion even accord with basic theory? This sort of checking is entirely possible. In most of the social sciences, we don’t, there isn’t, and it doesn’t.

Hygiene and the world distribution of Alzheimer’s disease: Epidemiological evidence for a relationship between microbial environment and age-adjusted disease burden: https://academic.oup.com/emph/article/2013/1/173/1861845/Hygiene-and-the-world-distribution-of-Alzheimer-s

Amyloid-β peptide protects against microbial infection in mouse and worm models of Alzheimer’s disease: http://stm.sciencemag.org/content/8/340/340ra72

Fungus, the bogeyman: http://www.economist.com/news/science-and-technology/21676754-curious-result-hints-possibility-dementia-caused-fungal
Fungus and dementia
paper: http://www.nature.com/articles/srep15015
west-hunter  scitariat  disease  parasites-microbiome  medicine  dementia  neuro  speculation  ideas  low-hanging  todo  immune  roots  the-bones  big-surf  red-queen  multi  🌞  poast  obesity  strategy  info-foraging  info-dynamics  institutions  meta:medicine  social-science  curiosity  🔬  science  meta:science  meta:research  wiki  epidemiology  public-health  study  arbitrage  alt-inst  correlation  cliometrics  path-dependence  street-fighting  methodology  nibble  population-genetics  org:nat  health  embodied  longevity  aging  org:rec  org:biz  org:anglo  news  neuro-nitgrit  candidate-gene  nutrition  diet  org:health  explanans  fashun  empirical  theory-practice  ability-competence  dirty-hands  education  aphorism  truth  westminster  innovation  evidence-based  religion  prudence  track-record  problem-solving 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has just to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. This I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?∗: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
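The paper's placebo-law exercise is easy to re-create in miniature. A hedged sketch, using toy AR(1) state panels rather than the authors' CPS wage data (the simulation design and parameters here are my own illustration):

```python
import numpy as np

def dd_placebo_rejection_rate(n_states=50, n_years=20, rho=0.8,
                              n_sims=500, seed=0):
    """Share of placebo 'laws' found significant at the 5% level by a
    two-way fixed-effects DD regression using conventional OLS SEs."""
    rng = np.random.default_rng(seed)

    def demean(x):  # exact two-way within transform (balanced panel)
        return (x - x.mean(1, keepdims=True)
                  - x.mean(0, keepdims=True) + x.mean())

    post = (np.arange(n_years) >= n_years // 2).astype(float)
    dof = n_states * n_years - n_states - n_years
    rejections = 0
    for _ in range(n_sims):
        # serially correlated state outcomes, no true effect anywhere
        eps = rng.standard_normal((n_states, n_years))
        y = np.empty_like(eps)
        y[:, 0] = eps[:, 0]
        for t in range(1, n_years):
            y[:, t] = rho * y[:, t - 1] + eps[:, t]
        treated = rng.permutation(n_states) < n_states // 2  # placebo law
        d = treated[:, None].astype(float) * post[None, :]
        yd, dd = demean(y), demean(d)
        beta = (dd * yd).sum() / (dd * dd).sum()
        resid = yd - beta * dd
        se = np.sqrt((resid ** 2).sum() / dof / (dd * dd).sum())
        rejections += abs(beta / se) > 1.96
    return rejections / n_sims

# nominal size is 5%; serial correlation makes conventional SEs overreject
high = dd_placebo_rejection_rate(rho=0.8)  # well above 0.05
base = dd_placebo_rejection_rate(rho=0.0)  # close to 0.05
```

With serial correlation the conventional rejection rate lands at several times the nominal 5 percent, and drops back to roughly 5 percent once the serial correlation is switched off, which is the paper's core point.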

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing just one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI excludes the OLS estimate.
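A toy Monte Carlo makes point 6 concrete (my own illustration of the weak-instrument mechanism, not Young's bootstrap procedure or data):

```python
import numpy as np

def simulate_2sls_vs_ols(n=500, pi=0.05, endo=0.5, n_sims=2000, seed=1):
    """Monte Carlo: y = x + u with endogenous x (corr(x, u) = endo via a
    shared shock) and a weak instrument z, first stage x = pi*z + v.
    Returns (MSE of OLS, MSE of just-identified 2SLS) around beta = 1."""
    rng = np.random.default_rng(seed)
    beta = 1.0
    ols_est, iv_est = [], []
    for _ in range(n_sims):
        z = rng.standard_normal(n)              # the instrument
        u = rng.standard_normal(n)              # structural error
        v = endo * u + np.sqrt(1 - endo ** 2) * rng.standard_normal(n)
        x = pi * z + v                          # weak first stage
        y = beta * x + u
        ols_est.append((x @ y) / (x @ x))       # biased but stable
        iv_est.append((z @ y) / (z @ x))        # consistent but erratic
    sq_err = lambda b: ((np.array(b) - beta) ** 2).mean()
    return sq_err(ols_est), sq_err(iv_est)

mse_ols, mse_2sls = simulate_2sls_vs_ols()
# OLS carries a stable bias, but the weak-instrument 2SLS estimator's
# MSE is far larger, driven by draws with a near-zero first stage
```

The design choice is deliberate: when the first stage barely moves x, the 2SLS denominator wanders near zero and the estimator's tails explode, which is exactly why "cured" endogeneity can cost more than the original bias.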

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated 
june 2017 by nhaliday
Links 6/17: Silinks Is Golden | Slate Star Codex
Vox tries its hand at an explainer about the Sam Harris / Charles Murray interview. Some criticism from Gene Expression, The Misrepresentation Of Genetic Science In The Vox Piece On Race And IQ. From Elan, The Cherry-Picked Science In Vox’s Charles Murray Article. From Sam Harris, an accusation that the article just blatantly lies about the contents of the publicly available podcast (one of the authors later apologizes for this, but Vox hasn’t changed the article). From Professor Emeritus Richard Haier, who called it a “junk science piece” and tried to write a counterpiece for Vox (they refused to publish it, but it’s now up on Quillette). And even from other Vox reporters who thought it was journalistically shoddy. As for me, I think the article was as good as it could be under the circumstances – while it does get some things wrong and is personally unfair to Murray, from a scientific point of view I’m just really glad that the piece admits that IQ is real, meaningful, and mostly hereditary. This was the main flashpoint of the original debate twenty-five years ago, it’s more important than the stuff on the achievement gap, and the piece gets it entirely right. I think this sort of shift from debating the very existence of intelligence to debating the details is important, very productive, and worth praising even when the details are kind of dubious. This should be read in the context of similar recent articles like NYMag’s Yes, There Is A Genetic Component To Intelligence and Nature’s Intelligence Research Should Not Be Held Back By Its Past.

interesting comment thread on media treatment of HBD and effect on alt-right: http://slatestarcodex.com/2017/06/14/links-617-silinks-is-golden/#comment-510641

AskHistorians: Did Roman legionnaires get PTSD? “For the Romans, people experiencing intrusive memories were said to be haunted by ghosts…those haunted by ghosts are constantly depicted showing many symptoms which would be familiar to the modern PTSD sufferer.”

The best new blog I’ve come across recently is Sam[]zdat, which among other things has been reviewing various great books. Their Seeing Like A State review is admittedly better than mine, but I most appreciated The Meridian Of Her Greatness, based on a review of Karl Polanyi’s The Great Transformation. Go for the really incisive look at important ideas and social trends, stay for the writing style.

What lesson should we draw about Democrats’ prospects from the Republicans’ 7 point win in the Montana special election? (point, counterpoint).

An analysis showing Donald Trump’s speech patterns getting less fluent and more bizarre over the past few years – might he be suffering from mild age-related cognitive impairment? Also, given that this can be pretty subtle (cue joke about Trump) and affect emotional stability in complicated ways, should we be more worried about electing seventy-plus year old people to the presidency?

PNAS has a good (albeit kind of silly) article on claims that scientific progress has slowed.

New study finds that growth mindset is not associated with scholastic aptitude in a large sample of university applicants. Particularly excited about this one because an author said that my blog posts about growth mindset inspired the study. I’m honored to have been able to help the progress of science!
ratty  yvain  ssc  links  multi  culture-war  westminster  iq  psychometrics  race  pop-diff  debate  history  iron-age  mediterranean  the-classics  war  disease  psychiatry  books  review  leviathan  polisci  markets  capitalism  politics  elections  data  postmortem  trends  usa  government  trump  current-events  stagnation  science  meta:science  innovation  psychology  cog-psych  education  growth  social-psych  media  propaganda  poast  identity-politics  cocktail  trivia  aging  counter-revolution  polanyi-marx  org:local 
june 2017 by nhaliday
Why we should love null results – The 100% CI
https://twitter.com/StuartJRitchie/status/870257682233659392
This is a must-read blog for many reasons, but biggest is: it REALLY matters if a hypothesis is likely to be true.
Strikes me that the areas of psychology with the most absurd hypotheses (ones least likely to be true) *AHEMSOCIALPRIMINGAHEM* are also...
...the ones with extremely small sample sizes. So this already-scary graph from the blogpost becomes all the more terrifying:
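The "likely to be true" point is ordinary Bayes. The standard positive-predictive-value formula combines the prior probability that a hypothesis is true with power and the significance threshold (the example priors and powers below are illustrative, not from the blog post):

```python
def ppv(prior, power, alpha=0.05):
    """Positive predictive value: P(hypothesis true | p < alpha),
    i.e. power*prior true positives over all positive results."""
    return power * prior / (power * prior + alpha * (1 - prior))

print(ppv(prior=0.5, power=0.8))  # ~0.94: plausible hypothesis, decent power
print(ppv(prior=0.1, power=0.2))  # ~0.31: absurd hypothesis, tiny sample
```

So a significant result from a low-prior, low-power literature is more likely false than true, which is why small-N social priming is the scary corner of the graph.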
scitariat  explanation  science  hypothesis-testing  methodology  null-result  multi  albion  twitter  social  commentary  psychology  social-psych  social-science  meta:science  data  visualization  nitty-gritty  stat-power  priors-posteriors 
june 2017 by nhaliday
Edge.org: 2017 : WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?
highlights:
- the genetic book of the dead [Dawkins]
- complementarity [Frank Wilczek]
- relative information
- effective theory [Lisa Randall]
- affordances [Dennett]
- spontaneous symmetry breaking
- relatedly, equipoise [Nicholas Christakis]
- case-based reasoning
- population reasoning (eg, common law)
- criticality [Cesar Hidalgo]
- Haldane's law of the right size (!SCALE!)
- polygenic scores
- non-ergodic
- ansatz
- state [Aaronson]: http://www.scottaaronson.com/blog/?p=3075
- transfer learning
- effect size
- satisficing
- scaling
- the breeder's equation [Greg Cochran]
- impedance matching

soft:
- reciprocal altruism
- life history [Plomin]
- intellectual honesty [Sam Harris]
- coalitional instinct (interesting claim: building coalitions around "rationality" actually makes it more difficult to update on new evidence as it makes you look like a bad person, eg, the Cathedral)
basically same: https://twitter.com/ortoiseortoise/status/903682354367143936

more: https://www.edge.org/conversation/john_tooby-coalitional-instincts

interesting timing. how woke is this dude?
org:edge  2017  technology  discussion  trends  list  expert  science  top-n  frontier  multi  big-picture  links  the-world-is-just-atoms  metameta  🔬  scitariat  conceptual-vocab  coalitions  q-n-a  psychology  social-psych  anthropology  instinct  coordination  duty  power  status  info-dynamics  cultural-dynamics  being-right  realness  cooperate-defect  westminster  chart  zeitgeist  rot  roots  epistemic  rationality  meta:science  analogy  physics  electromag  geoengineering  environment  atmosphere  climate-change  waves  information-theory  bits  marginal  quantum  metabuch  homo-hetero  thinking  sapiens  genetics  genomics  evolution  bio  GT-101  low-hanging  minimum-viable  dennett  philosophy  cog-psych  neurons  symmetry  humility  life-history  social-structure  GWAS  behavioral-gen  biodet  missing-heritability  ergodic  machine-learning  generalization  west-hunter  population-genetics  methodology  blowhards  spearhead  group-level  scale  magnitude  business  scaling-tech  tech  business-models  optimization  effect-size  aaronson  state  bare-hands  problem-solving  politics 
may 2017 by nhaliday
[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts
Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.

https://www.reddit.com/r/slatestarcodex/comments/6dy6ex/arxiv_when_will_ai_exceed_human_performance/
study  preprint  science  meta:science  technology  ai  automation  labor  ai-control  risk  futurism  poll  expert  usa  asia  trends  hmm  idk  definite-planning  frontier  ideas  prediction  innovation  china  sinosphere  multi  reddit  social  commentary  ssc  speedometer  flux-stasis  ratty  expert-experience  org:mat  singularity  optimism  pessimism 
may 2017 by nhaliday
Lucio Russo - Wikipedia
In The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn (Italian: La rivoluzione dimenticata), Russo promotes the belief that Hellenistic science in the period 320-144 BC reached heights not achieved by Classical age science, and proposes that it went further than ordinarily thought, in multiple fields not normally associated with ancient science.

La Rivoluzione Dimenticata (The Forgotten Revolution), Reviewed by Sandro Graffi: http://www.ams.org/notices/199805/review-graffi.pdf

Before turning to the question of the decline of Hellenistic science, I come back to the new light shed by the book on Euclid’s Elements and on pre-Ptolemaic astronomy. Euclid’s definitions of the elementary geometric entities—point, straight line, plane—at the beginning of the Elements have long presented a problem. Their nature is in sharp contrast with the approach taken in the rest of the book, and continued by mathematicians ever since, of refraining from defining the fundamental entities explicitly but limiting themselves to postulating the properties which they enjoy. Why should Euclid be so hopelessly obscure right at the beginning and so smooth just after? The answer is: the definitions are not Euclid’s. Toward the beginning of the second century A.D. Heron of Alexandria found it convenient to introduce definitions of the elementary objects (a sign of decadence!) in his commentary on Euclid’s Elements, which had been written at least 400 years before. All manuscripts of the Elements copied ever since included Heron’s definitions without mention, whence their attribution to Euclid himself. The philological evidence leading to this conclusion is quite convincing.

...

What about the general and steady (on the average) impoverishment of Hellenistic science under the Roman empire? This is a major historical problem, strongly tied to the even bigger one of the decline and fall of the antique civilization itself. I would summarize the author’s argument by saying that it basically represents an application to science of a widely accepted general theory on decadence of antique civilization going back to Max Weber. Roman society, mainly based on slave labor, underwent an ultimately unrecoverable crisis as the traditional sources of that labor force, essentially wars, progressively dried up. To save basic farming, the remaining slaves were promoted to be serfs, and poor free peasants reduced to serfdom, but this made trade disappear. A society in which production is almost entirely based on serfdom and with no trade clearly has very little need of culture, including science and technology. As Max Weber pointed out, when trade vanished, so did the marble splendor of the ancient towns, as well as the spiritual assets that went with it: art, literature, science, and sophisticated commercial laws. The recovery of Hellenistic science then had to wait until the disappearance of serfdom at the end of the Middle Ages. To quote Max Weber: “Only then with renewed vigor did the old giant rise up again.”

...

The epilogue contains the (rather pessimistic) views of the author on the future of science, threatened by the apparent triumph of today’s vogue of irrationality even in leading institutions (e.g., an astrology professorship at the Sorbonne). He looks at today’s ever-increasing tendency to teach science more on a fideistic than on a deductive or experimental basis as the first sign of a decline which could be analogous to the post-Hellenistic one.

Praising Alexandrians to excess: https://sci-hub.tw/10.1088/2058-7058/17/4/35
The Economic Record review: https://sci-hub.tw/10.1111/j.1475-4932.2004.00203.x

listed here: https://pinboard.in/u:nhaliday/b:c5c09f2687c1

Was Roman Science in Decline? (Excerpt from My New Book): https://www.richardcarrier.info/archives/13477
people  trivia  cocktail  history  iron-age  mediterranean  the-classics  speculation  west-hunter  scitariat  knowledge  wiki  ideas  wild-ideas  technology  innovation  contrarianism  multi  pdf  org:mat  books  review  critique  regularizer  todo  piracy  physics  canon  science  the-trenches  the-great-west-whale  broad-econ  the-world-is-just-atoms  frontier  speedometer  🔬  conquest-empire  giants  economics  article  growth-econ  cjones-like  industrial-revolution  empirical  absolute-relative  truth  rot  zeitgeist  gibbon  big-peeps  civilization  malthus  roots  old-anglo  britain  early-modern  medieval  social-structure  limits  quantitative-qualitative  rigor  lens  systematic-ad-hoc  analytical-holistic  cycles  space  mechanics  math  geometry  gravity  revolution  novelty  meta:science  is-ought  flexibility  trends  reason  applicability-prereqs  theory-practice  traces  evidence 
may 2017 by nhaliday
China Overtakes US in Scientific Articles, Robots, Supercomputers - The Unz Review
gnon  commentary  trends  usa  china  asia  comparison  sinosphere  frontier  technology  science  innovation  robotics  automation  latin-america  india  russia  scale  military  defense  foreign-policy  realpolitik  great-powers  kumbaya-kult  thucydides  multi  hsu  scitariat  heavy-industry  news  org:nat  org:sci  data  visualization  list  infographic  world  europe  EU  org:mag  dynamic  ranking  top-n  britain  anglo  japan  meta:science  anglosphere  database  germanic  org:biz  rhetoric  prediction  tech  labor  human-capital  education  higher-ed  money  compensation  idk  org:lite  expansionism  current-events  🔬  the-world-is-just-atoms  🎓  dirty-hands  org:rec  org:anglo  speedometer  track-record  time-series  monetary-fiscal  chart  quality 
may 2017 by nhaliday
Battle for the Planet of Low-Hanging Fruit | West Hunter
Peter Chamberlen the elder [1560-1631] was the son of a Huguenot surgeon who had left France in 1576. He invented obstetric forceps, a surgical instrument similar to a pair of tongs, useful in extracting the baby in a difficult birth. He, his brother, and his brother’s descendants preserved and prospered from their private technology for 125 years. They went to a fair amount of effort to preserve the secret: the pregnant patient was blindfolded, and all others had to leave the room. The Chamberlens specialized in difficult births among the rich and famous.
west-hunter  scitariat  discussion  history  early-modern  mostly-modern  stories  info-dynamics  science  meta:science  technology  low-hanging  fourier  europe  germanic  IEEE  ideas  the-trenches  alt-inst  discovery  innovation  open-closed 
may 2017 by nhaliday
Say a little prior for me: more on climate change - Statistical Modeling, Causal Inference, and Social Science
http://www.fooledbyrandomness.com/climateletter.pdf
We have only one planet. This fact radically constrains the kinds of risks that are appropriate to take at a large scale. Even a risk with a very low probability becomes unacceptable when it affects all of us – there is no reversing mistakes of that magnitude.
gelman  scitariat  discussion  links  science  meta:science  epistemic  info-dynamics  climate-change  causation  models  thinking  priors-posteriors  atmosphere  environment  multi  pdf  rhetoric  uncertainty  risk  outcome-risk  moments 
april 2017 by nhaliday
Meta-assessment of bias in science
Science is said to be suffering a reproducibility crisis caused by many biases. How common are these problems, across the wide diversity of research fields? We probed for multiple bias-related patterns in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was on average relatively small. However, we consistently observed that small, early, highly cited studies published in peer-reviewed journals were likely to overestimate effects. We found little evidence that these biases were related to scientific productivity, and we found no difference between biases in male and female researchers. However, a scientist’s early-career status, isolation, and lack of scientific integrity might be significant risk factors for producing unreliable results.
study  academia  science  meta:science  metabuch  stylized-facts  ioannidis  replication  error  incentives  integrity  trends  social-science  meta-analysis  🔬  hypothesis-testing  effect-size  usa  biases  org:nat  info-dynamics 
march 2017 by nhaliday
[0809.5250] The decline in the concentration of citations, 1900-2007
These measures are used for four broad disciplines: natural sciences and engineering, medical fields, social sciences, and the humanities. All these measures converge and show that, contrary to what was reported by Evans, the dispersion of citations is actually increasing.

- natural sciences around 60-70% cited in 2-5 year window
- humanities stands out w/ 10-20% cited (maybe because of focus on books)
study  preprint  science  meta:science  distribution  network-structure  len:short  publishing  density  🔬  info-dynamics  org:mat 
february 2017 by nhaliday
probability - Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? - Cross Validated
The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in 100p% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability p given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself.
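The frequentist reading above can be checked with a quick simulation (my own minimal sketch, not from the linked answer; the true mean, sigma, and sample size are invented for illustration): repeat an experiment many times, build a 95% interval each time, and count how often the interval brackets the true mean.

```python
import random
import statistics

random.seed(0)
TRUE_MU, SIGMA, N, TRIALS = 10.0, 2.0, 30, 2000
Z = 1.96  # normal quantile for a 95% interval (known-sigma case, for simplicity)

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    half = Z * SIGMA / N ** 0.5
    if mean - half <= TRUE_MU <= mean + half:
        covered += 1

coverage = covered / TRIALS
print(coverage)  # close to 0.95 across repeated experiments
```

The 95% is a property of the *procedure* over repetitions, which is exactly why it does not translate into a 95% probability statement about any one realized interval.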

http://stats.stackexchange.com/questions/139290/a-psychology-journal-banned-p-values-and-confidence-intervals-is-it-indeed-wise

PS. Note that my question is not about the ban itself; it is about the suggested approach. I am not asking about frequentist vs. Bayesian inference either. The Editorial is pretty negative about Bayesian methods too; so it is essentially about using statistics vs. not using statistics at all.

wut

http://stats.stackexchange.com/questions/6966/why-continue-to-teach-and-use-hypothesis-testing-when-confidence-intervals-are
http://stats.stackexchange.com/questions/2356/are-there-any-examples-where-bayesian-credible-intervals-are-obviously-inferior
http://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval
http://stats.stackexchange.com/questions/6652/what-precisely-is-a-confidence-interval
http://stats.stackexchange.com/questions/1164/why-havent-robust-and-resistant-statistics-replaced-classical-techniques/
http://stats.stackexchange.com/questions/16312/what-is-the-difference-between-confidence-intervals-and-hypothesis-testing
http://stats.stackexchange.com/questions/31679/what-is-the-connection-between-credible-regions-and-bayesian-hypothesis-tests
http://stats.stackexchange.com/questions/11609/clarification-on-interpreting-confidence-intervals
http://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals
q-n-a  overflow  nibble  stats  data-science  science  methodology  concept  confidence  conceptual-vocab  confusion  explanation  thinking  hypothesis-testing  jargon  multi  meta:science  best-practices  error  discussion  bayesian  frequentist  hmm  publishing  intricacy  wut  comparison  motivation  clarity  examples  robust  metabuch  🔬  info-dynamics  reference 
february 2017 by nhaliday
Measurement error and the replication crisis | Science
In a low-noise setting, the theoretical results of Hausman and others correctly show that measurement error will attenuate coefficient estimates. But we can demonstrate with a simple exercise that the opposite occurs in the presence of high noise and selection on statistical significance.
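The paper's point can be illustrated with a toy regression simulation (my own sketch, not the authors' code; slope, noise levels, and sample size are invented): averaged over all runs, measurement error in x attenuates the slope estimate, but among only the runs that clear |t| > 2, the reported slopes exceed the true effect.

```python
import random

random.seed(1)

def sim(noise_sd, n=20, true_slope=0.2, sims=5000):
    """Regress y on x measured with error; return the mean slope over all
    runs and the mean |slope| over only the 'significant' runs (|t| > 2)."""
    all_slopes, sig_slopes = [], []
    for _ in range(sims):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [true_slope * xi + random.gauss(0, 1) for xi in x]
        x_obs = [xi + random.gauss(0, noise_sd) for xi in x]  # measurement error
        mx, my = sum(x_obs) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x_obs)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x_obs, y))
        b = sxy / sxx
        resid = [yi - my - b * (xi - mx) for xi, yi in zip(x_obs, y)]
        s2 = sum(r * r for r in resid) / (n - 2)
        se = (s2 / sxx) ** 0.5
        all_slopes.append(b)
        if abs(b / se) > 2:  # selection on statistical significance
            sig_slopes.append(abs(b))
    return (sum(all_slopes) / len(all_slopes),
            sum(sig_slopes) / len(sig_slopes))

avg, sig = sim(noise_sd=1.5)
print(avg, sig)  # average slope attenuated below 0.2; significant ones inflated above it
```

Attenuation and exaggeration coexist: the full distribution of estimates shrinks toward zero, while the published (significant) tail of that distribution overshoots the truth.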
study  org:nat  science  meta:science  stats  signal-noise  gelman  methodology  hypothesis-testing  replication  social-science  error  metabuch  unit  nibble  bounded-cognition  measurement  🔬  info-dynamics 
february 2017 by nhaliday
Paperscape
- includes physics, cs, etc.
- CS is _a lot_ smaller, or at least has much lower citation counts
- size = number citations, placement = citation network structure
papers  publishing  science  meta:science  data  visualization  network-structure  big-picture  dynamic  exploratory  🎓  physics  cs  math  hi-order-bits  survey  visual-understanding  preprint  aggregator  database  search  maps  zooming  metameta  scholar-pack  🔬  info-dynamics  scale  let-me-see  chart 
february 2017 by nhaliday
Mikhail Leonidovich Gromov - Wikipedia
Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties.

Gromov is also interested in mathematical biology,[11] the structure of the brain and the thinking process, and the way scientific ideas evolve.[8]
math  people  giants  russia  differential  geometry  topology  math.GR  wiki  structure  meta:math  meta:science  interdisciplinary  bio  neuro  magnitude  limits  science  nibble  coarse-fine  wild-ideas  convergence  info-dynamics  ideas 
january 2017 by nhaliday
Funding the Reproducibility Crises as effective giving - Less Wrong Discussion
I had definitely noticed all the different nutrition, psychology, and biological initiatives like OSF or the Reproducibility Project, and how expensive they all are, but I didn't realize that they all owed their funding to a single source. I'm very glad Arnold is doing this, but I now feel more pessimistic about academia than when I assumed that the funding for all this was coming from a broad coalition of universities and nonprofits etc....
ratty  lesswrong  commentary  replication  science  meta:science  effective-altruism  cause  money  error  gwern  power  info-dynamics 
january 2017 by nhaliday
Information Processing: Is science self-correcting?
A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc. can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character.
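A minimal version of such a toy model (my own sketch with invented parameters, not Hsu's actual model) has each scientist either copy the field's majority view or adopt a fresh noisy experimental result; as the conformity weight rises, whole fields can lock into a false consensus despite individually informative experiments.

```python
import random

random.seed(2)

def run_field(conformity, scientists=100, rounds=200, accuracy=0.6):
    """One 'field' forming beliefs about a claim that is in fact true.
    Each round a random scientist either copies the current majority view
    (with probability `conformity`) or adopts a noisy experimental result
    that is correct with probability `accuracy` (bounded cognition: no
    memory, no weighing of accumulated evidence)."""
    beliefs = [random.random() < 0.5 for _ in range(scientists)]
    for _ in range(rounds):
        i = random.randrange(scientists)
        if random.random() < conformity:
            beliefs[i] = sum(beliefs) > scientists / 2
        else:
            beliefs[i] = random.random() < accuracy
    return sum(beliefs) / scientists

def frac_fields_mostly_right(conformity, fields=50):
    return sum(run_field(conformity) > 0.5 for _ in range(fields)) / fields

low_conf = frac_fields_mostly_right(0.1)
high_conf = frac_fields_mostly_right(0.95)
print(low_conf, high_conf)  # heavy crowd-following makes false consensus far more common
```

With low conformity the field tracks the evidence; with high conformity it mostly preserves whichever majority formed first, true or false — a crude version of the equilibria-with-mostly-false-beliefs regime described above.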
hsu  scitariat  ioannidis  science  meta:science  error  commentary  physics  limits  oscillation  models  equilibrium  bounded-cognition  complex-systems  being-right  info-dynamics  the-trenches  truth 
january 2017 by nhaliday
The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses - IOANNIDIS - 2016 - The Milbank Quarterly - Wiley Online Library
Currently, _probably more systematic reviews of trials than new randomized trials are published annually_. Most topics addressed by meta-analyses of randomized trials have overlapping, redundant meta-analyses; same-topic meta-analyses may exceed 20 sometimes. Some fields produce massive numbers of meta-analyses; for example, 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. These meta-analyses are often produced either by industry employees or by authors with industry ties and results are aligned with sponsor interests. _China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses_. The most massive presence of Chinese meta-analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from the mostly abandoned era of candidate genes. Furthermore, many contracting companies working on evidence synthesis receive industry contracts to produce meta-analyses, many of which probably remain unpublished. Many other meta-analyses have serious flaws. Of the remaining, most have weak or insufficient evidence to inform decision making. Few systematic reviews and meta-analyses are both non-misleading and useful.
study  ioannidis  science  medicine  replication  methodology  meta:science  critique  evidence-based  meta-analysis  china  asia  genetics  anomie  cochrane  candidate-gene  info-dynamics  sinosphere 
january 2017 by nhaliday
Thinking Outside One’s Paradigm | Academically Interesting
I think that as a scientist (or really, even as a citizen) it is important to be able to see outside one’s own paradigm. I currently think that I do a good job of this, but it seems to me that there’s a big danger of becoming more entrenched as I get older. Based on the above experiences, I plan to use the following test: When someone asks me a question about my field, how often have I not thought about it before? How tempted am I to say, “That question isn’t interesting”? If these start to become more common, then I’ll know something has gone wrong.
ratty  clever-rats  academia  science  interdisciplinary  lens  frontier  thinking  rationality  meta:science  curiosity  insight  scholar  innovation  reflection  acmtariat  water  biases  heterodox  🤖  🎓  aging  meta:math  low-hanging  big-picture  hi-order-bits  flexibility  org:bleg  nibble  the-trenches  wild-ideas  metameta  courage  s:**  discovery  context  embedded-cognition  endo-exo  near-far  🔬  info-dynamics  allodium  ideas  questions  within-without  meta:research 
january 2017 by nhaliday
WHAT'S TO KNOW ABOUT THE CREDIBILITY OF EMPIRICAL ECONOMICS? - Ioannidis - 2013 - Journal of Economic Surveys - Wiley Online Library
Abstract. The scientific credibility of economics is itself a scientific question that can be addressed with both theoretical speculations and empirical data. In this review, we examine the major parameters that are expected to affect the credibility of empirical economics: sample size, magnitude of pursued effects, number and pre-selection of tested relationships, flexibility and lack of standardization in designs, definitions, outcomes and analyses, financial and other interests and prejudices, and the multiplicity and fragmentation of efforts. We summarize and discuss the empirical evidence on the lack of a robust reproducibility culture in economics and business research, the prevalence of potential publication and other selective reporting biases, and other failures and biases in the market of scientific information. Overall, the credibility of the economics literature is likely to be modest or even low.

The Power of Bias in Economics Research: http://onlinelibrary.wiley.com/doi/10.1111/ecoj.12461/full
We investigate two critical dimensions of the credibility of empirical economics research: statistical power and bias. We survey 159 empirical economics literatures that draw upon 64,076 estimates of economic parameters reported in more than 6,700 empirical studies. Half of the research areas have nearly 90% of their results under-powered. The median statistical power is 18%, or less. A simple weighted average of those reported results that are adequately powered (power ≥ 80%) reveals that nearly 80% of the reported effects in these empirical economics literatures are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more.
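The paper's "median power 18%, exaggeration by a factor of two" finding can be reproduced in miniature (my own sketch, not the authors' code; the effect size in SE units is chosen so that two-sided power comes out near 18%): condition on significance and compute the ratio of the mean reported effect to the true effect.

```python
import random

random.seed(3)
DELTA = 1.045  # true effect in standard-error units; gives two-sided power ~18%
SIMS = 100_000

estimates = [abs(random.gauss(DELTA, 1.0)) for _ in range(SIMS)]
significant = [e for e in estimates if e > 1.96]  # two-sided p < 0.05

power = len(significant) / SIMS
exaggeration = sum(significant) / len(significant) / DELTA
print(power, exaggeration)  # power near 0.18; significant effects ~2-2.5x the truth
```

At 18% power the significance filter alone roughly doubles the typical reported effect, matching the "typically by a factor of two" figure without any questionable research practices.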

Economics isn't a bogus science — we just don't use it correctly: http://www.latimes.com/opinion/op-ed/la-oe-ioannidis-economics-is-a-science-20171114-story.html
https://archive.is/AU7Xm
study  ioannidis  social-science  meta:science  economics  methodology  critique  replication  bounded-cognition  error  stat-power  🎩  🔬  info-dynamics  piracy  empirical  biases  econometrics  effect-size  network-structure  realness  paying-rent  incentives  academia  multi  evidence-based  news  org:rec  rhetoric  contrarianism  backup  cycles  finance  huge-data-the-biggest  org:local 
january 2017 by nhaliday