nhaliday + hypothesis-testing   97

The Gelman View – spottedtoad
I have read Andrew Gelman’s blog for about five years, and I’ve gradually decided that, across his many blog posts and hundreds of academic articles, he is advancing a philosophy not just of statistics but of quantitative social science in general. I am not a statistician myself, but here is how I would articulate the Gelman View:

A. Purposes

1. The purpose of social statistics is to describe and understand variation in the world. The world is a complicated place, and we shouldn’t expect things to be simple.
2. The purpose of scientific publication is to allow for communication, dialogue, and critique, not to “certify” a specific finding as absolute truth.
3. The incentive structure of science needs to reward attempts to independently investigate, reproduce, and refute existing claims and observed patterns, not just to advance new hypotheses or support a particular research agenda.

B. Approach

1. Because the world is complicated, the most valuable statistical models of the world will generally be complicated. Statistical investigations will only rarely put a stamp of truth on a specific effect or causal claim; more often they will show variation in effects and outcomes.
2. Whenever possible, the data, analytic approach, and methods should be made as transparent and replicable as possible, and should be fair game for anyone to examine, critique, or amend.
3. Social scientists should look to build upon a broad shared body of knowledge, not to “own” a particular intervention, theoretical framework, or technique. Such ownership creates incentive problems when the intervention, framework, or technique fails and the scientist is left trying to support a flawed structure.

Components

1. Measurement. How and what we measure is the first question, well before we decide on what the effects are or what is making that measurement change.
2. Sampling. Who we talk to or collect information from always matters, because we should always expect effects to depend on context.
3. Inference. While models should usually be complex, our inferential framework should be simple enough for anyone to follow along. And no p values.

He might disagree with all of this, or how it reflects his understanding of his own work. But I think it is a valuable guide to empirical work.
ratty  unaffiliated  summary  gelman  scitariat  philosophy  lens  stats  hypothesis-testing  science  meta:science  social-science  institutions  truth  is-ought  best-practices  data-science  info-dynamics  alt-inst  academia  empirical  evidence-based  checklists  strategy  epistemic 
november 2017 by nhaliday
Use and Interpretation of LD Score Regression
LD Score regression distinguishes confounding from polygenicity in genome-wide association studies: https://sci-hub.bz/10.1038/ng.3211
- Po-Ru Loh, Nick Patterson, et al.

https://www.biorxiv.org/content/biorxiv/early/2014/02/21/002931.full.pdf

Both polygenicity (i.e. many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield inflated distributions of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from bias and true signal from polygenicity. We have developed an approach that quantifies the contributions of each by examining the relationship between test statistics and linkage disequilibrium (LD). We term this approach LD Score regression. LD Score regression provides an upper bound on the contribution of confounding bias to the observed inflation in test statistics and can be used to estimate a more powerful correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size.
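
toy sketch (simulated summary stats and invented parameters; not the actual ldsc estimator, which uses a weighted regression and a proper variance model) of the core idea: regress per-SNP chi-square statistics on LD scores, and confounding shows up in the intercept while polygenic signal shows up in the slope:

import numpy as np

rng = np.random.default_rng(0)
M, N, h2, confound = 200_000, 50_000, 0.3, 0.2   # SNPs, sample size, h2, confounding inflation (all invented)
l = rng.gamma(shape=2.0, scale=50.0, size=M)     # fake LD scores
mean_chi2 = 1.0 + confound + (N * h2 / M) * l    # E[chi2_j] = 1 + N*a + (N*h2/M) * l_j
chi2 = mean_chi2 * rng.chisquare(1, size=M)      # crude noise model for the observed statistics
slope, intercept = np.polyfit(l, chi2, 1)
print("intercept (~ 1 + confounding):", round(intercept, 2))
print("implied h2 (slope * M / N):   ", round(slope * M / N, 2))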

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n3/extref/ng.3211-S1.pdf

An atlas of genetic correlations across human diseases
and traits: https://sci-hub.bz/10.1038/ng.3406

https://www.biorxiv.org/content/early/2015/01/27/014498.full.pdf

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n11/extref/ng.3406-S1.pdf

https://github.com/bulik/ldsc
ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. ldsc also computes LD Scores.
nibble  pdf  slides  talks  bio  biodet  genetics  genomics  GWAS  genetic-correlation  correlation  methodology  bioinformatics  concept  levers  🌞  tutorial  explanation  pop-structure  gene-drift  ideas  multi  study  org:nat  article  repo  software  tools  libraries  stats  hypothesis-testing  biases  confounding  gotchas  QTL  simulation  survey  preprint  population-genetics 
november 2017 by nhaliday
Fitting a Structural Equation Model
seems rather unrigorous: nonlinear optimization, possibility of nonconvergence, doesn't even mention local vs. global optimality...
pdf  slides  lectures  acm  stats  hypothesis-testing  graphs  graphical-models  latent-variables  model-class  optimization  nonlinearity  gotchas  nibble  ML-MAP-E  iteration-recursion  convergence 
november 2017 by nhaliday
Ancient Admixture in Human History
- Patterson, Reich et al., 2012
Population mixture is an important process in biology. We present a suite of methods for learning about population mixtures, implemented in a software package called ADMIXTOOLS, that support formal tests for whether mixture occurred and make it possible to infer proportions and dates of mixture. We also describe the development of a new single nucleotide polymorphism (SNP) array consisting of 629,433 sites with clearly documented ascertainment that was specifically designed for population genetic analyses and that we genotyped in 934 individuals from 53 diverse populations. To illustrate the methods, we give a number of examples that provide new insights about the history of human admixture. The most striking finding is a clear signal of admixture into northern Europe, with one ancestral population related to present-day Basques and Sardinians and the other related to present-day populations of northeast Asia and the Americas. This likely reflects a history of admixture between Neolithic migrants and the indigenous Mesolithic population of Europe, consistent with recent analyses of ancient bones from Sweden and the sequencing of the genome of the Tyrolean “Iceman.”
nibble  pdf  study  article  methodology  bio  sapiens  genetics  genomics  population-genetics  migration  gene-flow  software  trees  concept  history  antiquity  europe  roots  gavisti  🌞  bioinformatics  metrics  hypothesis-testing  levers  ideas  libraries  tools  pop-structure 
november 2017 by nhaliday
Karl Pearson and the Chi-squared Test
Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seems to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value*, and then he correctly gives the upper tail area of a χ²₁₂ beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
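
for reference, the same computation in one call via scipy (made-up die-roll counts, not Pearson's 1900 data):

from scipy import stats

observed = [43, 56, 54, 47, 61, 39]             # 300 rolls of a supposedly fair die (hypothetical)
statistic, p_value = stats.chisquare(observed)  # expected counts default to uniform; df = 6 - 1 = 5
print(statistic, p_value)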
nibble  papers  acm  stats  hypothesis-testing  methodology  history  mostly-modern  pre-ww2  old-anglo  giants  science  the-trenches  stories  multi  q-n-a  overflow  explanation  summary  innovation  discovery  distribution  degrees-of-freedom  limits 
october 2017 by nhaliday
Section 10 Chi-squared goodness-of-fit test.
- pf that chi-squared statistic for Pearson's test (multinomial goodness-of-fit) actually has chi-squared distribution asymptotically
- the gotcha: terms Z_j in sum aren't independent
- solution:
- compute the covariance matrix of the terms: E[Z_iZ_j] = -sqrt(p_ip_j) for i ≠ j (and Var(Z_j) = 1 - p_j)
- note that an equivalent way of sampling the Z_j is to take a random standard Gaussian and project onto the plane orthogonal to (sqrt(p_1), sqrt(p_2), ..., sqrt(p_r))
- that is equivalent to just sampling a Gaussian w/ 1 less dimension (hence df=r-1)
QED
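
quick numerical check of the result (toy multinomial with invented probabilities): the Pearson statistic's empirical quantiles match chi-squared with r-1 degrees of freedom:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = np.array([0.1, 0.2, 0.3, 0.4])                 # r = 4 categories, null fully specified
n, reps = 1000, 20_000
counts = rng.multinomial(n, p, size=reps)
T = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)  # Pearson statistic for each replicate
for q in (0.5, 0.9, 0.95):
    print(q, np.quantile(T, q), stats.chi2.ppf(q, df=len(p) - 1))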
pdf  nibble  lecture-notes  mit  stats  hypothesis-testing  acm  probability  methodology  proofs  iidness  distribution  limits  identity  direction  lifts-projections 
october 2017 by nhaliday
Immigrants and Everest, Bryan Caplan | EconLog | Library of Economics and Liberty
Immigrants use less welfare than natives, holding income constant. Immigrants are far less likely to be in jail than natives, holding high school graduation constant.* On the surface, these seem like striking results. But I've heard a couple of smart people [Garett Jones] demur with an old statistics joke: "Controlling for barometric pressure, Mount Everest has the same altitude as the Dead Sea." Sometimes controls conceal the truth rather than laying it bare.
https://twitter.com/GarettJones/status/897153018503852033
https://archive.is/9k2Ww
org:econlib  econotariat  cracker-econ  garett-jones  migration  meta:rhetoric  propaganda  crime  criminology  causation  endo-exo  regression  spearhead  aphorism  hypothesis-testing  twitter  social  discussion  pic  quotes  gotchas  multi  backup  endogenous-exogenous 
august 2017 by nhaliday
Analysis of variance - Wikipedia
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as "variation" among and between groups), developed by statistician and evolutionary biologist Ronald Fisher. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests, but is more conservative (results in less type I error) and is therefore suited to a wide range of practical problems.

good pic: https://en.wikipedia.org/wiki/Analysis_of_variance#Motivating_example

tutorial by Gelman: http://www.stat.columbia.edu/~gelman/research/published/econanova3.pdf

so one way to think of partitioning the variance:
y_ij = alpha_i + beta_j + eps_ij
Var(y_ij) = Var(alpha_i) + Var(beta_j) + 2 Cov(alpha_i, beta_j) + Var(eps_ij)
and alpha_i, beta_j are independent, so Cov(alpha_i, beta_j) = 0

can you make this work w/ interaction effects?
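
toy simulation of the additive decomposition above (invented variance components); the total variance matches the sum of the parts:

import numpy as np

rng = np.random.default_rng(0)
I, J = 2000, 2000
alpha = rng.normal(0, 2.0, size=(I, 1))   # row effects, variance 4
beta = rng.normal(0, 1.0, size=(1, J))    # column effects, variance 1
eps = rng.normal(0, 0.5, size=(I, J))     # noise, variance 0.25
y = alpha + beta + eps                    # no interaction term
print(np.var(y), 4 + 1 + 0.25)            # the two numbers should be close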
data-science  stats  methodology  hypothesis-testing  variance-components  concept  conceptual-vocab  thinking  wiki  reference  nibble  multi  visualization  visual-understanding  pic  pdf  exposition  lecture-notes  gelman  scitariat  tutorial  acm  ground-up  yoga 
july 2017 by nhaliday
Polygenic transmission disequilibrium confirms that common and rare variation act additively to create risk for autism spectrum disorders : Nature Genetics : Nature Research
Autism spectrum disorder (ASD) risk is influenced by common polygenic and de novo variation. We aimed to clarify the influence of polygenic risk for ASD and to identify subgroups of ASD cases, including those with strongly acting de novo variants, in which polygenic risk is relevant. Using a novel approach called the polygenic transmission disequilibrium test and data from 6,454 families with a child with ASD, we show that polygenic risk for ASD, schizophrenia, and greater educational attainment is over-transmitted to children with ASD. These findings hold independent of proband IQ. We find that polygenic variation contributes additively to risk in ASD cases who carry a strongly acting de novo variant. Lastly, we show that elements of polygenic risk are independent and differ in their relationship with phenotype. These results confirm that the genetic influences on ASD are additive and suggest that they create risk through at least partially distinct etiologic pathways.

https://en.wikipedia.org/wiki/Transmission_disequilibrium_test
study  biodet  behavioral-gen  genetics  population-genetics  QTL  missing-heritability  psychiatry  autism  👽  disease  org:nat  🌞  gwern  pdf  piracy  education  multi  methodology  wiki  reference  psychology  cog-psych  genetic-load  genetic-correlation  sib-study  hypothesis-testing  equilibrium  iq  correlation  intricacy  GWAS  causation  endo-exo  endogenous-exogenous 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has just to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. Thus I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?∗: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
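
rough simulation of the placebo-law exercise (AR(1) state-level noise and invented parameters, conventional OLS standard errors with no clustering or collapsing): the nominal 5% test rejects far too often:

import numpy as np

rng = np.random.default_rng(0)
n_states, n_years, rho, n_sims = 50, 20, 0.8, 500

def demean_two_way(a):
    # within transformation for state and year fixed effects (balanced panel)
    return a - a.mean(1, keepdims=True) - a.mean(0, keepdims=True) + a.mean()

rejections = 0
for _ in range(n_sims):
    y = np.zeros((n_states, n_years))              # serially correlated outcomes, NO true effect
    y[:, 0] = rng.normal(size=n_states)
    for t in range(1, n_years):
        y[:, t] = rho * y[:, t - 1] + rng.normal(size=n_states)
    treated = rng.choice(n_states, n_states // 2, replace=False)
    law_year = int(rng.integers(5, 15))            # placebo law "passed" at a random mid-sample year
    D = np.zeros((n_states, n_years))
    D[treated, law_year:] = 1.0
    y_t, D_t = demean_two_way(y), demean_two_way(D)
    beta = (D_t * y_t).sum() / (D_t ** 2).sum()    # DD estimate of the (nonexistent) effect
    resid = y_t - beta * D_t
    dof = n_states * n_years - n_states - n_years  # roughly: cells minus FEs minus treatment dummy
    se_conventional = np.sqrt((resid ** 2).sum() / dof / (D_t ** 2).sum())
    rejections += abs(beta / se_conventional) > 1.96
print("rejection rate with conventional SEs:", rejections / n_sims)   # well above 0.05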

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing simply one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI excludes the OLS estimate.
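
minimal simulated illustration (invented data-generating process) of why just-identified IV can sit far from OLS and be noisy even when the instrument is valid by construction:

import numpy as np

rng = np.random.default_rng(1)
n = 500
u = rng.normal(size=n)                   # unobserved confounder
z = rng.normal(size=n)                   # instrument, exogenous by construction
x = 0.2 * z + u + rng.normal(size=n)     # endogenous regressor; 0.2 = weak-ish first stage
y = 1.0 * x + u + rng.normal(size=n)     # true effect of x on y is 1.0

cxy = np.cov(x, y)
beta_ols = cxy[0, 1] / cxy[0, 0]                    # biased upward by the confounder u
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # just-identified 2SLS = Wald ratio
print(f"OLS: {beta_ols:.2f}   IV: {beta_iv:.2f}   truth: 1.00")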

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated 
june 2017 by nhaliday
Why we should love null results – The 100% CI
https://twitter.com/StuartJRitchie/status/870257682233659392
This is a must-read blog for many reasons, but biggest is: it REALLY matters if a hypothesis is likely to be true.
Strikes me that the areas of psychology with the most absurd hypotheses (ones least likely to be true) *AHEMSOCIALPRIMINGAHEM* are also...
...the ones with extremely small sample sizes. So this already-scary graph from the blogpost becomes all the more terrifying:
scitariat  explanation  science  hypothesis-testing  methodology  null-result  multi  albion  twitter  social  commentary  psychology  social-psych  social-science  meta:science  data  visualization  nitty-gritty  stat-power  priors-posteriors 
june 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
I guess it's a metric

https://en.wikipedia.org/wiki/Explained_variation

http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have on average +0.4 SAT.

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)
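
quick check of the dY = R dX reading (simulated standardized bivariate normal, R = 0.4 to echo the SAT/GPA example); regressing either variable on the other gives slope ~ R:

import numpy as np

rng = np.random.default_rng(0)
R, n = 0.4, 1_000_000
x = rng.normal(size=n)
y = R * x + np.sqrt(1 - R**2) * rng.normal(size=n)     # both standardized, correlation R
print(np.polyfit(x, y, 1)[0], np.polyfit(y, x, 1)[0])  # both ~ 0.4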

https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure  dependence-independence 
may 2017 by nhaliday
'Capital in the Twenty-First Century' by Thomas Piketty, reviewed | New Republic
by Robert Solow (positive)

The data then exhibit a clear pattern. In France and Great Britain, national capital stood fairly steadily at about seven times national income from 1700 to 1910, then fell sharply from 1910 to 1950, presumably as a result of wars and depression, reaching a low of 2.5 in Britain and a bit less than 3 in France. The capital-income ratio then began to climb in both countries, and reached slightly more than 5 in Britain and slightly less than 6 in France by 2010. The trajectory in the United States was slightly different: it started at just above 3 in 1770, climbed to 5 in 1910, fell slightly in 1920, recovered to a high between 5 and 5.5 in 1930, fell to below 4 in 1950, and was back to 4.5 in 2010.

The wealth-income ratio in the United States has always been lower than in Europe. The main reason in the early years was that land values bulked less in the wide open spaces of North America. There was of course much more land, but it was very cheap. Into the twentieth century and onward, however, the lower capital-income ratio in the United States probably reflects the higher level of productivity: a given amount of capital could support a larger production of output than in Europe. It is no surprise that the two world wars caused much less destruction and dissipation of capital in the United States than in Britain and France. The important observation for Piketty’s argument is that, in all three countries, and elsewhere as well, the wealth-income ratio has been increasing since 1950, and is almost back to nineteenth-century levels. He projects this increase to continue into the current century, with weighty consequences that will be discussed as we go on.

...

Now if you multiply the rate of return on capital by the capital-income ratio, you get the share of capital in the national income. For example, if the rate of return is 5 percent a year and the stock of capital is six years worth of national income, income from capital will be 30 percent of national income, and so income from work will be the remaining 70 percent. At last, after all this preparation, we are beginning to talk about inequality, and in two distinct senses. First, we have arrived at the functional distribution of income—the split between income from work and income from wealth. Second, it is always the case that wealth is more highly concentrated among the rich than income from labor (although recent American history looks rather odd in this respect); and this being so, the larger the share of income from wealth, the more unequal the distribution of income among persons is likely to be. It is this inequality across persons that matters most for good or ill in a society.

...

The data are complicated and not easily comparable across time and space, but here is the flavor of Piketty’s summary picture. Capital is indeed very unequally distributed. Currently in the United States, the top 10 percent own about 70 percent of all the capital, half of that belonging to the top 1 percent; the next 40 percent—who compose the “middle class”—own about a quarter of the total (much of that in the form of housing), and the remaining half of the population owns next to nothing, about 5 percent of total wealth. Even that amount of middle-class property ownership is a new phenomenon in history. The typical European country is a little more egalitarian: the top 1 percent own 25 percent of the total capital, and the middle class 35 percent. (A century ago the European middle class owned essentially no wealth at all.) If the ownership of wealth in fact becomes even more concentrated during the rest of the twenty-first century, the outlook is pretty bleak unless you have a taste for oligarchy.

Income from wealth is probably even more concentrated than wealth itself because, as Piketty notes, large blocks of wealth tend to earn a higher return than small ones. Some of this advantage comes from economies of scale, but more may come from the fact that very big investors have access to a wider range of investment opportunities than smaller investors. Income from work is naturally less concentrated than income from wealth. In Piketty’s stylized picture of the United States today, the top 1 percent earns about 12 percent of all labor income, the next 9 percent earn 23 percent, the middle class gets about 40 percent, and the bottom half about a quarter of income from work. Europe is not very different: the top 10 percent collect somewhat less and the other two groups a little more.

You get the picture: modern capitalism is an unequal society, and the rich-get-richer dynamic strongly suggest that it will get more so. But there is one more loose end to tie up, already hinted at, and it has to do with the advent of very high wage incomes. First, here are some facts about the composition of top incomes. About 60 percent of the income of the top 1 percent in the United States today is labor income. Only when you get to the top tenth of 1 percent does income from capital start to predominate. The income of the top hundredth of 1 percent is 70 percent from capital. The story for France is not very different, though the proportion of labor income is a bit higher at every level. Evidently there are some very high wage incomes, as if you didn’t know.

This is a fairly recent development. In the 1960s, the top 1 percent of wage earners collected a little more than 5 percent of all wage incomes. This fraction has risen pretty steadily until nowadays, when the top 1 percent of wage earners receive 10–12 percent of all wages. This time the story is rather different in France. There the share of total wages going to the top percentile was steady at 6 percent until very recently, when it climbed to 7 percent. The recent surge of extreme inequality at the top of the wage distribution may be primarily an American development. Piketty, who with Emmanuel Saez has made a careful study of high-income tax returns in the United States, attributes this to the rise of what he calls “supermanagers.” The very highest income class consists to a substantial extent of top executives of large corporations, with very rich compensation packages. (A disproportionate number of these, but by no means all of them, come from the financial services industry.) With or without stock options, these large pay packages get converted to wealth and future income from wealth. But the fact remains that much of the increased income (and wealth) inequality in the United States is driven by the rise of these supermanagers.

and Deirdre McCloskey (p critical): https://ejpe.org/journal/article/view/170
nice discussion of empirical economics, economic history, market failures and statism, etc., with several bon mots

Piketty’s great splash will undoubtedly bring many young economically interested scholars to devote their lives to the study of the past. That is good, because economic history is one of the few scientifically quantitative branches of economics. In economic history, as in experimental economics and a few other fields, the economists confront the evidence (as they do not for example in most macroeconomics or industrial organization or international trade theory nowadays).

...

Piketty gives a fine example of how to do it. He does not get entangled as so many economists do in the sole empirical tool they are taught, namely, regression analysis on someone else’s “data” (one of the problems is the word data, meaning “things given”: scientists should deal in capta, “things seized”). Therefore he does not commit one of the two sins of modern economics, the use of meaningless “tests” of statistical significance (he occasionally refers to “statistically insignificant” relations between, say, tax rates and growth rates, but I am hoping he does not suppose that a large coefficient is “insignificant” because R. A. Fisher in 1925 said it was). Piketty constructs or uses statistics of aggregate capital and of inequality and then plots them out for inspection, which is what physicists, for example, also do in dealing with their experiments and observations. Nor does he commit the other sin, which is to waste scientific time on existence theorems. Physicists, again, don’t. If we economists are going to persist in physics envy let us at least learn what physicists actually do. Piketty stays close to the facts, and does not, for example, wander into the pointless worlds of non-cooperative game theory, long demolished by experimental economics. He also does not have recourse to non-computable general equilibrium, which never was of use for quantitative economic science, being a branch of philosophy, and a futile one at that. On both points, bravissimo.

...

Since those founding geniuses of classical economics, a market-tested betterment (a locution to be preferred to “capitalism”, with its erroneous implication that capital accumulation, not innovation, is what made us better off) has enormously enriched large parts of a humanity now seven times larger in population than in 1800, and bids fair in the next fifty years or so to enrich everyone on the planet. [Not SSA or MENA...]

...

Then economists, many on the left but some on the right, in quick succession from 1880 to the present—at the same time that market-tested betterment was driving real wages up and up and up—commenced worrying about, to name a few of the pessimisms concerning “capitalism” they discerned: greed, alienation, racial impurity, workers’ lack of bargaining strength, workers’ bad taste in consumption, immigration of lesser breeds, monopoly, unemployment, business cycles, increasing returns, externalities, under-consumption, monopolistic competition, separation of ownership from control, lack of planning, post-War stagnation, investment spillovers, unbalanced growth, dual labor markets, capital insufficiency (William Easterly calls it “capital fundamentalism”), peasant irrationality, capital-market imperfections, public … [more]
news  org:mag  big-peeps  econotariat  economics  books  review  capital  capitalism  inequality  winner-take-all  piketty  wealth  class  labor  mobility  redistribution  growth-econ  rent-seeking  history  mostly-modern  trends  compensation  article  malaise  🎩  the-bones  whiggish-hegelian  cjones-like  multi  mokyr-allen-mccloskey  expert  market-failure  government  broad-econ  cliometrics  aphorism  lens  gallic  clarity  europe  critique  rant  optimism  regularizer  pessimism  ideology  behavioral-econ  authoritarianism  intervention  polanyi-marx  politics  left-wing  absolute-relative  regression-to-mean  legacy  empirical  data-science  econometrics  methodology  hypothesis-testing  physics  iron-age  mediterranean  the-classics  quotes  krugman  world  entrepreneurialism  human-capital  education  supply-demand  plots  manifolds  intersection  markets  evolution  darwinian  giants  old-anglo  egalitarianism-hierarchy  optimate  morality  ethics  envy  stagnation  nl-and-so-can-you  expert-experience  courage  stats  randy-ayndy  reason  intersection-connectedness  detail-architect 
april 2017 by nhaliday
Meta-assessment of bias in science
Science is said to be suffering a reproducibility crisis caused by many biases. How common are these problems, across the wide diversity of research fields? We probed for multiple bias-related patterns in a large random sample of meta-analyses taken from all disciplines. The magnitude of these biases varied widely across fields and was on average relatively small. However, we consistently observed that small, early, highly cited studies published in peer-reviewed journals were likely to overestimate effects. We found little evidence that these biases were related to scientific productivity, and we found no difference between biases in male and female researchers. However, a scientist’s early-career status, isolation, and lack of scientific integrity might be significant risk factors for producing unreliable results.
study  academia  science  meta:science  metabuch  stylized-facts  ioannidis  replication  error  incentives  integrity  trends  social-science  meta-analysis  🔬  hypothesis-testing  effect-size  usa  biases  org:nat  info-dynamics 
march 2017 by nhaliday
The genetics of politics: discovery, challenges, and progress
Figure 1. Summary of relative genetic and environmental influences on political traits.

- heritability increases discontinuously on leaving home
- pretty big range of heritability for different particular traits (party identification is lowest w/ largest shared environment by far)
- overall ideology quite highly heritable
- social trust is surprisingly high compared to other measurements I've seen...
- ethnocentrism quite low (sample-dependent?)
- authoritarianism and traditionalism quite high
- voter turnout quite high

Genes, psychological traits and civic engagement: http://rstb.royalsocietypublishing.org/content/370/1683/20150015
We show an underlying genetic contribution to an index of civic engagement (0.41), as well as for the individual acts of engagement of volunteering for community or public service activities (0.33), regularly contributing to charitable causes (0.28) and voting in elections (0.27). There are closer genetic relationships between donating and the other two activities; volunteering and voting are not genetically correlated. Further, we show that most of the correlation between civic engagement and both positive emotionality and verbal IQ can be attributed to genes that affect both traits.

Are Political Orientations Genetically Transmitted?: http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1006&context=poliscifacpub
TABLE 1. Genetic and Environmental Influences on Political Attitudes: The 28 Individual Wilson–Patterson Items

The origins of party identification and its relationship to political orientations: http://sci-hub.tw/http://www.sciencedirect.com/science/article/pii/S0191886915002470

All models showed a good overall fit (see Table 3). The data indicate that party identification is substantially heritable, with about 50% of the variation in PID attributable to additive genetic effects. Moreover, the results indicate that the non-genetic influences on party identification stem primarily from unique environmental factors rather than shared ones such as growing up in the same family. This too is not consistent with the Michigan model.

Table 3 also indicates that genetic influences explained about 50% of the variance in liberalism–conservatism. This estimate is similar to previous behavior genetic findings on political attitudes (e.g., Alford et al., 2005; Bouchard, 2004; Hatemi et al., 2014; Kandler, Bleidorn, & Riemann, 2012). The remaining variance was again due primarily to nonshared environmental influences. The latter finding indicates that the Michigan hypothesis that partisan social influences affect political orientations may have some merit, although the substantial level of heritability for this variable suggests that genetic effects also play an important role.

...

As Table 4 reveals, the best fitting model indicates that 100% of the genetic variance in PID is held in common with liberalism–conservatism (a²_C/[a²_C + a²_PID] = 1.00). Similarly, 73% of the environmental variation in PID is shared with liberalism–conservatism (e²_C/[e²_C + e²_PID] = .73). All told, only 13% of the total variance in PID cannot be explained by variation in liberalism–conservatism (1 − [a²_C + e²_C] = .13), as illustrated in Fig. 3. Since only a small proportion of the variance in PID cannot be explained by liberalism– conservatism, the findings are consistent with the hypothesis that genetic and environmental factors influence liberalism–conservatism, which in turn affects party identification. However, as discussed below, other causal scenarios cannot be ruled out.

Table 4 and Fig. 3 also show that 55% of the total variance in liberalism–conservatism cannot be accounted for by variance in PID

Fig. 3. Venn diagram mapping the common and specific variance in party
identification and liberalism–conservatism.

intuition for how you can figure out overlap of variance: look at how corr(PID, liberal-conservative) differs between MZ and DZ twin pairs, etc., fit a structural equation model

p_k,i,j = r_A a_k,i,j,p + r_C c_k,i,p + r_E e_k,i,j,p (k=MZ or DZ, i=1..n_k, j=1,2, p=PID or LC value)

c_k,i,j,p = r_{C,p} c'_k,i,p + r_{C,common} c'_k,i,common (ditto)
e_k,i,j,p = r_{E,p} e'_k,i,j,p + r_{E,common} e'_k,i,j,common (ditto)

MZ twins:
a_MZ,i,j,p = r_{A,p} a'_MZ,i,p + r_{A,common} a'_MZ,i,common (i=1..n_k, j=1,2 p=PID or LC value)

DZ twins:
a_DZ,i,j,p = r_{A,p} (1/2 a'_DZ,i,p + 1/2 a'_DZ,i,j,p) + r_{A,common} (1/2 a'_DZ,i,common + 1/2 a'_DZ,i,j,common) (i=1..n_k, j=1,2 p=PID or LC value)

Gaussian distribution for the underlying a', c' and e' variables, maximum likelihood, etc.

see page 9 here: https://pinboard.in/u:nhaliday/b:70f8b5b559a9

basically:
1. calculate population means μ from data (so just numbers)
2. calculate covariance matrix Σ in terms of latent parameters r_A, r_C, etc. (so variable correlations)
3. assume observed values are Gaussian with those parameters μ, Σ
4. maximum likelihood to figure out the parameters r_A, r_C, etc.
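
a cruder moment-based alternative to the ML fit above: Falconer's formulas, which rest on the same MZ/DZ contrast (MZ pairs share all additive genetic variance, DZ pairs half; C is fully shared; E is unique); the twin correlations below are hypothetical:

def falconer_ace(r_mz, r_dz):
    """Rough ACE decomposition from MZ and DZ twin correlations."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic share
    c2 = 2 * r_dz - r_mz     # shared-environment share
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return a2, c2, e2

print(falconer_ace(0.6, 0.35))   # ~ (0.5, 0.1, 0.4)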

A Genetic Basis of Economic Egalitarianism: http://sci-hub.tw/10.1007/s11211-017-0297-y
Our results show that a large portion of the variance in a four-item economic egalitarianism scale can be attributed to genetic factors. At the same time, shared environment, as a socializing factor, has no significant effect. The effect of environment seems to be fully reserved for unique personal experience. Our findings further problematize a long-standing view that social justice attitudes are dominantly determined by socialization.

published in the journal "Social Justice Research" by some Hungarians, lol

various political science findings, w/ a few behavioral genetic, focus on Trump, right-wing populism/authoritarianism, and polarization: http://www.nationalaffairs.com/blog/detail/findings-a-daily-roundup/a-bridge-too-far
pdf  study  org:nat  biodet  politics  values  psychology  social-psych  genetics  variance-components  survey  meta-analysis  environmental-effects  🌞  parenting  replication  candidate-gene  GWAS  anthropology  society  trust  hive-mind  tribalism  authoritarianism  things  sociology  expression-survival  civic  shift  ethnocentrism  spearhead  garett-jones  broad-econ  political-econ  behavioral-gen  biophysical-econ  polisci  stylized-facts  neuro-nitgrit  phalanges  identity-politics  tradition  microfoundations  ideology  multi  genetic-correlation  data  database  twin-study  objektbuch  gender  capitalism  peace-violence  military  labor  communism  migration  civil-liberty  exit-voice  censorship  sex  sexuality  assortative-mating  usa  anglo  comparison  knowledge  coalitions  piracy  correlation  intersection  latent-variables  methodology  stats  models  ML-MAP-E  nibble  explanation  bioinformatics  graphical-models  hypothesis-testing  intersection-connectedness  poll  egalitarianism-hierarchy  envy  inequality  justice  westminster  publishing 
february 2017 by nhaliday
Information Processing: The joy of Turkheimer
In the talk Turkheimer gives the following definition of social science, which emphasizes why it is hard:

Social science is the attempt to explain the causes of complex human behavior when:
- There are a large number of potential causes.
- The potential causes are non-independent.
- Randomized experimentation is not possible.
hsu  scitariat  genetics  genomics  causation  hypothesis-testing  social-science  nonlinearity  iidness  correlation  links  slides  presentation  audio  things  lens  metabuch  thinking  GxE  commentary 
february 2017 by nhaliday
probability - Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? - Cross Validated
The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in 100p% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability p given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself.
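
coverage simulation of the frequentist reading (normal data, invented mu and sigma): the interval-construction procedure brackets the true mean about 95% of the time over repeated experiments, which is a statement about the procedure, not about any single realized interval:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 3.0, 20, 10_000
covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    half = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)   # 95% t-interval half-width
    covered += x.mean() - half <= mu <= x.mean() + half
print(covered / reps)   # ~ 0.95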

http://stats.stackexchange.com/questions/139290/a-psychology-journal-banned-p-values-and-confidence-intervals-is-it-indeed-wise

PS. Note that my question is not about the ban itself; it is about the suggested approach. I am not asking about frequentist vs. Bayesian inference either. The Editorial is pretty negative about Bayesian methods too; so it is essentially about using statistics vs. not using statistics at all.

wut

http://stats.stackexchange.com/questions/6966/why-continue-to-teach-and-use-hypothesis-testing-when-confidence-intervals-are
http://stats.stackexchange.com/questions/2356/are-there-any-examples-where-bayesian-credible-intervals-are-obviously-inferior
http://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval
http://stats.stackexchange.com/questions/6652/what-precisely-is-a-confidence-interval
http://stats.stackexchange.com/questions/1164/why-havent-robust-and-resistant-statistics-replaced-classical-techniques/
http://stats.stackexchange.com/questions/16312/what-is-the-difference-between-confidence-intervals-and-hypothesis-testing
http://stats.stackexchange.com/questions/31679/what-is-the-connection-between-credible-regions-and-bayesian-hypothesis-tests
http://stats.stackexchange.com/questions/11609/clarification-on-interpreting-confidence-intervals
http://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals
q-n-a  overflow  nibble  stats  data-science  science  methodology  concept  confidence  conceptual-vocab  confusion  explanation  thinking  hypothesis-testing  jargon  multi  meta:science  best-practices  error  discussion  bayesian  frequentist  hmm  publishing  intricacy  wut  comparison  motivation  clarity  examples  robust  metabuch  🔬  info-dynamics  reference 
february 2017 by nhaliday
Measurement error and the replication crisis | Science
In a low-noise setting, the theoretical results of Hausman and others correctly show that measurement error will attenuate coefficient estimates. But we can demonstrate with a simple exercise that the opposite occurs in the presence of high noise and selection on statistical significance.
study  org:nat  science  meta:science  stats  signal-noise  gelman  methodology  hypothesis-testing  replication  social-science  error  metabuch  unit  nibble  bounded-cognition  measurement  🔬  info-dynamics 
february 2017 by nhaliday
Simultaneous confidence intervals for multinomial parameters, for small samples, many classes? - Cross Validated
- "Bonferroni approach" is just union bound
- so Pr(|hat p_i - p_i| > ε for any i) <= 2k e^{-ε^2 n} = δ
- ε = sqrt(ln(2k/δ)/n)
- Bonferroni approach should work for case of any dependent Bernoulli r.v.s
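
sketch of the recipe above (keeps the note's slightly loose constant, so the intervals are conservative):

import numpy as np

def simultaneous_multinomial_ci(counts, delta=0.05):
    """Hoeffding bound per cell + union bound over cells: half-width sqrt(ln(2k/delta)/n)."""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    p_hat = counts / n
    eps = np.sqrt(np.log(2 * k / delta) / n)
    return np.clip(p_hat - eps, 0, 1), np.clip(p_hat + eps, 0, 1)

lo, hi = simultaneous_multinomial_ci([18, 30, 12, 40])   # toy counts
print(lo, hi)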
q-n-a  overflow  stats  moments  distribution  acm  hypothesis-testing  nibble  confidence  concentration-of-measure  bonferroni  parametric  synchrony 
february 2017 by nhaliday
Odds ratio - Wikipedia
- (P(y=1|x=1) / P(y=0|x=1)) / (P(y=1|x=0) / P(y=0|x=0))
- when P(y=1|x=0) and P(y=1|x=1) are both small, the odds ratio approximately equals the relative risk = P(y=1|x=1)/P(y=1|x=0)

The two other major ways of quantifying association are the risk ratio ("RR") and the absolute risk reduction ("ARR"). In clinical studies and many other settings, the parameter of greatest interest is often actually the RR, which is determined in a way that is similar to the one just described for the OR, except using probabilities instead of odds. Frequently, however, the available data only allows the computation of the OR; notably, this is so in the case of case-control studies, as explained below. On the other hand, if one of the properties (say, A) is sufficiently rare (the "rare disease assumption"), then the OR of having A given that the individual has B is a good approximation to the corresponding RR (the specification "A given B" is needed because, while the OR treats the two properties symmetrically, the RR and other measures do not).
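
small helper for the definitions above (hypothetical 2x2 counts), showing OR ~ RR when the outcome is rare in both groups:

def odds_ratio_and_risk_ratio(a, b, c, d):
    """a = exposed w/ outcome, b = exposed w/o, c = unexposed w/ outcome, d = unexposed w/o."""
    p1, p0 = a / (a + b), c / (c + d)
    rr = p1 / p0                              # risk ratio
    or_ = (p1 / (1 - p1)) / (p0 / (1 - p0))   # odds ratio, equivalently (a*d)/(b*c)
    return or_, rr

print(odds_ratio_and_risk_ratio(10, 990, 5, 995))   # rare outcome: OR ~ RR ~ 2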
concept  metrics  methodology  science  hypothesis-testing  wiki  reference  stats  effect-size 
february 2017 by nhaliday
interpretation - How to understand degrees of freedom? - Cross Validated
From Wikipedia, there are three interpretations of the degrees of freedom of a statistic:

In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.

Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself (which, in sample variance, is one, since the sample mean is the only intermediate step).

Mathematically, degrees of freedom is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined.

...

This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests.

Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are:

- The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances).
- The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance.
- The F-test (of ratios of estimated variances).
- The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates.

In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it.

...

Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test:

...

This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter ν often referred to as the "degrees of freedom." The standard reasoning about how to determine ν goes like this:

I have k counts. That's k pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal n. That's one relationship. I estimated two (or p, generally) parameters from the data. That's two (or p) additional relationships, giving p+1 total relationships. Presuming they (the parameters) are all (functionally) independent, that leaves only k−p−1 (functionally) independent "degrees of freedom": that's the value to use for ν.

The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question.

...

Things went wrong because I violated two requirements of the Chi-squared test:

1. You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.)
2. You must base that estimate on the counts, not on the actual data! (This is crucial.)

...

The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature.

We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.)
q-n-a  overflow  stats  data-science  concept  jargon  explanation  methodology  things  nibble  degrees-of-freedom  clarity  curiosity  manifolds  dimensionality  ground-up  intricacy  hypothesis-testing  examples  list  ML-MAP-E  gotchas 
january 2017 by nhaliday
D-separation
collider: C in A -> C <- B (both arrows point into C)
A, B d-connected iff some path between them contains no collider; conditioned on Z, a path is blocked by any non-collider that is in Z, or by any collider that is neither in Z nor has a descendant in Z
A, B d-separated conditioned on Z iff every path between them is blocked, i.e. not d-connected conditioned on Z

http://bayes.cs.ucla.edu/BOOK-2K/d-sep.html
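
A small sketch of these rules in code (my own illustration: the toy graph and helper functions are invented, and it brute-forces over simple paths rather than using the linear-time reachability algorithm described at the link):

    import networkx as nx

    def path_is_blocked(G, path, Z):
        """Apply the blocking rules to one path of the DAG G, given conditioning set Z."""
        Z = set(Z)
        for i in range(1, len(path) - 1):
            prev_node, node, next_node = path[i - 1], path[i], path[i + 1]
            collider = G.has_edge(prev_node, node) and G.has_edge(next_node, node)
            if collider:
                # a collider blocks unless it, or one of its descendants, is in Z
                if node not in Z and not (nx.descendants(G, node) & Z):
                    return True
            elif node in Z:
                # a chain or fork node blocks iff it is conditioned on
                return True
        return False

    def d_separated(G, a, b, Z=frozenset()):
        skeleton = G.to_undirected()
        return all(path_is_blocked(G, p, Z) for p in nx.all_simple_paths(skeleton, a, b))

    # toy DAG: A -> C <- B (collider), plus C -> D
    G = nx.DiGraph([("A", "C"), ("B", "C"), ("C", "D")])
    print(d_separated(G, "A", "B", set()))   # True: the collider C blocks the only path
    print(d_separated(G, "A", "B", {"C"}))   # False: conditioning on the collider unblocks it
    print(d_separated(G, "A", "B", {"D"}))   # False: D is a descendant of the collider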
concept  explanation  causation  bayesian  graphical-models  cmu  org:edu  stats  methodology  tutorial  jargon  graphs  hypothesis-testing  confounding  🔬  direct-indirect  philosophy  definition  volo-avolo  multi  org:junk 
january 2017 by nhaliday
Improving Economic Research | askblog
To make a long story short:

1. Economic phenomena are rife with causal density. Theories make predictions assuming “other things equal,” but other things are never equal.

2. When I was a student, the solution was thought to be multiple regression analysis. You entered a bunch of variables into an estimated equation, and in doing so you “controlled for” those variables and thereby created conditions of “other things equal.” However, in 1978, Edward Leamer pointed out that actual practice diverges from theory. The researcher typically undertakes a lot of exploratory data analysis before reporting a final result. This process of exploratory analysis creates a bias toward finding the result desired by the researcher, rather than achieving a scientific ideal of objectivity. (The simulation sketch after this list illustrates the mechanism.)

3. In recent decades, the approach has shifted toward “natural experiments” and laboratory experiments. These suffer from other problems. The experimental population may not be representative. Even if this problem is not present, studies that offer definitive results are more likely to be published but consequently less likely to be replicated.
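
To make point 2 concrete, here is a minimal sketch (my own, not from the post) of how exploratory specification search biases results: generate an outcome unrelated to every candidate regressor, try each regressor in turn, and report only the most significant specification.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, n_candidates, reps = 100, 20, 2000

    best_p = np.empty(reps)
    for r in range(reps):
        y = rng.normal(size=n)                     # outcome with no true relationship to anything
        X = rng.normal(size=(n, n_candidates))     # 20 candidate specifications to search over
        pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(n_candidates)]
        best_p[r] = min(pvals)                     # keep only the best-looking specification

    # Nominal 5% test vs. actual false-positive rate after searching 20 specifications:
    print("share of runs with a 'significant' best spec:", (best_p < 0.05).mean())
    # roughly 1 - 0.95**20 ≈ 0.64, not 0.05

Under these assumptions the nominal 5% error rate inflates to roughly 64%, which is the pre-testing bias Leamer's critique describes.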
econotariat  cracker-econ  study  summary  methodology  economics  causation  social-science  best-practices  academia  hypothesis-testing  thick-thin  density  replication  complex-systems  roots  noise-structure  endo-exo  info-dynamics  natural-experiment  endogenous-exogenous 
january 2017 by nhaliday
Information Processing: What is medicine’s 5 sigma?
I'm not aware of this history you reference, but I am only a recent entrant into this field. On the other hand Ioannidis is both a long time genomics researcher and someone who does meta-research on science, so he should know. He may have even written a paper on this subject -- I seem to recall he had hard numbers on the rate of replication of candidate gene studies and claimed it was in the low percents.

BTW, this result shows that the vaunted intuition of biomedical types about "how things really work" in the human body is worth very little. We are much better off, in my opinion, relying on machine learning methods and brute force statistical power than priors based on, e.g., knowledge of biochemical pathways or cartoon models of cell function. (Even though such things are sometimes deemed sufficient to raise ~$100m in biotech investment!) This situation may change in the future but the record from the first decade of the 21st century is there for any serious scholar of the scientific method to study.

Both Ioannidis and I (through separate and independent analyses) feel that modern genomics is a good example of biomedical science that (now) actually works and produces results that replicate with relatively high confidence. It should be a model for other areas ...
hsu  replication  science  medicine  scitariat  meta:science  evidence-based  ioannidis  video  interview  bio  genomics  lens  methodology  thick-thin  candidate-gene  hypothesis-testing  complex-systems  stat-power  bounded-cognition  postmortem  info-dynamics  stats 
november 2016 by nhaliday