nhaliday + measurement   140

Does left-handedness occur more in certain ethnic groups than others?
Yes. Some Aboriginal tribes in Australia are roughly 70% left-handed, and the rate exceeds 50% in some South American tribes.

The reason is the same in both cases: a recent past of extreme aggression against other tribes. Left-handedness is caused by recessive genes, but it is an advantage in hand-to-hand combat against a right-hander: because right-handedness is genetically dominant, right-handers are the majority in most human populations, so a typical right-hander has trained extensively against other right-handers and lacks experience against a left-hander. Should a particular tribe go through enough periods of war, its proportion of left-handers will naturally rise. But since the enemy tribe's proportion of left-handers rises as well, there comes a point at which the natural fighting advantage dissipates, and the proportion can climb higher only if the tribe continuously finds new, majority-right-handed groups to fight.


So the natural question is: given their advantage in one-on-one combat, why doesn't the percentage grow all the way to 50% or slightly higher? Because there are COSTS associated with being left-handed: our neural wiring apparently favors right-handedness, and this shows up as a reduced life expectancy for lefties. A mathematical model balancing this cost against the frequency-dependent combat benefit was proposed to explain the distribution of left-handedness across societies.
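The cost-benefit equilibrium described above can be sketched with replicator dynamics. This is a toy model, not the one from the cited paper; the combat-advantage and cost parameters are illustrative assumptions.

```python
# Toy frequency-dependent selection model for left-handedness.
# Assumptions (illustrative, not from the cited paper):
#   - a lefty's combat payoff falls as lefties become common: a * (1 - 2p)
#   - lefties pay a fixed fitness cost c (e.g. reduced life expectancy)
# Analytic equilibrium: p* = (1 - c/a) / 2, so more warlike societies
# (larger advantage a relative to cost c) settle at higher frequencies.
def simulate(combat_advantage, fixed_cost, p0=0.01, generations=2000):
    p = p0
    for _ in range(generations):
        w_left = 1.0 + combat_advantage * (1.0 - 2.0 * p) - fixed_cost
        w_right = 1.0
        mean_w = p * w_left + (1.0 - p) * w_right
        p = p * w_left / mean_w  # replicator dynamics update
    return p

p_mild = simulate(combat_advantage=0.10, fixed_cost=0.08)     # p* = 0.10
p_warlike = simulate(combat_advantage=0.40, fixed_cost=0.08)  # p* = 0.40
print(p_mild, p_warlike)
```

The fixed point matches the intuition in the text: the lefty advantage erodes as lefties become common, so the equilibrium frequency stops short of 50% whenever the cost is positive.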



Further, the average rate of left-handedness in humans (~10%) appears not to have changed in thousands of years (judging by the hand paintings in caves)

Frequency-dependent maintenance of left handedness in humans.

Handedness frequency over more than 10,000 years

[ed.: Compare with Julius Evola's "left-hand path".]
q-n-a  qra  trivia  cocktail  farmers-and-foragers  history  antiquity  race  demographics  bio  EEA  evolution  context  peace-violence  war  ecology  EGT  unintended-consequences  game-theory  equilibrium  anthropology  cultural-dynamics  sapiens  data  database  trends  cost-benefit  strategy  time-series  art  archaeology  measurement  oscillation  pro-rata  iteration-recursion  gender  male-variability  cliometrics  roots  explanation  explanans  correlation  causation  branches 
july 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.


If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”


One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.


Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.


The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
Sex, Drugs, and Bitcoin: How Much Illegal Activity Is Financed Through Cryptocurrencies? by Sean Foley, Jonathan R. Karlsen, Tālis J. Putniņš :: SSRN
Cryptocurrencies are among the largest unregulated markets in the world. We find that approximately one-quarter of bitcoin users and one-half of bitcoin transactions are associated with illegal activity. Around $72 billion of illegal activity per year involves bitcoin, which is close to the scale of the US and European markets for illegal drugs. The illegal share of bitcoin activity declines with mainstream interest in bitcoin and with the emergence of more opaque cryptocurrencies. The techniques developed in this paper have applications in cryptocurrency surveillance. Our findings suggest that cryptocurrencies are transforming the way black markets operate by enabling “black e-commerce.”
study  economics  law  leviathan  bitcoin  cryptocurrency  crypto  impetus  scale  markets  civil-liberty  randy-ayndy  crime  criminology  measurement  estimate  pro-rata  money  monetary-fiscal  crypto-anarchy  drugs  internet  tradecraft  opsec  security 
february 2018 by nhaliday
Frontiers | Can We Validate the Results of Twin Studies? A Census-Based Study on the Heritability of Educational Achievement | Genetics
As for most phenotypes, the amount of variance in educational achievement explained by SNPs is lower than the amount of additive genetic variance estimated in twin studies. Twin-based estimates may however be biased because of self-selection and differences in cognitive ability between twins and the rest of the population. Here we compare twin registry based estimates with a census-based heritability estimate, sampling from the same Dutch birth cohort population and using the same standardized measure for educational achievement. Including important covariates (i.e., sex, migration status, school denomination, SES, and group size), we analyzed 893,127 scores from primary school children from the years 2008–2014. For genetic inference, we used pedigree information to construct an additive genetic relationship matrix. Corrected for the covariates, this resulted in an estimate of 85%, which is even higher than based on twin studies using the same cohort and same measure. We therefore conclude that the genetic variance not tagged by SNPs is not an artifact of the twin method itself.
study  biodet  behavioral-gen  iq  psychometrics  psychology  cog-psych  twin-study  methodology  variance-components  state-of-art  🌞  developmental  age-generation  missing-heritability  biases  measurement  sampling-bias  sib-study 
december 2017 by nhaliday
galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies? - Astronomy Stack Exchange
Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.

The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).
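A minimal numerical sketch of the procedure described above, with synthetic fluxes standing in for real observations (the 25 K input temperature, the wavelengths, and the scale factor are all assumptions for illustration):

```python
import numpy as np

# Fit a Planck curve to IR flux points to recover the dust temperature;
# the dust mass would then follow from M = F_nu * d^2 / (kappa_nu * B_nu(T)).
h, c, k_B = 6.626e-34, 3.0e8, 1.381e-23  # SI constants

def planck_nu(nu, T):
    """Planck spectral radiance B_nu(T) in SI units."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

# Fake "observations": fluxes at three IR wavelengths from a 25 K source,
# scaled by an arbitrary solid angle.
wavelengths = np.array([70e-6, 160e-6, 350e-6])  # metres
nu = c / wavelengths
omega = 1e-12                                    # arbitrary scale factor
observed = omega * planck_nu(nu, 25.0)

# Grid-search the temperature whose re-scaled Planck curve fits best.
best_T, best_err = None, np.inf
for T in np.arange(5.0, 100.0, 0.1):
    model = planck_nu(nu, T)
    scale = observed @ model / (model @ model)   # least-squares amplitude
    err = np.sum((observed - scale * model) ** 2)
    if err < best_err:
        best_T, best_err = T, err

print(f"fitted dust temperature: about {best_T:.1f} K")
```

With only one data point the amplitude-temperature degeneracy mentioned in the answer kicks in: any temperature can fit a single flux by adjusting the scale.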
nibble  q-n-a  overflow  space  measurement  measure  estimate  physics  electromag  visuo  methodology 
december 2017 by nhaliday
How do you measure the mass of a star? (Beginner) - Curious About Astronomy? Ask an Astronomer
Measuring the mass of stars in binary systems is easy. Binary systems are sets of two or more stars in orbit about each other. By measuring the size of the orbit, the stars' orbital speeds, and their orbital periods, we can determine exactly what the masses of the stars are. We can take that knowledge and then apply it to similar stars not in multiple systems.
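The orbital bookkeeping reduces to Kepler's third law: in units of AU, years, and solar masses, M1 + M2 = a³/P². A quick sketch (the Sirius numbers are approximate published values):

```python
def total_mass_solar(a_au, period_yr):
    """Kepler's third law in AU / years / solar masses."""
    return a_au**3 / period_yr**2

# Sanity check with the Earth-Sun "binary": a = 1 AU, P = 1 yr -> 1 M_sun.
earth_sun = total_mass_solar(1.0, 1.0)

# Sirius A+B: a ~ 19.8 AU, P ~ 50.1 yr -> ~3.1 M_sun combined.
sirius_total = total_mass_solar(19.8, 50.1)
print(earth_sun, sirius_total)
```

Splitting the total into individual masses then uses the ratio of each star's distance from the barycenter (M1/M2 = a2/a1).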

We also can easily measure the luminosity and temperature of any star. A plot of luminosity versus temperature for a set of stars is called a Hertzsprung-Russell (H-R) diagram, and it turns out that most stars lie along a thin band in this diagram known as the Main Sequence. Stars arrange themselves by mass on the Main Sequence, with massive stars being hotter and brighter than their small-mass brethren. If a star falls on the Main Sequence, we therefore immediately know its mass.

In addition to these methods, we also have an excellent understanding of how stars work. Our models of stellar structure are excellent predictors of the properties and evolution of stars. As it turns out, the mass of a star determines its life history from day 1, for all times thereafter, not only when the star is on the Main Sequence. So actually, the position of a star on the H-R diagram is a good indicator of its mass, regardless of whether it's on the Main Sequence or not.
nibble  q-n-a  org:junk  org:edu  popsci  space  physics  electromag  measurement  mechanics  gravity  cycles  oscillation  temperature  visuo  plots  correlation  metrics  explanation  measure  methodology 
december 2017 by nhaliday
Is the speed of light really constant?
So what if the speed of light isn’t the same when moving toward or away from us? Are there any observable consequences? Not to the limits of observation so far. We know, for example, that any one-way speed of light is independent of the motion of the light source to 2 parts in a billion. We know it has no effect on the color of the light emitted to a few parts in 10^20. Aspects such as polarization and interference are also indistinguishable from standard relativity. But that’s not surprising, because you don’t need to assume isotropy for relativity to work. In the 1970s, John Winnie and others showed that all the results of relativity could be modeled with anisotropic light so long as the two-way speed was a constant. The “extra” assumption that the speed of light is a uniform constant doesn’t change the physics, but it does make the mathematics much simpler. Since Einstein’s relativity is the simpler of two equivalent models, it’s the model we use. You could argue that it’s the right one citing Occam’s razor, or you could take Newton’s position that anything untestable isn’t worth arguing over.
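The Winnie-style equivalence is easy to verify numerically: parameterize the one-way speeds by Reichenbach's synchrony parameter ε (ε = 1/2 recovers Einstein's convention) and check that the round-trip speed is ε-independent. A sketch:

```python
# One-way speeds under Reichenbach synchronization: c/(2*eps) outbound,
# c/(2*(1-eps)) on the return leg. The two-way speed is their harmonic
# mean, which equals c for every eps -- so round-trip experiments cannot
# distinguish the conventions.
c = 299_792_458.0  # m/s

def two_way_speed(eps, L=1.0):
    c_out = c / (2 * eps)          # one-way speed, outbound leg
    c_back = c / (2 * (1 - eps))   # one-way speed, return leg
    round_trip_time = L / c_out + L / c_back
    return 2 * L / round_trip_time

for eps in (0.3, 0.5, 0.7):
    print(eps, two_way_speed(eps))  # always c
```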

nibble  scitariat  org:bleg  physics  relativity  electromag  speed  invariance  absolute-relative  curiosity  philosophy  direction  gedanken  axioms  definition  models  experiment  space  science  measurement  volo-avolo  synchrony  uniqueness  multi  pdf  piracy  study  article 
november 2017 by nhaliday
general relativity - What if the universe is rotating as a whole? - Physics Stack Exchange
To find out whether the universe is rotating, in principle the most straightforward test is to watch the motion of a gyroscope relative to the distant galaxies. If it rotates at an angular velocity -ω relative to them, then the universe is rotating at angular velocity ω. In practice, we do not have mechanical gyroscopes with small enough random and systematic errors to put a very low limit on ω. However, we can use the entire solar system as a kind of gyroscope. Solar-system observations put a model-independent upper limit of 10^-7 radians/year on the rotation,[Clemence 1957] which is an order of magnitude too lax to rule out the Gödel metric.
nibble  q-n-a  overflow  physics  relativity  gedanken  direction  absolute-relative  big-picture  space  experiment  measurement  volo-avolo 
november 2017 by nhaliday
The Science of Roman History: Biology, Climate, and the Future of the Past (Hardcover and eBook) | Princeton University Press
Forthcoming April 2018

How the latest cutting-edge science offers a fuller picture of life in Rome and antiquity
This groundbreaking book provides the first comprehensive look at how the latest advances in the sciences are transforming our understanding of ancient Roman history. Walter Scheidel brings together leading historians, anthropologists, and geneticists at the cutting edge of their fields, who explore novel types of evidence that enable us to reconstruct the realities of life in the Roman world.

Contributors discuss climate change and its impact on Roman history, and then cover botanical and animal remains, which cast new light on agricultural and dietary practices. They exploit the rich record of human skeletal material--both bones and teeth—which forms a bio-archive that has preserved vital information about health, nutritional status, diet, disease, working conditions, and migration. Complementing this discussion is an in-depth analysis of trends in human body height, a marker of general well-being. This book also assesses the contribution of genetics to our understanding of the past, demonstrating how ancient DNA is used to track infectious diseases, migration, and the spread of livestock and crops, while the DNA of modern populations helps us reconstruct ancient migrations, especially colonization.

Opening a path toward a genuine biohistory of Rome and the wider ancient world, The Science of Roman History offers an accessible introduction to the scientific methods being used in this exciting new area of research, as well as an up-to-date survey of recent findings and a tantalizing glimpse of what the future holds.

Walter Scheidel is the Dickason Professor in the Humanities, Professor of Classics and History, and a Kennedy-Grossman Fellow in Human Biology at Stanford University. He is the author or editor of seventeen previous books, including The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton).
books  draft  todo  broad-econ  economics  anthropology  genetics  genomics  aDNA  measurement  volo-avolo  environment  climate-change  archaeology  history  iron-age  mediterranean  the-classics  demographics  health  embodied  labor  migration  walter-scheidel  agriculture  frontier  malthus  letters  gibbon  traces 
november 2017 by nhaliday
Global Evidence on Economic Preferences
- Benjamin Enke et al

This paper studies the global variation in economic preferences. For this purpose, we present the Global Preference Survey (GPS), an experimentally validated survey dataset of time preference, risk preference, positive and negative reciprocity, altruism, and trust from 80,000 individuals in 76 countries. The data reveal substantial heterogeneity in preferences across countries, but even larger within-country heterogeneity. Across individuals, preferences vary with age, gender, and cognitive ability, yet these relationships appear partly country specific. At the country level, the data reveal correlations between preferences and bio-geographic and cultural variables such as agricultural suitability, language structure, and religion. Variation in preferences is also correlated with economic outcomes and behaviors. Within countries and subnational regions, preferences are linked to individual savings decisions, labor market choices, and prosocial behaviors. Across countries, preferences vary with aggregate outcomes ranging from per capita income, to entrepreneurial activities, to the frequency of armed conflicts.


This paper explores these questions by making use of the core features of the GPS: (i) coverage of 76 countries that represent approximately 90 percent of the world population; (ii) representative population samples within each country for a total of 80,000 respondents, (iii) measures designed to capture time preference, risk preference, altruism, positive reciprocity, negative reciprocity, and trust, based on an ex ante experimental validation procedure (Falk et al., 2016) as well as pre-tests in culturally heterogeneous countries, (iv) standardized elicitation and translation techniques through the pre-existing infrastructure of a global polling institute, Gallup. Upon publication, the data will be made publicly available online. The data on individual preferences are complemented by a comprehensive set of covariates provided by the Gallup World Poll 2012.


The GPS preference measures are based on twelve survey items, which were selected in an initial survey validation study (see Falk et al., 2016, for details). The validation procedure involved conducting multiple incentivized choice experiments for each preference, and testing the relative abilities of a wide range of different question wordings and formats to predict behavior in these choice experiments. The particular items used to construct the GPS preference measures were selected based on optimal performance out of menus of alternative items (for details see Falk et al., 2016). Experiments provide a valuable benchmark for selecting survey items, because they can approximate the ideal choice situations, specified in economic theory, in which individuals make choices in controlled decision contexts. Experimental measures are very costly, however, to implement in a globally representative sample, whereas survey measures are much less costly.⁴ Selecting survey measures that can stand in for incentivized revealed preference measures leverages the strengths of both approaches.

The Preference Survey Module: A Validated Instrument for Measuring Risk, Time, and Social Preferences: http://ftp.iza.org/dp9674.pdf

Table 1: Survey items of the GPS

Figure 1: World maps of patience, risk taking, and positive reciprocity.
Figure 2: World maps of negative reciprocity, altruism, and trust.

Figure 3: Gender coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting gender coefficients as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.

Figure 4: Cognitive ability coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting coefficients on subjective math skills as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.
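A sketch of the regression behind Figures 3 and 4, run on synthetic data (the variable names and the 0.3 SD gender gap are assumptions for illustration, not the GPS codebook):

```python
import numpy as np

# Within one "country": z-score the preference, then regress it on gender,
# age, age^2, and subjective math skills via ordinary least squares, and
# keep the gender coefficient -- the quantity plotted in Figure 3.
rng = np.random.default_rng(0)

def gender_coefficient(pref, female, age, math_skill):
    z = (pref - pref.mean()) / pref.std()          # within-country z-score
    X = np.column_stack([np.ones_like(age), female, age, age**2, math_skill])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta[1]                                  # coefficient on gender

# Synthetic country: women constructed to be ~0.3 SD more patient.
n = 5000
female = rng.integers(0, 2, n).astype(float)
age = rng.uniform(18, 80, n)
math_skill = rng.normal(0, 1, n)
patience = 0.3 * female + 0.01 * age + rng.normal(0, 1, n)

coef = gender_coefficient(patience, female, age, math_skill)
print(round(coef, 2))
```

Standardizing within country first is what makes the coefficients comparable across countries with different raw-score dispersions.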

Figure 5: Age profiles by OECD membership.

Table 6: Pairwise correlations between preferences and geographic and cultural variables

Figure 10: Distribution of preferences at individual level.
Figure 11: Distribution of preferences at country level.

interesting digression:
D Discussion of Measurement Error and Within- versus Between-Country Variation
study  dataset  data  database  let-me-see  economics  growth-econ  broad-econ  microfoundations  anthropology  cultural-dynamics  culture  psychology  behavioral-econ  values  🎩  pdf  piracy  world  spearhead  general-survey  poll  group-level  within-group  variance-components  🌞  correlation  demographics  age-generation  gender  iq  cooperate-defect  time-preference  temperance  labor  wealth  wealth-of-nations  entrepreneurialism  outcome-risk  altruism  trust  patience  developing-world  maps  visualization  n-factor  things  phalanges  personality  regression  gender-diff  pop-diff  geography  usa  canada  anglo  europe  the-great-west-whale  nordic  anglosphere  MENA  africa  china  asia  sinosphere  latin-america  self-report  hive-mind  GT-101  realness  long-short-run  endo-exo  signal-noise  communism  japan  korea  methodology  measurement  org:ngo  white-paper  endogenous-exogenous  within-without  hari-seldon 
october 2017 by nhaliday
Frontier Culture: The Roots and Persistence of “Rugged Individualism” in the United States∗
In a classic 1893 essay, Frederick Jackson Turner argued that the American frontier promoted individualism. We revisit the Frontier Thesis and examine its relevance at the subnational level. Using Census data and GIS techniques, we track the frontier throughout the 1790–1890 period and construct a novel, county-level measure of historical frontier experience. We document the distinctive demographics of frontier locations during this period—disproportionately male, prime-age adult, foreign-born, and illiterate—as well as their higher levels of individualism, proxied by the share of infrequent names among children. Many decades after the closing of the frontier, counties with longer historical frontier experience exhibit more prevalent individualism and opposition to redistribution and regulation. We take several steps towards a causal interpretation, including an instrumental variables approach that exploits variation in the speed of westward expansion induced by prior national immigration inflows. Using linked historical Census data, we identify mechanisms giving rise to a persistent frontier culture. Greater individualism on the frontier was not driven solely by selective migration, suggesting that frontier conditions may have shaped behavior and values. We provide evidence suggesting that rugged individualism may be rooted in its adaptive advantage on the frontier and the opportunities for upward mobility through effort.


The Origins of Cultural Divergence: Evidence from a Developing Country.: http://economics.handels.gu.se/digitalAssets/1643/1643769_37.-hoang-anh-ho-ncde-2017-june.pdf
Cultural norms diverge substantially across societies, often even within the same country. In this paper, we test the voluntary settlement hypothesis, proposing that individualistic people tend to self-select into migrating out of reach from collectivist states towards the periphery and that such patterns of historical migration are reflected even in the contemporary distribution of norms. For more than one thousand years during the first millennium CE, northern Vietnam was under an exogenously imposed Chinese rule. From the eleventh to the eighteenth centuries, ancient Vietnam gradually expanded its territory through various waves of southward conquest. We demonstrate that areas being annexed earlier into ancient Vietnam are nowadays more (less) prone to collectivist (individualist) culture. We argue that the southward out-migration of individualist people was the main mechanism behind this finding. The result is consistent across various measures obtained from an extensive household survey and robust to various control variables as well as to different empirical specifications, including an instrumental variable estimation. A lab-in-the-field experiment also confirms the finding.
pdf  study  economics  broad-econ  cliometrics  path-dependence  evidence-based  empirical  stylized-facts  values  culture  cultural-dynamics  anthropology  usa  frontier  allodium  the-west  correlation  individualism-collectivism  measurement  politics  ideology  expression-survival  redistribution  regulation  political-econ  government  migration  history  early-modern  pre-ww2  things  phalanges  🎩  selection  polisci  roots  multi  twitter  social  commentary  scitariat  backup  gnon  growth-econ  medieval  china  asia  developing-world  shift  natural-experiment  endo-exo  endogenous-exogenous  hari-seldon 
october 2017 by nhaliday
Any particular gene has a specific location (its "locus") on a particular chromosome. For any two genes (or loci) alpha and beta, we can ask "What is the recombination frequency between them?" If the genes are on different chromosomes, the answer is 50% (independent assortment). If the two genes are on the same chromosome, the recombination frequency will be somewhere in the range from 0 to 50%. The "map unit" (1 cM) is the genetic map distance that corresponds to a recombination frequency of 1%. In large chromosomes, the cumulative map distance may be much greater than 50cM, but the maximum recombination frequency is 50%. Why? In large chromosomes, there is enough length to allow for multiple cross-overs, so we have to ask what result we expect for random multiple cross-overs.

1. How is it that random multiple cross-overs give the same result as independent assortment?

Figure 5.12 shows how the various double cross-over possibilities add up, resulting in gamete genotype percentages that are indistinguishable from independent assortment (50% parental type, 50% non-parental type). This is a very important figure. It provides the explanation for why genes that are far apart on a very large chromosome sort out in crosses just as if they were on separate chromosomes.

2. Is there a way to measure how close together two crossovers can occur involving the same two chromatids? That is, how could we measure whether there is spatial "interference"?

Figure 5.13 shows how a measurement of the gamete frequencies resulting from a "three point cross" can answer this question. If we would get a "lower than expected" occurrence of recombinant genotypes aCb and AcB, it would suggest that there is some hindrance to the two cross-overs occurring this close together. Crosses of this type in Drosophila have shown that, in this organism, double cross-overs do not occur at distances of less than about 10 cM between the two cross-over sites. ( Textbook, page 196. )

3. How does all of this lead to the "mapping function", the mathematical (graphical) relation between the observed recombination frequency (percent non-parental gametes) and the cumulative genetic distance in map units?

Figure 5.14 shows the result for the two extremes of "complete interference" and "no interference". The situation for real chromosomes in real organisms is somewhere between these extremes, such as the curve labelled "interference decreasing with distance".
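The "no interference" extreme in Figure 5.14 has a closed form, Haldane's mapping function, which assumes crossovers are Poisson-distributed along the chromosome. A sketch showing the saturation at 50%:

```python
import math

# Haldane's mapping function: with map distance d in centimorgans and
# crossovers Poisson-distributed, the observed recombination frequency is
# r = 0.5 * (1 - exp(-2*d/100)). It rises roughly linearly (r ~ d/100)
# for tightly linked loci and saturates at 50% for distant ones --
# i.e., independent assortment, as in Figure 5.12.
def haldane_recomb_freq(d_cM):
    return 0.5 * (1.0 - math.exp(-2.0 * d_cM / 100.0))

for d in (1, 10, 50, 100, 300):
    print(d, round(haldane_recomb_freq(d), 4))
```

The "complete interference" extreme would instead be the straight line r = min(d/100, 0.5); real organisms, as the text notes, fall between the two curves.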
org:junk  org:edu  explanation  faq  nibble  genetics  genomics  bio  ground-up  magnitude  data  flux-stasis  homo-hetero  measure  orders  metric-space  limits  measurement 
october 2017 by nhaliday
Tax Evasion and Inequality
This paper attempts to estimate the size and distribution of tax evasion in rich countries. We combine stratified random audits—the key source used to study tax evasion so far—with new micro-data leaked from two large offshore financial institutions, HSBC Switzerland (“Swiss leaks”) and Mossack Fonseca (“Panama Papers”). We match these data to population-wide wealth records in Norway, Sweden, and Denmark. We find that tax evasion rises sharply with wealth, a phenomenon that random audits fail to capture. On average about 3% of personal taxes are evaded in Scandinavia, but this figure rises to about 30% in the top 0.01% of the wealth distribution, a group that includes households with more than $40 million in net wealth. A simple model of the supply of tax evasion services can explain why evasion rises steeply with wealth. Taking tax evasion into account increases the rise in inequality seen in tax data since the 1970s markedly, highlighting the need to move beyond tax data to capture income and wealth at the top, even in countries where tax compliance is generally high. We also find that after reducing tax evasion—by using tax amnesties—tax evaders do not legally avoid taxes more. This result suggests that fighting tax evasion can be an effective way to collect more tax revenue from the ultra-wealthy.

Figure 1

America’s unreported economy: measuring the size, growth and determinants of income tax evasion in the U.S.: https://link.springer.com/article/10.1007/s10611-011-9346-x
This study empirically investigates the extent of noncompliance with the tax code and examines the determinants of federal income tax evasion in the U.S. Employing a refined version of Feige’s (Staff Papers, International Monetary Fund 33(4):768–881, 1986, 1989) General Currency Ratio (GCR) model to estimate a time series of unreported income as our measure of tax evasion, we find that 18–23% of total reportable income may not properly be reported to the IRS. This gives rise to a 2009 “tax gap” in the range of $390–$540 billion. As regards the determinants of tax noncompliance, we find that federal income tax evasion is an increasing function of the average effective federal income tax rate, the unemployment rate, the nominal interest rate, and per capita real GDP, and a decreasing function of the IRS audit rate. Despite important refinements of the traditional currency ratio approach for estimating the aggregate size and growth of unreported economies, we conclude that the sensitivity of the results to different benchmarks, imperfect data sources and alternative specifying assumptions precludes obtaining results of sufficient accuracy and reliability to serve as effective policy guides.
pdf  study  economics  micro  evidence-based  data  europe  nordic  scale  class  compensation  money  monetary-fiscal  political-econ  redistribution  taxes  madisonian  inequality  history  mostly-modern  natural-experiment  empirical  🎩  cocktail  correlation  models  supply-demand  GT-101  crooked  elite  vampire-squid  nationalism-globalism  multi  pro-rata  usa  time-series  trends  world-war  cold-war  government  todo  planning  long-term  trivia  law  crime  criminology  estimate  speculation  measurement  labor  macro  econ-metrics  wealth  stock-flow  time  density  criminal-justice  frequency  dark-arts  traces  evidence 
october 2017 by nhaliday
Does Learning to Read Improve Intelligence? A Longitudinal Multivariate Analysis in Identical Twins From Age 7 to 16
Stuart Ritchie, Bates, Plomin

SEM: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4354297/figure/fig03/

The variance explained by each path in the diagrams included here can be calculated by squaring its path weight. To take one example, reading differences at age 12 in the model shown in Figure 3 explain 7% of intelligence differences at age 16 (.26²). However, since our measures are of differences, they are likely to include substantial amounts of noise: Measurement error may produce spurious differences. To remove this error variance, we can take an estimate of the reliability of the measures (generally high, since our measures are normed, standardized tests), which indicates the variance expected purely by the reliability of the measure, and subtract it from the observed variance between twins in our sample. Correcting for reliability in this way, the effect size estimates are somewhat larger; to take the above example, the reliability-corrected effect size of age 12 reading differences on age 16 intelligence differences is around 13% of the “signal” variance. It should be noted that the age 12 reading differences themselves are influenced by many previous paths from both reading and intelligence, as illustrated in Figure 3.
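The squaring step is easy to verify directly; a minimal check using the .26 path weight quoted above:

```python
# Standardized path weight squared = proportion of variance explained.
path_weight = 0.26
variance_explained = path_weight ** 2
print(round(variance_explained, 3))  # 0.068, i.e. about 7%
```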


The present study provided compelling evidence that improvements in reading ability, themselves caused purely by the nonshared environment, may result in improvements in both verbal and nonverbal cognitive ability, and may thus be a factor increasing cognitive diversity within families (Plomin, 2011). These associations are present at least as early as age 7, and are not—to the extent we were able to test this possibility—driven by differences in reading exposure. Since reading is a potentially remediable ability, these findings have implications for reading instruction: Early remediation of reading problems might not only aid in the growth of literacy, but may also improve more general cognitive abilities that are of critical importance across the life span.

Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change: http://sci-hub.tw/10.1111/cdev.12669
Results from a state–trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development.
study  albion  scitariat  spearhead  psychology  cog-psych  psychometrics  iq  intelligence  eden  language  psych-architecture  longitudinal  twin-study  developmental  environmental-effects  studying  🌞  retrofit  signal-noise  intervention  causation  graphs  graphical-models  flexibility  britain  neuro-nitgrit  effect-size  variance-components  measurement  multi  sequential  time  composition-decomposition  biodet  behavioral-gen  direct-indirect  systematic-ad-hoc  debate  hmm  pdf  piracy  flux-stasis 
september 2017 by nhaliday
Caught in the act | West Hunter
The fossil record is sparse. Let me try to explain that. We have at most a few hundred Neanderthal skeletons, most in pretty poor shape. How many Neanderthals ever lived? I think their population varied in size quite a bit – lowest during glacial maxima, probably highest in interglacials. Their degree of genetic diversity suggests an effective population size of ~1000, but that would be dominated by the low points (harmonic average). So let’s say 50,000 on average, over their whole range (Europe, central Asia, the Levant, perhaps more). Say they were around for 300,000 years, with a generation time of 30 years – 10,000 generations, for a total of five hundred million Neanderthals over all time. So one in a million Neanderthals ends up in a museum: one every 20 generations. Low time resolution!
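The post's arithmetic can be reproduced directly (Python; the inputs are Cochran's own round figures):

```python
# Back-of-envelope from the post, using its stated round numbers.
avg_population = 50_000        # average across Europe, central Asia, the Levant
years = 300_000
generation_time = 30
generations = years // generation_time            # 10,000 generations
total_ever_lived = avg_population * generations   # five hundred million
skeletons = 500                                   # "a few hundred", rounded up

print(total_ever_lived)                 # 500000000
print(total_ever_lived // skeletons)    # 1000000 -> one in a million in a museum
print(generations // skeletons)         # 20 -> one museum specimen per 20 generations
```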

So if anatomically modern humans rapidly wiped out Neanderthals, we probably couldn’t tell. In much the same way, you don’t expect to find the remains of many dinosaurs killed by the Cretaceous meteor impact (at most one millionth of one generation, right?), or of Columbian mammoths killed by a wave of Amerindian hunters. Sometimes invaders leave a bigger footprint: a bunch of cities burning down with no rebuilding tells you something. But even when you know that population A completely replaced population B, it can be hard to prove just how it happened. After all, population A could have all committed suicide just before B showed up. Stranger things have happened – but not often.
west-hunter  scitariat  discussion  ideas  data  objektbuch  scale  magnitude  estimate  population  sapiens  archaics  archaeology  pro-rata  history  antiquity  methodology  volo-avolo  measurement  pop-structure  density  time  frequency  apollonian-dionysian  traces  evidence 
september 2017 by nhaliday
Gimbal lock - Wikipedia
Gimbal lock is the loss of one degree of freedom in a three-dimensional, three-gimbal mechanism that occurs when the axes of two of the three gimbals are driven into a parallel configuration, "locking" the system into rotation in a degenerate two-dimensional space.

The word lock is misleading: no gimbal is restrained. All three gimbals can still rotate freely about their respective axes of suspension. Nevertheless, because of the parallel orientation of two of the gimbals' axes there is no gimbal available to accommodate rotation along one axis.

Now this is where most people stop thinking about the issue and move on with their life. They just conclude that Euler angles are somehow broken. This is also where a lot of misunderstandings happen so it's worth investigating the matter slightly further than what causes gimbal lock.

It is important to understand that this is only problematic if you interpolate in Euler angles! In a real physical gimbal this is given - you have no other choice. In computer graphics you have many other choices, from normalized matrix, axis angle or quaternion interpolation. Gimbal lock has a much more dramatic implication to designing control systems than it has to 3d graphics. Which is why a mechanical engineer for example will have a very different take on gimbal locking.

You don't have to give up using Euler angles to get rid of gimbal locking, just stop interpolating values in Euler angles. Of course, this means that you can now no longer drive a rotation by doing direct manipulation of one of the channels. But as long as you key the 3 angles simultaneously you have no problems and you can internally convert your interpolation target to something that has less problems.

Using Euler angles is just simply more intuitive to think in most cases. And indeed Euler never claimed it was good for interpolating but just that it can model all possible space orientations. So Euler angles are just fine for setting orientations like they were meant to do. Also incidentally Euler angles have the benefit of being able to model multi turn rotations which will not happen sanely for the other representations.
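The collapse of one degree of freedom can be demonstrated numerically. A minimal sketch, assuming the intrinsic z-y-x (yaw-pitch-roll) convention — one common choice, not the only one: at pitch = 90°, two Euler triples that differ by the same offset in yaw and roll produce the identical rotation matrix.

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(yaw, pitch, roll):
    # intrinsic z-y-x: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return matmul(rz(yaw), matmul(ry(pitch), rx(roll)))

def close(a, b, tol=1e-9):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(3) for j in range(3))

half_pi = math.pi / 2
# At pitch = 90 deg, adding the same offset to yaw and roll gives the
# same rotation: yaw and roll axes have become parallel (gimbal lock).
r1 = euler_zyx(0.3, half_pi, 0.5)
r2 = euler_zyx(0.3 + 0.7, half_pi, 0.5 + 0.7)
print(close(r1, r2))  # True

# Away from pitch = 90 deg, different triples mean different rotations.
r3 = euler_zyx(0.3, 0.4, 0.5)
r4 = euler_zyx(1.0, 0.4, 1.2)
print(close(r3, r4))  # False
```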
nibble  dirty-hands  physics  mechanics  robotics  degrees-of-freedom  measurement  gotchas  volo-avolo  duplication  wiki  reference  multi  q-n-a  stackex  graphics  spatial  direction  dimensionality  sky 
september 2017 by nhaliday
During the Renaissance, the focus, especially in the arts, was on representing as accurately as possible the real world whether on a 2 dimensional surface or a solid such as marble or granite. This required two things. The first was new methods for drawing or painting, e.g., perspective. The second, relevant to this topic, was careful observation.

With the spread of cannon in warfare, the study of projectile motion had taken on greater importance, and now, with more careful observation and more accurate representation, came the realization that projectiles did not move the way Aristotle and his followers had said they did: the path of a projectile did not consist of two consecutive straight line components but was instead a smooth curve. [1]

Now someone needed to come up with a method to determine if there was a special curve a projectile followed. But measuring the path of a projectile was not easy.

Using an inclined plane, Galileo had performed experiments on uniformly accelerated motion, and he now used the same apparatus to study projectile motion. He placed an inclined plane on a table and provided it with a curved piece at the bottom which deflected an inked bronze ball into a horizontal direction. The ball thus accelerated rolled over the table-top with uniform motion and then fell off the edge of the table. Where it hit the floor, it left a small mark. The mark allowed the horizontal and vertical distances traveled by the ball to be measured. [2]

By varying the ball's horizontal velocity and vertical drop, Galileo was able to determine that the path of a projectile is parabolic.
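The composition Galileo tested — uniform horizontal motion plus uniformly accelerated fall — can be sketched numerically (the modern value of g is an anachronistic convenience, used only for illustration):

```python
import math

g = 9.81  # m/s^2, modern value

def landing_x(v, drop):
    # uniform horizontal motion + uniformly accelerated fall
    t = math.sqrt(2 * drop / g)   # time to fall through `drop`
    return v * t                  # horizontal distance covered in that time

# For a fixed table height, range scales linearly with launch speed...
x1 = landing_x(1.0, 0.8)
x2 = landing_x(2.0, 0.8)
print(abs(x2 / x1 - 2.0) < 1e-12)  # True

# ...and along the flight, drop grows as the square of horizontal
# distance: y / x^2 = g / (2 v^2), a constant -- i.e. a parabola.
v = 1.5
ts = [0.05 * i for i in range(1, 6)]
xs = [v * t for t in ts]
ys = [0.5 * g * t * t for t in ts]
ratios = [y / (x * x) for x, y in zip(xs, ys)]
print(all(abs(r - ratios[0]) < 1e-12 for r in ratios))  # True
```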


Galileo's Discovery of the Parabolic Trajectory: http://www.jstor.org/stable/24949756

Galileo's Experimental Confirmation of Horizontal Inertia: Unpublished Manuscripts (Galileo Gleanings XXII): https://sci-hub.tw/https://www.jstor.org/stable/229718
- Drake Stillman

MORE THAN A DECADE HAS ELAPSED since Thomas Settle published a classic paper in which Galileo's well-known statements about his experiments on inclined planes were completely vindicated.1 Settle's paper replied to an earlier attempt by Alexandre Koyre to show that Galileo could not have obtained the results he claimed in his Two New Sciences by actual observations using the equipment there described. The practical ineffectiveness of Settle's painstaking repetition of the experiments in altering the opinion of historians of science is only too evident. Koyre's paper was reprinted years later in book form without so much as a note by the editors concerning Settle's refutation of its thesis.2 And the general literature continues to belittle the role of experiment in Galileo's physics.

More recently James MacLachlan has repeated and confirmed a different experiment reported by Galileo-one which has always seemed highly exaggerated and which was also rejected by Koyre with withering sarcasm.3 In this case, however, it was accuracy of observation rather than precision of experimental data that was in question. Until now, nothing has been produced to demonstrate Galileo's skill in the design and the accurate execution of physical experiment in the modern sense.

Part of a page of Galileo's unpublished manuscript notes, written late in 1608, corroborating his inertial assumption and leading directly to his discovery of the parabolic trajectory. (Folio 116v, Vol. 72, MSS Galileiani; courtesy of the Biblioteca Nazionale di Firenze.)


(The same skeptical historians, however, believe that to show that Galileo could have used the medieval mean-speed theorem suffices to prove that he did use it, though it is found nowhere in his published or unpublished writings.)


Now, it happens that among Galileo's manuscript notes on motion there are many pages that were not published by Favaro, since they contained only calculations or diagrams without attendant propositions or explanations. Some pages that were published had first undergone considerable editing, making it difficult if not impossible to discern their full significance from their printed form. This unpublished material includes at least one group of notes which cannot satisfactorily be accounted for except as representing a series of experiments designed to test a fundamental assumption, which led to a new, important discovery. In these documents precise empirical data are given numerically, comparisons are made with calculated values derived from theory, a source of discrepancy from still another expected result is noted, a new experiment is designed to eliminate this, and further empirical data are recorded. The last-named data, although proving to be beyond Galileo's powers of mathematical analysis at the time, when subjected to modern analysis turn out to be remarkably precise. If this does not represent the experimental process in its fully modern sense, it is hard to imagine what standards historians require to be met.

The discovery of these notes confirms the opinion of earlier historians. They read only Galileo's published works, but did so without a preconceived notion of continuity in the history of ideas. The opinion of our more sophisticated colleagues has its sole support in philosophical interpretations that fit with preconceived views of orderly long-term scientific development. To find manuscript evidence that Galileo was at home in the physics laboratory hardly surprises me. I should find it much more astonishing if, by reasoning alone, working only from fourteenth-century theories and conclusions, he had continued along lines so different from those followed by profound philosophers in earlier centuries. It is to be hoped that, warned by these examples, historians will begin to restore the old cautionary clauses in analogous instances in which scholarly opinions are revised without new evidence, simply to fit historical theories.

In what follows, the newly discovered documents are presented in the context of a hypothetical reconstruction of Galileo's thought.


As early as 1590, if we are correct in ascribing Galileo's juvenile De motu to that date, it was his belief that an ideal body resting on an ideal horizontal plane could be set in motion by a force smaller than any previously assigned force, however small. By "horizontal plane" he meant a surface concentric with the earth but which for reasonable distances would be indistinguishable from a level plane. Galileo noted at the time that experiment did not confirm this belief that the body could be set in motion by a vanishingly small force, and he attributed the failure to friction, pressure, the imperfection of material surfaces and spheres, and the departure of level planes from concentricity with the earth.5

It followed from this belief that under ideal conditions the motion so induced would also be perpetual and uniform. Galileo did not mention these consequences until much later, and it is impossible to say just when he perceived them. They are, however, so evident that it is safe to assume that he saw them almost from the start. They constitute a trivial case of the proposition he seems to have been teaching before 1607-that a mover is required to start motion, but that absence of resistance is then sufficient to account for its continuation.6

In mid-1604, following some investigations of motions along circular arcs and motions of pendulums, Galileo hit upon the law that in free fall the times elapsed from rest are as the smaller distance is to the mean proportional between two distances fallen.7 This gave him the times-squared law as well as the rule of odd numbers for successive distances and speeds in free fall. During the next few years he worked out a large number of theorems relating to motion along inclined planes, later published in the Two New Sciences. He also arrived at the rule that the speed terminating free fall from rest was double the speed of the fall itself. These theorems survive in manuscript notes of the period 1604-1609. (Work during these years can be identified with virtual certainty by the watermarks in the paper used, as I have explained elsewhere.8)
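The mean-proportional rule stated here is the times-squared law in Euclidean dress: t1/t2 = s1/√(s1·s2) = √(s1/s2). A quick numerical check (again with the modern g, which Galileo of course lacked):

```python
import math

g = 9.81  # m/s^2, modern value used only for illustration

def fall_time(s):
    # time to fall distance s from rest: s = g t^2 / 2
    return math.sqrt(2 * s / g)

# "times ... are as the smaller distance is to the mean proportional
# between two distances fallen": t1/t2 = s1 / sqrt(s1*s2)
s1, s2 = 1.0, 4.0
lhs = fall_time(s1) / fall_time(s2)
rhs = s1 / math.sqrt(s1 * s2)
print(abs(lhs - rhs) < 1e-12)  # True

# Rule of odd numbers: distances covered in successive equal time
# intervals stand as 1 : 3 : 5 : 7.
d = [0.5 * g * (t + 1) ** 2 - 0.5 * g * t ** 2 for t in range(4)]
print([round(x / d[0], 6) for x in d])  # [1.0, 3.0, 5.0, 7.0]
```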

In the autumn of 1608, after a summer at Florence, Galileo seems to have interested himself in the question whether the actual slowing of a body moving horizontally followed any particular rule. On folio 117i of the manuscripts just mentioned, the numbers 196, 155, 121, 100 are noted along the horizontal line near the middle of the page (see Fig. 1). I believe that this was the first entry on this leaf, for reasons that will appear later, and that Galileo placed his grooved plane in the level position and recorded distances traversed in equal times along it. Using a metronome, and rolling a light wooden ball about 4 3/4 inches in diameter along a plane with a groove 1 3/4 inches wide, I obtained similar relations over a distance of 6 feet. The figures obtained vary greatly for balls of different materials and weights and for greatly different initial speeds.9 But it suffices for my present purposes that Galileo could have obtained the figures noted by observing the actual deceleration of a ball along a level plane. It should be noted that the watermark on this leaf is like that on folio 116, to which we shall come presently, and it will be seen later that the two sheets are closely connected in time in other ways as well.

The relatively rapid deceleration is obviously related to the contact of ball and groove. Were the ball to roll right off the end of the plane, all resistance to horizontal motion would be virtually removed. If, then, there were any way to have a given ball leave the plane at different speeds of which the ratios were known, Galileo's old idea that horizontal motion would continue uniformly in the absence of resistance could be put to test. His law of free fall made this possible. The ratios of speeds could be controlled by allowing the ball to fall vertically through known heights, at the ends of which it would be deflected horizontally. Falls through given heights … [more]
nibble  org:junk  org:edu  physics  mechanics  gravity  giants  the-trenches  discovery  history  early-modern  europe  mediterranean  the-great-west-whale  frontier  science  empirical  experiment  arms  technology  lived-experience  time  measurement  dirty-hands  iron-age  the-classics  medieval  sequential  wire-guided  error  wiki  reference  people  quantitative-qualitative  multi  pdf  piracy  study  essay  letters  discrete  news  org:mag  org:sci  popsci 
august 2017 by nhaliday
Mainspring - Wikipedia
A mainspring is a spiral torsion spring of metal ribbon—commonly spring steel—used as a power source in mechanical watches, some clocks, and other clockwork mechanisms. Winding the timepiece, by turning a knob or key, stores energy in the mainspring by twisting the spiral tighter. The force of the mainspring then turns the clock's wheels as it unwinds, until the next winding is needed. The adjectives wind-up and spring-powered refer to mechanisms powered by mainsprings, which also include kitchen timers, music boxes, wind-up toys and clockwork radios.

torque basically follows Hooke's Law
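For a Hooke's-law torsion spring, torque is proportional to winding angle (τ = κθ) and the stored energy is the integral of torque over angle (E = ½κθ²). A minimal sketch — the stiffness value is illustrative, not from the article:

```python
import math

# Torsion-spring version of Hooke's law: tau = kappa * theta.
kappa = 2.0e-3            # torsional stiffness, N*m/rad (illustrative)
theta = 6 * 2 * math.pi   # mainspring wound six full turns

torque = kappa * theta               # restoring torque at full wind
energy = 0.5 * kappa * theta ** 2    # stored energy E = 1/2 kappa theta^2

# Cross-check: integrate torque over angle numerically (midpoint rule).
n = 10_000
d = theta / n
numeric = sum(kappa * (i + 0.5) * d * d for i in range(n))
print(abs(numeric - energy) / energy < 1e-9)  # True
```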
nibble  wiki  reference  physics  mechanics  spatial  diy  jargon  trivia  concept  time  technology  dirty-hands  history  medieval  early-modern  europe  the-great-west-whale  measurement 
august 2017 by nhaliday
Demography of the Roman Empire - Wikipedia
There are few recorded population numbers for the whole of antiquity, and those that exist are often rhetorical or symbolic. Unlike the contemporaneous Han Dynasty, no general census survives for the Roman Empire. The late period of the Roman Republic provides a small exception to this general rule: serial statistics for Roman citizen numbers, taken from census returns, survive for the early Republic through the 1st century CE.[41] Only the figures for periods after the mid-3rd century BCE are reliable, however. Fourteen figures are available for the 2nd century BCE (from 258,318 to 394,736). Only four figures are available for the 1st century BCE, and feature a large break between 70/69 BCE (910,000) and 28 BCE (4,063,000). The interpretation of the later figures—the Augustan censuses of 28 BCE, 8 BCE, and 14 CE—is therefore controversial.[42] Alternate interpretations of the Augustan censuses (such as those of E. Lo Cascio[43]) produce divergent population histories across the whole imperial period.[44]

Roman population size: the logic of the debate: https://www.princeton.edu/~pswpc/pdfs/scheidel/070706.pdf
- Walter Scheidel (cited in book by Vaclav Smil, "Why America is Not a New Rome")

Our ignorance of ancient population numbers is one of the biggest obstacles to our understanding of Roman history. After generations of prolific scholarship, we still do not know how many people inhabited Roman Italy and the Mediterranean at any given point in time. When I say ‘we do not know’ I do not simply mean that we lack numbers that are both precise and safely known to be accurate: that would surely be an unreasonably high standard to apply to any pre-modern society. What I mean is that even the appropriate order of magnitude remains a matter of intense dispute.

Historical urban community sizes: https://en.wikipedia.org/wiki/Historical_urban_community_sizes

World population estimates: https://en.wikipedia.org/wiki/World_population_estimates
As a general rule, the confidence of estimates on historical world population decreases for the more distant past. Robust population data only exists for the last two or three centuries. Until the late 18th century, few governments had ever performed an accurate census. In many early attempts, such as in Ancient Egypt and the Persian Empire, the focus was on counting merely a subset of the population for purposes of taxation or military service.[3] Published estimates for the 1st century ("AD 1") suggest an uncertainty of the order of 50% (estimates range between 150 and 330 million). Some estimates extend their timeline into deep prehistory, to "10,000 BC", i.e. the early Holocene, when world population estimates range roughly between one and ten million (with an uncertainty of up to an order of magnitude).[4][5]

Estimates for yet deeper prehistory, into the Paleolithic, are of a different nature. At this time human populations consisted entirely of non-sedentary hunter-gatherer populations, with anatomically modern humans existing alongside archaic human varieties, some of which are still ancestral to the modern human population due to interbreeding with modern humans during the Upper Paleolithic. Estimates of the size of these populations are a topic of paleoanthropology. A late human population bottleneck is postulated by some scholars at approximately 70,000 years ago, during the Toba catastrophe, when Homo sapiens population may have dropped to as low as between 1,000 and 10,000 individuals.[6][7] For the time of speciation of Homo sapiens, some 200,000 years ago, an effective population size of the order of 10,000 to 30,000 individuals has been estimated, with an actual "census population" of early Homo sapiens of roughly 100,000 to 300,000 individuals.[8]
history  iron-age  mediterranean  the-classics  demographics  fertility  data  europe  population  measurement  volo-avolo  estimate  wiki  reference  article  conquest-empire  migration  canon  scale  archaeology  multi  broad-econ  pdf  study  survey  debate  uncertainty  walter-scheidel  vaclav-smil  urban  military  economics  labor  time-series  embodied  health  density  malthus  letters  urban-rural  database  list  antiquity  medieval  early-modern  mostly-modern  time  sequential  MENA  the-great-west-whale  china  asia  sinosphere  occident  orient  japan  britain  germanic  gallic  summary  big-picture  objektbuch  confidence  sapiens  anthropology  methodology  farmers-and-foragers  genetics  genomics  chart 
august 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
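A common concrete form of Y = f(K, L) is Cobb-Douglas — a standard textbook choice; the askblog post does not commit to any particular functional form, so this is purely illustrative:

```python
# Cobb-Douglas production function: Y = A * K**alpha * L**(1 - alpha)
def output(K, L, A=1.0, alpha=0.3):
    return A * K ** alpha * L ** (1 - alpha)

# Constant returns to scale: doubling both inputs doubles output --
# part of what makes the "GDP factory" model legible in the first place.
y = output(100.0, 200.0)
print(abs(output(200.0, 400.0) - 2 * y) < 1e-6)  # True
```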
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
The Determinants of Trust
Both individual experiences and community characteristics influence how much people trust each other. Using data drawn from US localities we find that the strongest factors that reduce trust are: i) a recent history of traumatic experiences, even though the passage of time reduces this effect fairly rapidly; ii) belonging to a group that historically felt discriminated against, such as minorities (blacks in particular) and, to a lesser extent, women; iii) being economically unsuccessful in terms of income and education; iv) living in a racially mixed community and/or in one with a high degree of income disparity. Religious beliefs and ethnic origins do not significantly affect trust. The latter result may be an indication that the American melting pot at least up to a point works, in terms of homogenizing attitudes of different cultures, even though racial cleavages leading to low trust are still quite high.

Understanding Trust: http://www.nber.org/papers/w13387
In this paper we resolve this puzzle by recognizing that trust has two components: a belief-based one and a preference based one. While the sender's behavior reflects both, we show that WVS-like measures capture mostly the belief-based component, while questions on past trusting behavior are better at capturing the preference component of trust.

MEASURING TRUST: http://scholar.harvard.edu/files/laibson/files/measuring_trust.pdf
We combine two experiments and a survey to measure trust and trustworthiness— two key components of social capital. Standard attitudinal survey questions about trust predict trustworthy behavior in our experiments much better than they predict trusting behavior. Trusting behavior in the experiments is predicted by past trusting behavior outside of the experiments. When individuals are closer socially, both trust and trustworthiness rise. Trustworthiness declines when partners are of different races or nationalities. High status individuals are able to elicit more trustworthiness in others.

What is Social Capital? The Determinants of Trust and Trustworthiness: http://www.nber.org/papers/w7216
Using a sample of Harvard undergraduates, we analyze trust and social capital in two experiments. Trusting behavior and trustworthiness rise with social connection; differences in race and nationality reduce the level of trustworthiness. Certain individuals appear to be persistently more trusting, but these people do not say they are more trusting in surveys. Survey questions about trust predict trustworthiness not trust. Only children are less trustworthy. People behave in a more trustworthy manner towards higher status individuals, and therefore status increases earnings in the experiment. As such, high status persons can be said to have more social capital.

Trust and Cheating: http://www.nber.org/papers/w18509
We find that: i) both parties to a trust exchange have implicit notions of what constitutes cheating even in a context without promises or messages; ii) these notions are not unique - the vast majority of senders would feel cheated by a negative return on their trust/investment, whereas a sizable minority defines cheating according to an equal split rule; iii) these implicit notions affect the behavior of both sides to the exchange in terms of whether to trust or cheat and to what extent. Finally, we show that individual's notions of what constitutes cheating can be traced back to two classes of values instilled by parents: cooperative and competitive. The first class of values tends to soften the notion while the other tightens it.

Nationalism and Ethnic-Based Trust: Evidence from an African Border Region: https://u.osu.edu/robinson.1012/files/2015/12/Robinson_NationalismTrust-1q3q9u1.pdf
These results offer microlevel evidence that a strong and salient national identity can diminish ethnic barriers to trust in diverse societies.

One Team, One Nation: Football, Ethnic Identity, and Conflict in Africa: http://conference.nber.org/confer//2017/SI2017/DEV/Durante_Depetris-Chauvin.pdf
Do collective experiences that prime sentiments of national unity reduce interethnic tensions and conflict? We examine this question by looking at the impact of national football teams’ victories in sub-Saharan Africa. Combining individual survey data with information on over 70 official matches played between 2000 and 2015, we find that individuals interviewed in the days after a victory of their country’s national team are less likely to report a strong sense of ethnic identity and more likely to trust people of other ethnicities than those interviewed just before. The effect is sizable and robust and is not explained by generic euphoria or optimism. Crucially, national victories do not only affect attitudes but also reduce violence. Indeed, using plausibly exogenous variation from close qualifications to the Africa Cup of Nations, we find that countries that (barely) qualified experience significantly less conflict in the following six months than countries that (barely) did not. Our findings indicate that, even where ethnic tensions have deep historical roots, patriotic shocks can reduce inter-ethnic tensions and have a tangible impact on conflict.

Why Does Ethnic Diversity Undermine Public Goods Provision?: http://www.columbia.edu/~mh2245/papers1/HHPW.pdf
We identify three families of mechanisms that link diversity to public goods provision—–what we term “preferences,” “technology,” and “strategy selection” mechanisms—–and run a series of experimental games that permit us to compare the explanatory power of distinct mechanisms within each of these three families. Results from games conducted with a random sample of 300 subjects from a slum neighborhood of Kampala, Uganda, suggest that successful public goods provision in homogenous ethnic communities can be attributed to a strategy selection mechanism: in similar settings, co-ethnics play cooperative equilibria, whereas non-co-ethnics do not. In addition, we find evidence for a technology mechanism: co-ethnics are more closely linked on social networks and thus plausibly better able to support cooperation through the threat of social sanction. We find no evidence for prominent preference mechanisms that emphasize the commonality of tastes within ethnic groups or a greater degree of altruism toward co-ethnics, and only weak evidence for technology mechanisms that focus on the impact of shared ethnicity on the productivity of teams.

does it generalize to first world?

Higher Intelligence Groups Have Higher Cooperation Rates in the Repeated Prisoner's Dilemma: https://ideas.repec.org/p/iza/izadps/dp8499.html
The initial cooperation rates are similar, it increases in the groups with higher intelligence to reach almost full cooperation, while declining in the groups with lower intelligence. The difference is produced by the cumulation of small but persistent differences in the response to past cooperation of the partner. In higher intelligence subjects, cooperation after the initial stages is immediate and becomes the default mode, defection instead requires more time. For lower intelligence groups this difference is absent. Cooperation of higher intelligence subjects is payoff sensitive, thus not automatic: in a treatment with lower continuation probability there is no difference between different intelligence groups

Why societies cooperate: https://voxeu.org/article/why-societies-cooperate
Three attributes are often suggested to generate cooperative behaviour – a good heart, good norms, and intelligence. This column reports the results of a laboratory experiment in which groups of players benefited from learning to cooperate. It finds overwhelming support for the idea that intelligence is the primary condition for a socially cohesive, cooperative society. Warm feelings towards others and good norms have only a small and transitory effect.

individual payoff, etc.:

Trust, Values and False Consensus: http://www.nber.org/papers/w18460
Trust beliefs are heterogeneous across individuals and, at the same time, persistent across generations. We investigate one mechanism yielding these dual patterns: false consensus. In the context of a trust game experiment, we show that individuals extrapolate from their own type when forming trust beliefs about the same pool of potential partners - i.e., more (less) trustworthy individuals form more optimistic (pessimistic) trust beliefs - and that this tendency continues to color trust beliefs after several rounds of game-play. Moreover, we show that one's own type/trustworthiness can be traced back to the values parents transmit to their children during their upbringing. In a second closely-related experiment, we show the economic impact of mis-calibrated trust beliefs stemming from false consensus. Miscalibrated beliefs lower participants' experimental trust game earnings by about 20 percent on average.

The Right Amount of Trust: http://www.nber.org/papers/w15344
We investigate the relationship between individual trust and individual economic performance. We find that individual income is hump-shaped in a measure of intensity of trust beliefs. Our interpretation is that highly trusting individuals tend to assume too much social risk and to be cheated more often, ultimately performing less well than those with a belief close to the mean trustworthiness of the population. On the other hand, individuals with overly pessimistic beliefs avoid being cheated, but give up profitable opportunities, therefore underperforming. The cost of either too much or too little trust is comparable to the income lost by forgoing college.


This framework allows us to show that income-maximizing trust typically exceeds the trust level of the average person as well as to estimate the distribution of income lost to trust mistakes. We find that although a majority of individuals has well calibrated beliefs, a non-trivial proportion of the population (10%) has trust beliefs sufficiently poorly calibrated to lower income by more than 13%.

Do Trust and … [more]
study  economics  alesina  growth-econ  broad-econ  trust  cohesion  social-capital  religion  demographics  race  diversity  putnam-like  compensation  class  education  roots  phalanges  general-survey  multi  usa  GT-101  conceptual-vocab  concept  behavioral-econ  intricacy  composition-decomposition  values  descriptive  correlation  harvard  field-study  migration  poll  status  🎩  🌞  chart  anthropology  cultural-dynamics  psychology  social-psych  sociology  cooperate-defect  justice  egalitarianism-hierarchy  inequality  envy  n-factor  axelrod  pdf  microfoundations  nationalism-globalism  africa  intervention  counter-revolution  tribalism  culture  society  ethnocentrism  coordination  world  developing-world  innovation  econ-productivity  government  stylized-facts  madisonian  wealth-of-nations  identity-politics  public-goodish  s:*  legacy  things  optimization  curvature  s-factor  success  homo-hetero  higher-ed  models  empirical  contracts  human-capital  natural-experiment  endo-exo  data  scale  trade  markets  time  supply-demand  summary 
august 2017 by nhaliday
How to estimate distance using your finger | Outdoor Herbivore Blog
1. Hold your right arm out directly in front of you, elbow straight, thumb upright.
2. Align your thumb with one eye closed so that it covers (or aligns) the distant object. Point marked X in the drawing.
3. Do not move your head, arm or thumb, but switch eyes, so that your open eye is now closed and the other eye is open. Observe closely where the object now appears with the other open eye. Your thumb should appear to have moved to some other point: no longer in front of the object. This new point is marked as Y in the drawing.
4. Estimate this displacement XY by equating it to the estimated size of something you are familiar with (height of a tree, building width, length of a car, power line poles, distance between nearby objects). In this case, the distant barn is estimated to be 100′ wide. It appears 5 barn widths could fit this displacement, or 500 feet. Now multiply that figure by 10 (the ratio of the length of your arm to the distance between your eyes), and you get the distance between you and the thicket of blueberry bushes — 5000 feet away (about 1 mile).

- Basically uses parallax (similar triangles) with each eye.
- When they say to compare apparent shift to known distance, won't that scale with the unknown distance? The example uses width of an object at the point whose distance is being estimated.

per here: https://www.trails.com/how_26316_estimate-distances-outdoors.html
Select a distant object whose width can be accurately determined. For example, use a large rock outcropping. Estimate the width of the rock. Use 200 feet wide as an example here.
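The geometry in the steps above is just similar triangles, and the arithmetic can be sketched in a few lines. A minimal sketch (the function name is mine; the 10:1 arm-to-eye ratio is the rule of thumb from the text, not a measured constant):

```python
def estimate_distance(apparent_shift, arm_to_eye_ratio=10):
    """Parallax rule of thumb: distance ~ apparent shift x (arm length / eye spacing).

    apparent_shift: how far the thumb seems to jump when switching eyes,
    measured in units of an object of known size AT the target distance
    (which is why the measured shift scales correctly with distance).
    arm_to_eye_ratio: roughly 10 for most adults (arm ~60 cm, eyes ~6 cm apart).
    """
    return apparent_shift * arm_to_eye_ratio

# Worked example from the text: shift of 5 barn widths x 100 ft = 500 ft
print(estimate_distance(500))  # → 5000 (feet, about a mile)
```

Note this also answers the scaling worry above: the displacement is estimated against an object sitting at the target distance, so the shift in feet already grows proportionally with range, and multiplying by the fixed arm-to-eye ratio recovers the distance.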
outdoors  human-bean  embodied  embodied-pack  visuo  spatial  measurement  lifehack  howto  navigation  prepping  survival  objektbuch  multi  measure  estimate 
august 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study  org:nat  papers  machine-learning  chemistry  measurement  volo-avolo  lower-bounds  analysis  realness  speedometer  nibble  🔬  applications  frontier  state-of-art  no-go  accuracy  interdisciplinary 
july 2017 by nhaliday
On the measuring and mis-measuring of Chinese growth | VOX, CEPR’s Policy Portal
Unofficial indicators of Chinese GDP often suggest that Beijing’s growth figures are exaggerated. This column uses nighttime light as a proxy to estimate Chinese GDP growth. Since 2012, the authors’ estimate is never appreciably lower, and is in many years higher, than the GDP growth rate reported in the official statistics. While not ruling out the risk of future turmoil, the analysis presents few immediate indications that Chinese growth is being systematically overestimated.

org:ngo  econotariat  study  summary  economics  growth-econ  econometrics  econ-metrics  measurement  broad-econ  china  asia  sinosphere  the-world-is-just-atoms  energy-resources  trends  correlation  wonkish  realness  article  multi  news  org:foreign  n-factor  corruption  crooked  wealth  visuo  electromag  sky  space 
july 2017 by nhaliday
A Review of Avner Greif’s Institutions and the Path to the Modern Economy: Lessons from Medieval Trade
Avner Greif’s Institutions and the Path to the Modern Economy: Lessons from Medieval Trade (Cambridge University Press, 2006) is a major work in the ongoing project of many economists and economic historians to show that institutions are the fundamental driver of all economic history, and of all contemporary differences in economic performance. This review outlines the contribution of this book to the project and the general status of this long standing ambition.
pdf  spearhead  gregory-clark  essay  article  books  review  economics  growth-econ  broad-econ  institutions  history  early-modern  europe  the-great-west-whale  divergence  🎩  industrial-revolution  medieval  critique  roots  world  measurement  empirical  realness  cultural-dynamics  north-weingast-like  modernity  microfoundations  aphorism  track-record 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has just to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

This post should have been entitled “Zombies who only think of their next cool IV fix”
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. This I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
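The placebo-law exercise in the abstract can be reproduced in miniature. A hedged sketch, not the paper's CPS setup (all parameter values are illustrative): simulate state panels with AR(1) errors and no true effect anywhere, assign a fake law to half the states, and count how often conventional OLS standard errors declare the "effect" significant at the 5% level:

```python
import numpy as np

def placebo_dd_rejection_rate(n_states=50, n_years=20, rho=0.9,
                              n_sims=200, seed=0):
    """Share of placebo laws declared 'significant' at 5% when conventional
    OLS standard errors ignore within-state serial correlation."""
    rng = np.random.default_rng(seed)
    rejections = 0
    post = (np.arange(n_years) >= n_years // 2).astype(float)
    for _ in range(n_sims):
        # Stationary AR(1) errors within each state; no true effect anywhere
        e = np.empty((n_states, n_years))
        e[:, 0] = rng.standard_normal(n_states)
        for t in range(1, n_years):
            e[:, t] = rho * e[:, t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(n_states)
        treat = (rng.permutation(n_states) < n_states // 2).astype(float)
        # State-year panel regression: y ~ 1 + treat + post + treat x post
        y = e.ravel()                       # state-major order
        Tr = np.repeat(treat, n_years)
        P = np.tile(post, n_states)
        X = np.column_stack([np.ones_like(y), Tr, P, Tr * P])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])  # conventional SE
        if abs(beta[3] / se) > 1.96:
            rejections += 1
    return rejections / n_sims
```

With rho = 0.9 the false-rejection rate comes out far above the nominal 5%, while setting rho = 0 restores roughly correct size — isolating serial correlation as the culprit, as the abstract argues.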

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing simply one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse – fewer than 5 percent – if you add in the requirement that the 2SLS CI exclude the OLS estimate.
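Points 6 and 7 above are easy to see in simulation. A minimal sketch (just-identified IV with no intercept; the 0.5 endogeneity loading and the pi = 0.1 first-stage coefficient are illustrative assumptions, not Young's design): with a nearly irrelevant instrument, 2SLS estimates are so dispersed that their mean squared error exceeds that of OLS even though OLS is biased:

```python
import numpy as np

def iv_vs_ols_mse(n=100, pi=0.1, beta=1.0, n_sims=500, seed=0):
    """Monte Carlo with a weak instrument. Returns (MSE of 2SLS, MSE of OLS)."""
    rng = np.random.default_rng(seed)
    iv_est, ols_est = [], []
    for _ in range(n_sims):
        z = rng.standard_normal(n)             # instrument
        v = rng.standard_normal(n)             # first-stage error
        u = 0.5 * v + rng.standard_normal(n)   # endogeneity: corr(u, v) != 0
        x = pi * z + v                         # weak first stage (pi small)
        y = beta * x + u
        iv_est.append((z @ y) / (z @ x))       # just-identified 2SLS
        ols_est.append((x @ y) / (x @ x))      # OLS: biased but stable
    iv_est, ols_est = np.array(iv_est), np.array(ols_est)
    return np.mean((iv_est - beta) ** 2), np.mean((ols_est - beta) ** 2)
```

Here OLS is biased upward by roughly cov(u, v)/var(x) ≈ 0.5, yet its MSE stays modest; meanwhile draws where the sample correlation between z and x happens to be near zero blow up the 2SLS estimates, which is the finite-sample problem behind points 6 and 7.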

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated 
june 2017 by nhaliday
10 million DTC dense marker genotypes by end of 2017? – Gene Expression
Ultimately I do wonder if I was a bit too optimistic that 50% of the US population will be sequenced at 30x by 2025. But the dynamic is quite likely to change rapidly because of a technological shift as the sector goes through a productivity uptick. We’re talking about exponential growth, which humans have weak intuition about….
gnxp  scitariat  commentary  biotech  scaling-up  genetics  genomics  scale  bioinformatics  multi  toys  measurement  duplication  signal-noise  coding-theory 
june 2017 by nhaliday
Economic Growth in Ancient Greece | pseudoerasmus
Maybe land-and-dung expansion does not really require a fancy institutional explanation. Territory expanded, land yields rose, and people have always traded their surpluses. Why invoke “inclusive institutions”, as Ober effectively does, for something so mundane? Perhaps the seminal cultural accomplishments of classical Greece bias some of us to look for “special” causes of the expansion.

Note, this is not an argument that political economy or “institutions” play no role in the rise and decline of economies. But in this particular case, so little seems established about the descriptive statistics, let alone the “growth accounting”, of Greek economic expansion in 800-300 BCE that it’s premature to be speculating about its institutional causes.
econotariat  pseudoE  broad-econ  commentary  books  review  economics  growth-econ  history  iron-age  mediterranean  the-classics  critique  institutions  egalitarianism-hierarchy  malthus  demographics  population  density  wealth  wealth-of-nations  political-econ  divergence  europe  the-great-west-whale  data  archaeology  measurement  scale  agriculture  econ-productivity  efficiency  article  gregory-clark  galor-like  long-short-run  medieval  nordic  technology  north-weingast-like  democracy  roots  summary  endo-exo  input-output  walter-scheidel  endogenous-exogenous  uncertainty 
june 2017 by nhaliday
Validation is a Galilean enterprise
We contend that Frey's analyses actually have little bearing on the external validity of the PGG. Evidence from recent experiments using modified versions of the PGG and stringent comprehension checks indicate that individual differences in people's tendencies to contribute to the public good are better explained by individual differences in participants' comprehension of the game's payoff structure than by individual differences in cooperativeness (Burton-Chellew, El Mouden, & West, 2016). For example, only free riders reliably understand right away that complete defection maximizes one's own payoff, regardless of how much other participants contribute. This difference in comprehension alone explains the so-called free riders' low PGG contributions. These recent results also provide a new interpretation of why conditional cooperators often contribute generously in early rounds, and then less in later rounds (Fischbacher et al., 2001). Fischbacher et al. (2001) attribute the relatively high contributions in the early rounds to cooperativeness and the subsequent decline in contributions to conditional cooperators' frustration with free riders. In reality, the decline in cooperation observed over the course of PGGs occurs because so-called conditional cooperators initially believe that their payoff-maximizing decision depends on whether others contribute, but eventually learn that contributing never benefits the contributor (Burton-Chellew, Nax, & West, 2015). Because contributions in the PGG do not actually reflect cooperativeness, there is no real-world cooperative setting to which inferences about contributions in the PGG can generalize.
study  behavioral-econ  economics  psychology  social-psych  coordination  cooperate-defect  piracy  altruism  bounded-cognition  error  lol  pdf  map-territory  GT-101  realness  free-riding  public-goodish  decision-making  microfoundations  descriptive  values  interests  generalization  measurement  checking 
june 2017 by nhaliday
How important was colonial trade for the rise of Europe? | Economic Growth in History
The latter view became the orthodoxy among economists and economic historians after Patrick O’Brien’s 1982 paper, which, in one of Patrick’s many celebrated phrases, claims that the periphery was “peripheral” for Europe. He concludes the paper by writing:

“[G]rowth, stagnation, and decay everywhere in Western Europe can be explained mainly by reference to endogenous forces. … for the economic growth of the core, the periphery was peripheral.”

This is the view that remarkable scholars such as N. Crafts, Deirdre McCloskey, or Joel Mokyr repeat today (though Crafts would argue cotton imports would have mattered at a late stage, and my reading of Mokyr is that he has softened his earlier view from the 1980s a little, specifically in the book The Enlightened Economy). Even recently, Brad DeLong has classified O’Brien’s 1982 position as “air tight”.

Among economists and economic historians more on the economics side, I would say that O’Brien’s paper was only one of two strong hits against the “World-Systems” and related schools of thought of the 1970s, the other being Solow’s earlier conclusion that TFP growth (usually interpreted as technology, though there is more to it than that) accounts for a great deal more of economic growth than capital accumulation does, which is what Hobsbawm and Wallerstein, in their neo-Marxist framework, emphasize.

A friend tonight, on the third world and the first world, and our relationships to the past: "They don't forget, and we don't remember."
imo the European Intifada is being fueled by anti-Europeanism & widely taught ideas like this one discussed - Europe stole its riches

The British Empire was cruel, rapacious and racist. But contrary to what Shashi Tharoor writes in An Era Of Darkness, the fault for India’s miseries lies upon itself.

Indeed, the anti-Tharoor argument is arguably closer to the truth, because the British tended to use the landlord system in places where landlords were already in place, and at times when the British were relatively weak and couldn’t afford to upset tradition. Only after they became confident in their power did the British start to bypass the landlord class and tax the cultivators directly. King’s College London historian Jon Wilson (2016) writes in India Conquered, “Wherever it was implemented, raiyatwar began as a form of military rule.” Thus the system that Tharoor implicitly promotes, and which is associated with higher agricultural productivity today, arose from the very same colonialism that he blames for so many of India’s current woes. History does not always tell the parables that we wish to hear.


India’s share of the world economy was large in the eighteenth century for one simple reason: when the entire world was poor, India had a large share of the world’s population. India’s share fell because with the coming of the Industrial Revolution, Europe and North America saw increases of income per capita to levels never before seen in all of human history. This unprecedented growth cannot be explained by Britain’s depredations against India. Britain was not importing steam engines from India.

The big story of the Great Divergence is not that India got poorer, but that other countries got much richer. Even at the peak of Mughal wealth in 1600, the best estimates of economic historians suggest that GDP per capita was 61% higher in Great Britain. By 1750–before the battle of Plassey and the British takeover–GDP per capita in Great Britain was more than twice what it was in India (Broadberry, Custodis, and Gupta 2015). The Great Divergence has long roots.

Tharoor seems blinded by the glittering jewels of the Maharajas and the Mughals. He writes with evident satisfaction that when in 1615 the first British ambassador presented himself to the court of Emperor Jehangir in Agra, “the Englishman was a supplicant at the feet of the world’s mightiest and most opulent monarch.” True; but the Emperor’s opulence was produced on the backs of millions of poor subjects. Writing at the same time and place, the Dutch merchant Francisco Pelsaert (1626) contrasted the “great superfluity and absolute power” of the rich with “the utter subjection and poverty of the common people–poverty so great and miserable that the life of the people can be depicted…only as the home of stark want and the dwelling-place of bitter woe.” Indian rulers were rich because the empire was large and inequality was extreme.

In pre-colonial India the rulers, both Mughal and Maratha, extracted _anywhere from one-third to one half of all gross agricultural output_ and most of what was extracted was spent on opulence and the armed forces, not on improving agricultural productivity (Raychaudhuri 1982).


The British were awful rulers but the history of India is a long story of awful rulers (just as it is for most countries). Indeed, by Maddison’s (2007) calculations _the British extracted less from the Indian economy than did the Mughal Dynasty_. The Mughals built their palaces in India while the British built most of their palaces in Britain, but that was little comfort to the Indian peasant who paid for both. The Kohinoor diamond that graces the cover of Inglorious Empire is a telling symbol. Yes, it was stolen by the British (who stole it from the Sikhs who stole it from the Afghanis who stole it from the Mughals who stole it from one of the kings of South India). But how many Indians would have been better off if this bauble had stayed in India? Perhaps one reason why more Indians didn’t take up arms against the British was that for most of them, British rule was a case of meet the new boss, same as the old boss.

more for effect on colonies: https://pinboard.in/u:nhaliday/b:4b0128372fe9

INDIA AND THE GREAT DIVERGENCE: AN ANGLO-INDIAN COMPARISON OF GDP PER CAPITA, 1600-1871: http://eh.net/eha/wp-content/uploads/2013/11/Guptaetal.pdf
This paper provides estimates of Indian GDP constructed from the output side for the pre-1871 period, and combines them with population estimates to track changes in living standards. Indian per capita GDP declined steadily during the seventeenth and eighteenth centuries before stabilising during the nineteenth century. As British living standards increased from the mid-seventeenth century, India fell increasingly behind. Whereas in 1600, Indian per capita GDP was over 60 per cent of the British level, by 1871 it had fallen to less than 15 per cent. As well as placing the origins of the Great Divergence firmly in the early modern period, the estimates suggest a relatively prosperous India at the height of the Mughal Empire, with living standards well above bare bones subsistence.

but some of the Asian wage data (especialy India) have laughably small samples (see Broadberry & Gupta)

How profitable was colonialism for various European powers?: https://www.reddit.com/r/AskHistorians/comments/p1q1q/how_profitable_was_colonialism_for_various/

How did Britain benefit from colonising India? What did colonial powers gain except for a sense of power?: https://www.quora.com/How-did-Britain-benefit-from-colonising-India-What-did-colonial-powers-gain-except-for-a-sense-of-power
The EIC period was mostly profitable, though it had recurring problems with its finances. The initial voyages from Surat in 1600s were hugely successful and brought profits as high as 200%. However, the competition from the Dutch East India Company started to drive down prices, at least for spices. Investing in EIC wasn’t always a sure shot way to gains - British investors who contributed to the second East India joint stock of 1.6 million pounds between 1617 and 1632 ended up losing money.


An alternate view is that the revenues of EIC were very small compared to the GDP of Britain, and hardly made an impact to the overall economy. For instance, the EIC Revenue in 1800 was 7.8m pounds while the British GDP in the same period was 343m pounds, and hence EIC revenue was only 2% of the overall GDP. (I got these figures from an individual blog and haven’t verified them).


The British Crown period - The territory of British India Provinces had expanded greatly and therefore the tax revenues had grown in proportion. The efficient taxation system paid its own administrative expenses as well as the cost of the large British Indian Army. British salaries were lucrative - the Viceroy received £25,000 a year, and Governors £10,000 for instance besides the lavish amenities in the form of subsidized housing, utilities, rest houses, etc.


The eminent Indian intellectual Dadabhai Naoroji wrote about how the British systematically drained the Indian economy of its wealth; his argument is famously known as the ‘Drain of Wealth’ theory. In his book 'Poverty' he estimated a loss of 200–300 million pounds of revenue to Britain that was never returned.

At the same time, a fair bit of money did go back into India itself to support further colonial infrastructure. Note the explosion of infrastructure (Railway lines, 100+ Cantonment towns, 60+ Hill stations, Courthouses, Universities, Colleges, Irrigation Canals, Imperial capital of New Delhi) from 1857 onward till 1930s. Of course, these infrastructure projects were not due to any altruistic motive of the British. They were intended to make their India empire more secure, comfortable, efficient, and to display their grandeur. Huge sums of money were spent in the 3 Delhi Durbars conducted in this period.

So how profitable was the British Crown period? Probably not very. Instead, bureaucracy, prestige, grandeur, and comfort reigned supreme for the 70,000-odd British people in India.


There was a realization in Britain that colonies were not particularly economically beneficial to the home economy. … [more]
econotariat  broad-econ  article  history  early-modern  age-of-discovery  europe  the-great-west-whale  divergence  conquest-empire  economics  growth-econ  roots  trade  endo-exo  patho-altruism  expansionism  multi  twitter  social  discussion  gnon  unaffiliated  right-wing  🎩  attaq  albion  journos-pundits  mokyr-allen-mccloskey  cjones-like  big-picture  chart  news  org:mag  org:foreign  marginal-rev  wealth-of-nations  britain  india  asia  cost-benefit  leviathan  antidemos  religion  islam  class  pop-structure  nationalism-globalism  authoritarianism  property-rights  agriculture  econ-metrics  data  scale  government  industrial-revolution  pdf  regularizer  pseudoE  measurement  volo-avolo  time-series  anthropology  macro  sapiens  books  review  summary  counterfactual  stylized-facts  critique  heavy-industry  pre-ww2  study  technology  energy-resources  labor  capitalism  debate  org:data  org:lite  commentary  usa  piketty  variance-components  automation  west-hunter  scitariat  visualization  northeast  the-south  aphorism  h2o  fluid 
june 2017 by nhaliday
The Data We Have vs. the Data We Need: A Comment on the State of the “Divergence” Debate (Part I) | The NEP-HIS Blog
Maybe as reaction to Pomeranz, the Great Divergence gets dated earlier & earlier & earlier on the slimmest evidence. Next: Pangaea breakup
I think it's a bit out of control, the urge to keep bringing the roots of the great divergence earlier and earlier and earlier
@s8mb @antonhowes I am impatient w explanations which do not start w origination/adoption/diffusion technology as proximate cause
@s8mb @antonhowes in respect of which finance, market integration, & formal institutions all dead ends for divergence of West with the Rest
Are you more with Pomeranz that there's not major difference until c. 1750 or 1800, or do you put departure much earlier?
it's now beyond doubt established there was a major diff in living standards, state capacity, market integr+
between the most advanced regions of China and the most advanced regions of Europe, no doubt
@bswud +broadberry estimates evidence groupthink on matter (i.e., everyone wants to locate precursor to IR earlier and earlier) @antonhowes

The Little Divergence: https://pseudoerasmus.com/2014/06/12/the-little-divergence/
The Early Transformation of Britain's Economy: https://growthecon.com/blog/Britain-Shares/
There’s a nice working paper out by Patrick Wallis, Justin Colson, and David Chilosi called “Puncturing the Malthus Delusion: Structural Change in the British Economy before the Industrial Revolution, 1500-1800”. The big project they undertake here is to mine the probate inventories (along with several other sources) from Britain in this period to build up a picture of the rough allocation of workers across sectors. They do a very nice job of walking through their data sources, and the limitations, in the paper, so let me leave those details aside. In short, they use the reported occupations in wills to back out a picture of the sectoral structure, finding it consistent with other sources based on apprentice records, as well as prior estimates from specific years.

econotariat  commentary  broad-econ  growth-econ  divergence  history  early-modern  world  europe  the-great-west-whale  china  asia  sinosphere  comparison  chart  critique  measurement  debate  pseudoE  multi  wealth-of-nations  econ-metrics  twitter  social  discussion  lol  troll  rant  org:ngo  🎩  s:*  unaffiliated  occident  orient  article  cliometrics  economics  data  mostly-modern  japan  usa  india  anglo  pre-ww2  medieval  roots  path-dependence  revolution  stylized-facts  industrial-revolution  time-series  wealth  visualization  malthus  econ-productivity  technology  ideas  marginal  hari-seldon  flux-stasis  questions  agriculture  heavy-industry  labor  distribution  evidence 
june 2017 by nhaliday
Suspicious Banana on Twitter: ""platonic forms" seem more sinister when you realize that integers were reaching down into his head and giving him city planning advice https://t.co/4qaTdwOlry"
Plato mentions in his Laws that 5040 is a convenient number to use for dividing many things (including both the citizens and the land of a state) into lesser parts. He remarks that this number can be divided by all the (natural) numbers from 1 to 12 with the single exception of 11 (however, it is not the smallest number to have this property; 2520 is). He rectifies this "defect" by suggesting that two families could be subtracted from the citizen body to produce the number 5038, which is divisible by 11. Plato also took notice of the fact that 5040 can be divided by 12 twice over. Indeed, Plato's repeated insistence on the use of 5040 for various state purposes is so evident that it is written, "Plato, writing under Pythagorean influences, seems really to have supposed that the well-being of the city depended almost as much on the number 5040 as on justice and moderation."[1]
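[ed.: the divisibility claims in this passage are plain arithmetic and easy to verify directly; a quick standard-library sketch:]

```python
# Check the numerical claims about Plato's 5040.
from math import lcm

n = 5040
# divisible by every natural number from 1 to 12 except 11
assert [d for d in range(1, 13) if n % d != 0] == [11]
# 2520 is the smallest number with that property: it is lcm(1..10, 12)
assert lcm(*range(1, 11), 12) == 2520
# subtracting two families gives 5038, which is divisible by 11
assert 5038 % 11 == 0
# 5040 "can be divided by 12 twice over": 5040 = 35 * 12 * 12
assert n % (12 * 12) == 0
print("all divisibility claims check out")
```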

"Now for divine begettings there is a period comprehended by a perfect number, and for mortal by the first in which augmentations dominating and dominated when they have attained to three distances and four limits of the assimilating and the dissimilating, the waxing and the waning, render all things conversable and commensurable [546c] with one another, whereof a basal four-thirds wedded to the pempad yields two harmonies at the third augmentation, the one the product of equal factors taken one hundred times, the other of equal length one way but oblong,-one dimension of a hundred numbers determined by the rational diameters of the pempad lacking one in each case, or of the irrational lacking two; the other dimension of a hundred cubes of the triad. And this entire geometrical number is determinative of this thing, of better and inferior births."[3]

Shortly after Plato's time his meaning apparently did not cause puzzlement as Aristotle's casual remark attests.[6] Half a millennium later, however, it was an enigma for the Neoplatonists, who had a somewhat mystic penchant and wrote frequently about it, proposing geometrical and numerical interpretations. Next, for nearly a thousand years, Plato's texts disappeared and it is only in the Renaissance that the enigma briefly resurfaced. During the 19th century, when classical scholars restored original texts, the problem reappeared. Schleiermacher interrupted his edition of Plato for a decade while attempting to make sense of the paragraph. Victor Cousin inserted a note that it has to be skipped in his French translation of Plato's works. In the early 20th century, scholarly findings suggested a Babylonian origin for the topic.[7]


Socrates: Surely we agree nothing more virtuous than sacrificing each newborn infant while reciting the factors of 39,916,800?

Turgidas: Uh

different but interesting: https://aeon.co/essays/can-we-hope-to-understand-how-the-greeks-saw-their-world
Another explanation for the apparent oddness of Greek perception came from the eminent politician and Hellenist William Gladstone, who devoted a chapter of his Studies on Homer and the Homeric Age (1858) to ‘perceptions and use of colour’. He too noticed the vagueness of the green and blue designations in Homer, as well as the absence of words covering the centre of the ‘blue’ area. Where Gladstone differed was in taking as normative the Newtonian list of colours (red, orange, yellow, green, blue, indigo, violet). He interpreted the Greeks’ supposed linguistic poverty as deriving from an imperfect discrimination of prismatic colours. The visual organ of the ancients was still in its infancy, hence their strong sensitivity to light rather than hue, and the related inability to clearly distinguish one hue from another. This argument fit well with the post-Darwinian climate of the late 19th century, and came to be widely believed. Indeed, it prompted Nietzsche’s own judgment, and led to a series of investigations that sought to prove that the Greek chromatic categories do not fit in with modern taxonomies.

Today, no one thinks that there has been a stage in the history of humanity when some colours were ‘not yet’ being perceived. But thanks to our modern ‘anthropological gaze’ it is accepted that every culture has its own way of naming and categorising colours. This is not due to varying anatomical structures of the human eye, but to the fact that different ocular areas are stimulated, which triggers different emotional responses, all according to different cultural contexts.
postrat  carcinisation  twitter  social  discussion  lol  hmm  :/  history  iron-age  mediterranean  the-classics  cocktail  trivia  quantitative-qualitative  mystic  simler  weird  multi  wiki  👽  dennett  article  philosophy  alien-character  news  org:mag  org:popup  literature  quotes  poetry  concrete  big-peeps  nietzschean  early-modern  europe  germanic  visuo  language  foreign-lang  embodied  oceans  h2o  measurement  fluid  forms-instances  westminster  lexical 
june 2017 by nhaliday
Fig. 1: maximum possible Gini index still allowing subsistence of population (all surplus redistributed to 1 head honcho)
Fig. 2: scatter plot of Gini vs income, as well as possibility frontier

Ye Olde Inæqualitee Shoppe: https://pseudoerasmus.com/2014/10/01/inequality-possibility-frontier/
Gini indices, mean income, maximum feasible Gini, and "inequality extraction ratios" (gini2/max poss. inequality): https://pseudoerasmus.files.wordpress.com/2014/09/blwpg263.pdf
Growth and inequality in the great and little divergence debate: a Japanese perspective: http://onlinelibrary.wiley.com/doi/10.1111/ehr.12071/epdf
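[ed.: the frontier itself has a simple closed form. With mean income μ, subsistence minimum s, and α = μ/s, the maximum feasible Gini for a large population is G* = (α − 1)/α, and the "extraction ratio" is the observed Gini divided by G*. A minimal sketch of that formula; the numbers below are illustrative, not from the papers:]

```python
# Inequality possibility frontier: the highest Gini an economy can sustain
# if everyone outside the elite must stay at subsistence.

def max_feasible_gini(mean_income: float, subsistence: float) -> float:
    """Upper bound on the Gini index consistent with population survival."""
    alpha = mean_income / subsistence
    return (alpha - 1) / alpha

def extraction_ratio(gini: float, mean_income: float, subsistence: float) -> float:
    """Observed Gini as a share of the maximum feasible Gini."""
    return gini / max_feasible_gini(mean_income, subsistence)

# e.g. an agrarian economy at twice subsistence can reach at most G* = 0.5,
# so an observed Gini of 0.4 there means 80% of feasible inequality is extracted
print(max_feasible_gini(2.0, 1.0))        # 0.5
print(extraction_ratio(0.4, 2.0, 1.0))    # 0.8
```

This is why richer societies can have higher Ginis without starving anyone: the frontier rises toward 1 as mean income pulls away from subsistence.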
pdf  study  pseudoE  economics  growth-econ  broad-econ  inequality  industrial-revolution  agriculture  compensation  wealth-of-nations  wealth  britain  history  medieval  early-modern  europe  the-great-west-whale  🎩  cultural-dynamics  econ-metrics  data  multi  article  modernity  rent-seeking  vampire-squid  elite  india  asia  japan  civilization  time-series  plots  volo-avolo  malthus  manifolds  database  iron-age  mediterranean  the-classics  conquest-empire  germanic  gallic  latin-america  world  china  leviathan  usa  measurement  crosstab  pro-rata  MENA  africa  developing-world  distribution  archaeology  taxes  redistribution  egalitarianism-hierarchy  feudal 
june 2017 by nhaliday
Biological Measures of the Standard of Living - American Economic Association
The evidence suggests that the most important proximate source of increasing height was the improving disease environment as reflected by the fall in infant mortality. Rising income and education and falling family size had more modest effects. Improvements in health care are hard to identify, and the effects of welfare state spending seem to have been small.

GROWING TALL BUT UNEQUAL: NEW FINDINGS AND NEW BACKGROUND EVIDENCE ON ANTHROPOMETRIC WELFARE IN 156 COUNTRIES, 1810–1989: https://pseudoerasmus.files.wordpress.com/2017/03/baten-blum-2012.pdf
This is the first initiative to collate the entire body of anthropometric evidence during the 19th and 20th centuries, on a global scale. By providing a comprehensive dataset on global height developments we are able to emphasise an alternative view of the history of human well-being and a basis for understanding characteristics of well-being in 156 countries, 1810-1989.

Bones of Contention: The Political Economy of Height Inequality: http://piketty.pse.ens.fr/files/BoixRosenbluth2014.pdf
- Carles Boix, et al.

Height in the Dark Ages: https://pseudoerasmus.com/2014/06/12/aside-angus-maddison/
study  economics  growth-econ  broad-econ  history  early-modern  mostly-modern  measurement  methodology  embodied  health  longevity  sapiens  death  wealth  pseudoE  🎩  multi  epidemiology  public-health  roots  europe  policy  wonkish  healthcare  redistribution  welfare-state  disease  parasites-microbiome  wealth-of-nations  education  top-n  data  world  pdf  political-econ  inequality  farmers-and-foragers  leviathan  archaeology  🌞  article  time-series  civilization  iron-age  mediterranean  medieval  gibbon  the-classics  demographics  gender  britain  evidence  traces 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
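[ed.: the 10^30 multiplier comes from the Landauer limit - erasing a bit costs at least kT·ln 2, so achievable irreversible computation per joule scales as 1/T. A toy version of that scaling; the far-future temperature below is purely illustrative, not the paper's number:]

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(temp_kelvin: float) -> float:
    """Landauer bound: maximum irreversible bit erasures per joule at temperature T."""
    return 1 / (K_B * temp_kelvin * math.log(2))

T_now = 3.0        # roughly the CMB temperature today, K
T_future = 3e-30   # hypothetical far-future cold bath, K (illustrative only)
multiplier = bits_per_joule(T_future) / bits_per_joule(T_now)
print(f"computation multiplier from waiting: {multiplier:.0e}")
```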


simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
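[ed.: the point-estimates-vs-distributions argument can be reproduced with a toy Monte Carlo. The log-uniform uncertainty ranges below are invented for illustration and are not Sandberg et al.'s actual priors:]

```python
# Toy Drake equation: a single product of "best guess" values vs the full
# distribution implied by wide parameter uncertainty.
import random, math

random.seed(0)

def log_uniform(lo, hi):
    """Sample uniformly in log space over [lo, hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

# hypothetical uncertainty ranges for the Drake factors (illustrative only)
RANGES = {
    "R*": (1, 100), "f_p": (0.1, 1), "n_e": (0.1, 10),
    "f_l": (1e-6, 1), "f_i": (1e-3, 1), "f_c": (0.01, 1), "L": (1e2, 1e9),
}

# point estimate: product of the (geometric) midpoints of each range
point_estimate = math.prod(math.sqrt(lo * hi) for lo, hi in RANGES.values())

# Bayesian-flavored alternative: push the whole distributions through
samples = [math.prod(log_uniform(lo, hi) for lo, hi in RANGES.values())
           for _ in range(100_000)]
p_alone = sum(n < 1 for n in samples) / len(samples)

print(f"point estimate of N: {point_estimate:.1f}")
print(f"P(N < 1) under sampled uncertainty: {p_alone:.2f}")
```

Even with a point estimate comfortably above 1 civilization, a large share of the sampled distribution lands on N < 1: an empty galaxy is unsurprising once the uncertainty is carried through instead of collapsed to midpoints.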

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would chose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Why I see academic economics moving left | askblog
I have a long essay on the scientific status of economics in National Affairs. A few excerpts from the conclusion:

In the end, can we really have effective theory in economics? If by effective theory we mean theory that is verifiable and reliable for prediction and control, the answer is likely no. Instead, economics deals in speculative interpretations and must continue to do so.

Young economists who employ pluralistic methods to study problems are admired rather than marginalized, as they were in 1980. But economists who question the wisdom of interventionist economic policies seem headed toward the fringes of the profession.

This is my essay in which I say that academic economics is on the road to sociology.

Property Is Only Another Name for Monopoly: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2818494
Hanson's take more positive: http://www.overcomingbias.com/2017/10/for-stability-rents.html

econotariat  cracker-econ  commentary  prediction  trends  economics  social-science  ideology  politics  left-wing  regulation  empirical  measurement  methodology  academia  multi  links  news  org:mag  essay  longform  randy-ayndy  sociology  technocracy  realness  hypocrisy  letters  study  property-rights  taxes  civil-liberty  efficiency  arbitrage  alt-inst  proposal  incentives  westminster  lens  truth  info-foraging  ratty  hanson  summary  review  biases  concrete  abstraction  managerial-state  gender  identity-politics  higher-ed 
may 2017 by nhaliday
Estimating the number of unseen variants in the human genome
To find all common variants (frequency at least 1%) the number of individuals that need to be sequenced is small (∼350) and does not differ much among the different populations; our data show that, subject to sequence accuracy, the 1000 Genomes Project is likely to find most of these common variants and a high proportion of the rarer ones (frequency between 0.1 and 1%). The data reveal a rule of diminishing returns: a small number of individuals (∼150) is sufficient to identify 80% of variants with a frequency of at least 0.1%, while a much larger number (> 3,000 individuals) is necessary to find all of those variants.
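[ed.: the diminishing-returns rule drops out of a simple binomial calculation - a variant at population frequency p is seen in n diploid individuals (2n chromosomes) with probability 1 − (1 − p)^(2n). A sketch of the single-frequency case; the paper's actual model averages over the full frequency spectrum:]

```python
def p_detect(freq: float, n_individuals: int) -> float:
    """Probability a variant at population frequency `freq` appears at least
    once among 2n sampled chromosomes (n diploid individuals)."""
    return 1 - (1 - freq) ** (2 * n_individuals)

# common variants (1%) saturate quickly; rarer variants (0.1%) need far more samples
for n in (150, 350, 3000):
    print(f"n={n}: P(see 1% variant)={p_detect(0.01, n):.3f}, "
          f"P(see 0.1% variant)={p_detect(0.001, n):.3f}")
```

The 1% column is essentially saturated by n ≈ 350, while the 0.1% column keeps climbing out past n = 3,000 - the same qualitative pattern the abstract describes.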

A map of human genome variation from population-scale sequencing: http://www.internationalgenome.org/sites/1000genomes.org/files/docs/nature09534.pdf

Scientists using data from the 1000 Genomes Project, which sequenced one thousand individuals from 26 human populations, found that "a typical [individual] genome differs from the reference human genome at 4.1 million to 5.0 million sites … affecting 20 million bases of sequence."[11] Nearly all (>99.9%) of these sites are small differences, either single nucleotide polymorphisms or brief insertion-deletions in the genetic sequence, but structural variations account for a greater number of base-pairs than the SNPs and indels.[11]

Human genetic variation: https://en.wikipedia.org/wiki/Human_genetic_variation

Singleton Variants Dominate the Genetic Architecture of Human Gene Expression: https://www.biorxiv.org/content/early/2017/12/15/219238
study  sapiens  genetics  genomics  population-genetics  bioinformatics  data  prediction  cost-benefit  scale  scaling-up  org:nat  QTL  methodology  multi  pdf  curvature  convexity-curvature  nonlinearity  measurement  magnitude  🌞  distribution  missing-heritability  pop-structure  genetic-load  mutation  wiki  reference  article  structure  bio  preprint  biodet  variance-components  nibble  chart 
may 2017 by nhaliday
Missing heritability problem - Wikipedia
The "missing heritability" problem[1][2][3][4][5][6] can be defined as the fact that single genetic variations cannot account for much of the heritability of diseases, behaviors, and other phenotypes. This is a problem that has significant implications for medicine, since a person's susceptibility to disease may depend more on "the combined effect of all the genes in the background than on the disease genes in the foreground", or the role of genes may have been severely overestimated.

The 'missing heritability' problem was named as such in 2008. The Human Genome Project led to optimistic forecasts that the large genetic contributions to many traits and diseases (which were identified by quantitative genetics and behavioral genetics in particular) would soon be mapped and pinned down to specific genes and their genetic variants by methods such as candidate-gene studies which used small samples with limited genetic sequencing to focus on specific genes believed to be involved, examining the SNP kinds of variants. While many hits were found, they often failed to replicate in other studies.

The exponential fall in genome sequencing costs led to the use of GWAS studies which could simultaneously examine all candidate-genes in larger samples than the original finding, where the candidate-gene hits were found to almost always be false positives and only 2-6% replicate;[7][8][9][10][11][12] in the specific case of intelligence candidate-gene hits, only 1 candidate-gene hit replicated,[13] and of 15 neuroimaging hits, none did.[14] The editorial board of Behavior Genetics noted, in setting more stringent requirements for candidate-gene publications, that "the literature on candidate gene associations is full of reports that have not stood up to rigorous replication...it now seems likely that many of the published findings of the last decade are wrong or misleading and have not contributed to real advances in knowledge".[15] Other researchers have characterized the literature as having "yielded an infinitude of publications with very few consistent replications" and called for a phase out of candidate-gene studies in favor of polygenic scores.[16]

This led to a dilemma. Standard genetics methods have long estimated large heritabilities such as 80% for traits such as height or intelligence, yet none of the genes had been found despite sample sizes that, while small, should have been able to detect variants of reasonable effect size such as 1 inch or 5 IQ points. If genes have such strong cumulative effects - where were they? Several resolutions have been proposed, that the missing heritability is some combination of:


7. Genetic effects are indeed through common SNPs acting additively, but are highly polygenic: dispersed over hundreds or thousands of variants each of small effect like a fraction of an inch or a fifth of an IQ point and with low prior probability: unexpected enough that a candidate-gene study is unlikely to select the right SNP out of hundreds of thousands of known SNPs, and GWASes up to 2010, with n<20000, would be unable to find hits which reach genome-wide statistical-significance thresholds. Much larger GWAS sample sizes, often n>100k, would be required to find any hits at all, and would steadily increase after that.
This resolution to the missing heritability problem was supported by the introduction of Genome-wide complex trait analysis (GCTA) in 2010, which demonstrated that trait similarity could be predicted by the genetic similarity of unrelated strangers on common SNPs treated additively, and for many traits the SNP heritability was indeed a substantial fraction of the overall heritability. The GCTA results were further buttressed by findings that a small percent of trait variance could be predicted in GWASes without any genome-wide statistically-significant hits by a linear model including all SNPs regardless of p-value; if there were no SNP contribution, this would be unlikely, but it would be what one expected from SNPs whose effects were very imprecisely estimated by a too-small sample. Combined with the upper bound on maximum effect sizes set by the GWASes up to then, this strongly implied that the highly polygenic theory was correct. Examples of complex traits where increasingly large-scale GWASes have yielded the initial hits and then increasing numbers of hits as sample sizes increased from n<20k to n>100k or n>300k include height,[23] intelligence,[24] and schizophrenia.
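[ed.: the "why n > 100k" step follows from a standard power calculation - detecting a SNP that explains a fraction r² of trait variance at genome-wide significance α = 5×10⁻⁸ needs roughly n ≈ (z_α + z_power)² / r². A sketch using that textbook approximation; the illustrative r² values are mine, not the article's:]

```python
from statistics import NormalDist

def n_required(r2: float, alpha: float = 5e-8, power: float = 0.80) -> int:
    """Approximate sample size to detect a SNP explaining variance fraction r2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided genome-wide threshold
    z_b = NormalDist().inv_cdf(power)
    return round((z_a + z_b) ** 2 / r2)

# a SNP worth ~1 IQ point (r2 ~ 0.002) vs ~a fifth of a point (r2 ~ 0.0001)
print(n_required(0.002))
print(n_required(0.0001))
```

At r² around 10⁻⁴ the required sample lands in the hundreds of thousands, which is why per-SNP effects of "a fifth of an IQ point" only started surfacing once GWASes crossed n > 100k.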
article  bio  biodet  behavioral-gen  genetics  genomics  GWAS  candidate-gene  methodology  QTL  missing-heritability  twin-study  measurement  epigenetics  nonlinearity  error  history  mostly-modern  reflection  wiki  reference  science  bounded-cognition  replication  being-right  info-dynamics  🌞  linearity  ideas  GCTA  spearhead 
may 2017 by nhaliday
shift  sib-study  signal-noise  signaling  signum  similarity  simler  simulation  sinosphere  skeleton  skunkworks  sky  sleep  slides  soccer  social  social-capital  social-choice  social-norms  social-psych  social-science  social-structure  society  sociology  socs-and-mops  software  space  sparsity  spatial  spearhead  speculation  speed  speedometer  spock  sports  spreading  ssc  stackex  stagnation  stanford  startups  stat-power  state-of-art  statesmen  stats  status  stereotypes  stochastic-processes  stock-flow  stories  strategy  straussian  stream  street-fighting  structure  study  studying  stylized-facts  subculture  success  sulla  summary  supply-demand  survey  survival  sv  symmetry  synchrony  synthesis  systematic-ad-hoc  szabo  tactics  tails  talks  taxes  tcs  tcstariat  teaching  tech  technocracy  technology  techtariat  telos-atelos  temperance  temperature  tetlock  the-bones  the-classics  the-devil  the-founding  the-great-west-whale  the-self  the-south  the-trenches  the-watchers  the-west  the-world-is-just-atoms  theory-of-mind  theory-practice  theos  thermo  thesis  thick-thin  thiel  things  thinking  threat-modeling  time  time-preference  time-series  time-use  todo  toolkit  top-n  toys  traces  track-record  trade  tradecraft  tradeoffs  tradition  transportation  travel  trees  trends  tribalism  tricks  trivia  troll  trump  trust  truth  turing  tutoring  twin-study  twitter  unaffiliated  uncertainty  unintended-consequences  uniqueness  unit  universalism-particularism  urban  urban-rural  us-them  usa  vaclav-smil  vague  values  vampire-squid  variance-components  venture  vgr  video  visual-understanding  visualization  visuo  vitality  volo-avolo  von-neumann  walls  walter-scheidel  war  water  wealth  wealth-of-nations  weird  welfare-state  west-hunter  westminster  white-paper  whole-partial-many  wiki  winner-take-all  wire-guided  wisdom  within-group  within-without  woah  wonkish  working-stiff  world  
world-war  worrydream  X-not-about-Y  xenobio  yvain  zeitgeist  zero-positive-sum  zooming  🌞  🎓  🎩  🐸  👽  🔬  🖥 

Copy this bookmark: