nhaliday + empirical   128

Linus's Law - Wikipedia
Linus's Law is a claim about software development, named in honor of Linus Torvalds and formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999).[1][2] The law states that "given enough eyeballs, all bugs are shallow".

--

In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4] While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5][6]

Although detection of even deliberately inserted flaws[7][8] can be attributed to Raymond's claim, the persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum.[9][10][11][12] Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain.[12] In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[11] Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.

Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs: https://academic.oup.com/cybersecurity/article/3/2/81/4524054

https://hbfs.wordpress.com/2009/03/31/how-many-eyeballs-to-make-a-bug-shallow/
wiki  reference  aphorism  ideas  stylized-facts  programming  engineering  linux  worse-is-better/the-right-thing  correctness  debugging  checking  best-practices  security  error  scale  ubiquity  collaboration  oss  realness  empirical  evidence-based  multi  study  info-econ  economics  intricacy  plots  manifolds  techtariat  cracker-prog  os  systems  magnitude  quantitative-qualitative  number  threat-modeling 
5 weeks ago by nhaliday
Is there a common method for detecting the convergence of the Gibbs sampler and the expectation-maximization algorithm? - Quora
In practice and theory it is much easier to diagnose convergence in EM (vanilla or variational) than in any MCMC algorithm (including Gibbs sampling).

https://www.quora.com/How-can-you-determine-if-your-Gibbs-sampler-has-converged
There is a special case when you can actually obtain the stationary distribution, and be sure that you did! If your Markov chain has a discrete state space, then take the first time that a state repeats in your chain: if you randomly sample an element between the repeating states (but only including one of the endpoints), you will have a sample from your true distribution.

One can achieve this 'exact MCMC sampling' more generally by using the coupling from the past algorithm (Coupling from the past).
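
[ed.: a minimal sketch of Propp–Wilson coupling from the past for a small finite-state chain, assuming the full transition matrix is available (which is rarely true in real problems — that's why this is mostly a theoretical nicety):]

```python
import numpy as np

def cftp_sample(P, rng):
    """Exact draw from the stationary distribution of a small finite-state
    chain via coupling from the past (Propp-Wilson).
    P[i, j] = P(next = j | current = i)."""
    n = P.shape[0]
    cdf = np.cumsum(P, axis=1)      # row CDFs; one shared uniform couples all states
    us = []                         # fixed randomness for times -1, -2, -3, ...
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        states = np.arange(n)       # start every state at time -T
        for t in range(T, 0, -1):   # step from time -t to -t+1 using us[t-1]
            states = np.array([np.searchsorted(cdf[s], us[t - 1]) for s in states])
        if (states == states[0]).all():
            return int(states[0])   # all trajectories coalesced by time 0
        T *= 2                      # not yet: restart further in the past

# toy 3-state chain; frequencies of exact draws approximate the stationary pi
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
draws = [cftp_sample(P, np.random.default_rng(seed)) for seed in range(2000)]
print(np.bincount(draws, minlength=3) / 2000)
```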

Otherwise, there is no rigorous statistical test for convergence. It may be possible to obtain a theoretical bound for the convergence rates: but these are quite difficult to obtain, and quite often too large to be of practical use. For example, even for the simple case of using the Metropolis algorithm for sampling from a two-dimensional uniform distribution, the best convergence rate upper bound achieved, by Persi Diaconis, was something with an astronomical constant factor like 10^300.

In fact, it is fair to say that for most high dimensional problems, we have really no idea whether Gibbs sampling ever comes close to converging, but the best we can do is use some simple diagnostics to detect the most obvious failures.
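
[ed.: the canonical "simple diagnostic" is Gelman–Rubin R-hat: run several chains and compare within- to between-chain variance. A minimal sketch, assuming m independent chains of equal length; it flags obvious failures but, as the answer says, can't certify convergence:]

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m chains x n draws)
    array of a scalar quantity. Near 1 is necessary, not sufficient."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))    # chains agree
stuck = np.stack([rng.normal(0.0, 1.0, 1000),
                  rng.normal(5.0, 1.0, 1000)])  # chains in different modes
print(gelman_rubin(mixed))   # ~1.00
print(gelman_rubin(stuck))   # >> 1, obvious failure flagged
```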
nibble  q-n-a  qra  acm  stats  probability  limits  convergence  distribution  sampling  markov  monte-carlo  ML-MAP-E  checking  equilibrium  stylized-facts  gelman  levers  mixing  empirical  plots  manifolds  multi  fixed-point  iteration-recursion  heuristic  expert-experience  theory-practice  project 
5 weeks ago by nhaliday
Sci-Hub | The Moral Machine experiment. Nature | 10.1038/s41586-018-0637-6
Preference for inaction
Sparing pedestrians
Sparing the lawful
Sparing females
Sparing the fit
Sparing higher status
Sparing more characters
Sparing the young
Sparing humans

We selected the 130 countries with at least 100 respondents (n range 101–448,125), standardized the nine target AMCEs of each country, and conducted a hierarchical clustering on these nine scores, using Euclidean distance and Ward’s minimum variance method20. This analysis identified three distinct ‘moral clusters’ of countries. These are shown in Fig. 3a, and are broadly consistent with both geographical and cultural proximity according to the Inglehart–Welzel Cultural Map 2010–201421.
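
[ed.: the clustering step is mechanically simple to reproduce — a minimal scipy sketch, with random placeholders standing in for the real AMCE estimates:]

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# placeholder: 130 countries x 9 AMCE scores (the paper uses real estimates)
rng = np.random.default_rng(0)
amce = rng.normal(size=(130, 9))

# standardize each of the nine target AMCEs across countries
z = (amce - amce.mean(axis=0)) / amce.std(axis=0)

# hierarchical clustering: Euclidean distance, Ward's minimum variance method
Z = linkage(z, method="ward")                      # Ward implies Euclidean in scipy
clusters = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 'moral clusters'
print(np.bincount(clusters)[1:])                   # cluster sizes
```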

The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster.

...

Fig. 3 | Country-level clusters.

[ed.: I actually rather like how the West's values compare w/ the global mean in this plot.]

...
Participants from individualistic cultures, which emphasize the distinctive value of each individual23, show a stronger preference for sparing the greater number of characters (Fig. 4a). Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community23, show a weaker preference for sparing younger characters (Fig. 4a, inset).
pdf  study  org:nat  psychology  social-psych  poll  values  data  experiment  empirical  morality  ethics  pop-diff  cultural-dynamics  tradeoffs  death  safety  ai  automation  things  world  gender  biases  status  class  egalitarianism-hierarchy  order-disorder  anarcho-tyranny  crime  age-generation  quantitative-qualitative  number  nature  piracy  exploratory  phalanges  n-factor  europe  the-great-west-whale  nordic  usa  anglo  anglosphere  sinosphere  asia  japan  china  islam  MENA  latin-america  gallic  wonkish  correlation  measure  similarity  dignity  universalism-particularism  law  leviathan  wealth  econ-metrics  institutions  demographics  religion  group-level  within-group  expression-survival  comparison  technocracy  visualization  trees  developing-world  regional-scatter-plots 
6 weeks ago by nhaliday
Extreme inbreeding in a European ancestry sample from the contemporary UK population | Nature Communications
Visscher et al

In most human societies, there are taboos and laws banning mating between first- and second-degree relatives, but actual prevalence and effects on health and fitness are poorly quantified. Here, we leverage a large observational study of ~450,000 participants of European ancestry from the UK Biobank (UKB) to quantify extreme inbreeding (EI) and its consequences. We use genotyped SNPs to detect large runs of homozygosity (ROH) and call EI when >10% of an individual’s genome comprise ROHs. We estimate a prevalence of EI of ~0.03%, i.e., ~1/3652. EI cases have phenotypic means between 0.3 and 0.7 standard deviation below the population mean for 7 traits, including stature and cognitive ability, consistent with inbreeding depression estimated from individuals with low levels of inbreeding.
study  org:nat  bio  genetics  genomics  kinship  britain  pro-rata  distribution  embodied  iq  effect-size  tails  gwern  evidence-based  empirical 
6 weeks ago by nhaliday
The returns to speaking a second language
Does speaking a foreign language have an impact on earnings? The authors use a variety of empirical strategies to address this issue for a representative sample of U.S. college graduates. OLS regressions with a complete set of controls to minimize concerns about omitted variable biases, propensity score methods, and panel data techniques all lead to similar conclusions. The hourly earnings of those who speak a foreign language are more than 2 percent higher than the earnings of those who do not. The authors obtain higher and more imprecise point estimates using state high school graduation and college entry and graduation requirements as instrumental variables.

...

We find that college graduates who speak a second language earn, on average, wages that are 2 percent higher than those who don’t. We include a complete set of controls for general ability using information on grades and college admission tests and reduce the concern that selection drives the results controlling for the academic major chosen by the student. We obtain similar results with simple regression methods if we use nonparametric methods based on the propensity score and if we exploit the temporal variation in the knowledge of a second language. The estimates, thus, are not driven by observable differences in the composition of the pools of bilinguals and monolinguals, by the linear functional form that we impose in OLS regressions, or by constant unobserved heterogeneity. To reduce the concern that omitted variables bias our estimates, we make use of several instrumental variables (IVs). Using high school and college graduation requirements as instruments, we estimate more substantial returns to learning a second language, on the order of 14 to 30 percent. These results have high standard errors, but they suggest that OLS estimates may actually be biased downward.
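
[ed.: a toy sketch of the baseline OLS specification on simulated data — column names and the planted 2% effect are placeholders, not the paper's variables:]

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
# hypothetical data; 'gpa'/'sat' stand in for the grades/admission-test controls
df = pd.DataFrame({
    "bilingual": rng.integers(0, 2, n),       # speaks a second language?
    "gpa":       rng.normal(3.0, 0.4, n),
    "sat":       rng.normal(1100, 150, n),
})
df["log_wage"] = (3.0 + 0.02 * df["bilingual"] + 0.1 * df["gpa"]
                  + rng.normal(0, 0.5, n))    # plant a 2% premium

# OLS of log hourly earnings on a second-language dummy plus ability controls;
# with log wages, the 'bilingual' coefficient is roughly the percent premium
X = sm.add_constant(df[["bilingual", "gpa", "sat"]])
fit = sm.OLS(df["log_wage"], X).fit(cov_type="HC1")  # robust SEs
print(fit.params["bilingual"], fit.bse["bilingual"])
```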

...

In separate (unreported) regressions, we explore the labor market returns to speaking specific languages. We estimate OLS regressions following the previous specifications but allow the coefficient to vary by language spoken. In our sample, German is the language that obtains the highest rewards in the labor market. The returns to speaking German are 3.8 percent, while they are 2.3 for speaking French and 1.5 for speaking Spanish. In fact, only the returns to speaking German remain statistically significant in this regression. The results indicate that those who speak languages known by a smaller number of people obtain higher rewards in the labor market.14

The Relative Importance of the European Languages: https://ideas.repec.org/p/kud/kuiedp/0623.html
study  economics  labor  cost-benefit  hmm  language  foreign-lang  usa  empirical  evidence-based  education  human-capital  compensation  correlation  endogenous-exogenous  natural-experiment  policy  wonkish  🎩  french  germanic  latin-america  multi  spanish  china  asia  japan 
july 2019 by nhaliday
An Eye Tracking Study on camelCase and under_score Identifier Styles - IEEE Conference Publication
One main difference is that subjects were trained mainly in the underscore style and were all programmers. While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly.

To CamelCase or Under_score: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.158.9499
An empirical study of 135 programmers and non-programmers was conducted to better understand the impact of identifier style on code readability. The experiment builds on past work of others who study how readers of natural language perform such tasks. Results indicate that camel casing leads to higher accuracy among all subjects regardless of training, and those trained in camel casing are able to recognize identifiers in the camel case style faster than identifiers in the underscore style.

https://en.wikipedia.org/wiki/Camel_case#Readability_studies
A 2009 study comparing snake case to camel case found that camel case identifiers could be recognised with higher accuracy among both programmers and non-programmers, and that programmers already trained in camel case were able to recognise those identifiers faster than underscored snake-case identifiers.[35]

A 2010 follow-up study, under the same conditions but using an improved measurement method with use of eye-tracking equipment, indicates: "While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly."[36]
study  psychology  cog-psych  hci  programming  best-practices  stylized-facts  null-result  multi  wiki  reference  concept  empirical  evidence-based  efficiency  accuracy  time  code-organizing  grokkability  protocol-metadata  form-design  grokkability-clarity 
july 2019 by nhaliday
history - Why are UNIX/POSIX system call namings so illegible? - Unix & Linux Stack Exchange
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names.

http://neverworkintheory.org/2017/11/26/abbreviated-full-names.html
We carried out a family of controlled experiments to investigate whether the use of abbreviated identifier names, with respect to full-word identifier names, affects fault fixing in C and Java source code. This family consists of an original (or baseline) controlled experiment and three replications. We involved 100 participants with different backgrounds and experiences in total. Overall results suggested that there is no difference in terms of effort, effectiveness, and efficiency to fix faults, when source code contains either only abbreviated or only full-word identifier names. We also conducted a qualitative study to understand the values, beliefs, and assumptions that inform and shape fault fixing when identifier names are either abbreviated or full-word. We involved in this qualitative study six professional developers with 1--3 years of work experience. A number of insights emerged from this qualitative study and can be considered a useful complement to the quantitative results from our family of experiments. One of the most interesting insights is that developers, when working on source code with abbreviated identifier names, adopt a more methodical approach to identify and fix faults by extending their focus point and only in a few cases do they expand abbreviated identifiers.
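
[ed.: for concreteness, the two naming conditions contrasted in these experiments look roughly like this (hypothetical snippet, not from the study's materials):]

```python
# condition 1: abbreviated identifiers
def calc_avg(ns):
    tot = 0.0
    for n in ns:
        tot += n
    return tot / len(ns)

# condition 2: full-word identifiers -- same logic; the experiments found
# no difference in fault-fixing effort, effectiveness, or efficiency
def calculate_average(numbers):
    total = 0.0
    for number in numbers:
        total += number
    return total / len(numbers)
```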
q-n-a  stackex  trivia  programming  os  systems  legacy  legibility  ux  libraries  unix  linux  hacker  cracker-prog  multi  evidence-based  empirical  expert-experience  engineering  study  best-practices  comparison  quality  debugging  efficiency  time  code-organizing  grokkability  grokkability-clarity 
july 2019 by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around 100x difference? what does that work out to in terms of total workload?

hmm:
http://vlsiarch.eecs.harvard.edu/wp-content/uploads/2017/02/asplos17mallacc.pdf
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: Actually, a 100x factor on 7% of cycles is pretty high; allocation-heavy code could see its constant factor inflated by almost an order of magnitude.

edit: Well, actually that's not the right math for the savings in the other direction: eliminating the heap overhead only takes total runtime to 93% + 7%×(1/100) ≈ 93.07%, which is not much smaller than 100%.
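
spelled out, this is just Amdahl's law (quick sanity check, assuming the ~100x stack/heap ratio above):

```python
# Amdahl's law: overall speedup from accelerating fraction p of cycles by factor s
def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# the ~7% of cycles in malloc, made ~100x faster by stack allocation:
print(overall_speedup(0.07, 100))  # ~1.075 -- only a ~7% total win
print(0.93 + 0.07 / 100)           # 0.9307 -- "not much smaller than 100%"
```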
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory 
june 2019 by nhaliday
Measuring fitness heritability: Life history traits versus morphological traits in humans - Gavrus‐Ion - 2017 - American Journal of Physical Anthropology - Wiley Online Library
Traditional interpretation of Fisher's Fundamental Theorem of Natural Selection is that life history traits (LHT), which are closely related with fitness, show lower heritabilities, whereas morphological traits (MT) are less related with fitness and they are expected to show higher heritabilities.

...

LHT heritabilities ranged from 2.3 to 34% for the whole sample, with men showing higher heritabilities (4–45%) than women (0–23.7%). Overall, MT presented higher heritability values than most of LHT, ranging from 0 to 40.5% in craniofacial indices, and from 13.8 to 32.4% in craniofacial angles. LHT showed considerable additive genetic variance values, similar to MT, but also high environmental variance values, and most of them presenting a higher evolutionary potential than MT.
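
[ed.: for reference, the quantity being estimated is narrow-sense heritability — the additive-genetic share of phenotypic variance (standard quantitative-genetics definition, not specific to this paper):]

```latex
h^2 = \frac{V_A}{V_P} = \frac{V_A}{V_A + V_E}
```

so a trait can carry considerable V_A and still show low h^2 if V_E is also large — exactly the pattern reported here for life history traits.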
study  biodet  behavioral-gen  population-genetics  hmm  contrarianism  levers  inference  variance-components  fertility  life-history  demographics  embodied  prediction  contradiction  empirical  sib-study 
may 2019 by nhaliday
Workshop Abstract | Identifying and Understanding Deep Learning Phenomena
ICML 2019 workshop, June 15th 2019, Long Beach, CA

We solicit contributions that view the behavior of deep nets as natural phenomena, to be investigated with methods inspired from the natural sciences like physics, astronomy, and biology.
unit  workshop  acm  machine-learning  science  empirical  nitty-gritty  atoms  deep-learning  model-class  icml  data-science  rigor  replication  examples  ben-recht  physics 
april 2019 by nhaliday
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%. Finally, their Information Rates differ only by 4%.

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
--
[some comments on a different answer:]
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
...
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.

--

A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.
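
[ed.: the compression comparison is easy to replicate in spirit — a toy sketch; the sentence pair is a placeholder, and a fair test needs a large parallel corpus:]

```python
import zlib

# one parallel sentence pair (placeholder; use a real parallel corpus for a fair test)
english = "The quick brown fox jumps over the lazy dog.".encode("utf-8")
chinese = "敏捷的棕色狐狸跳过了那只懒狗。".encode("utf-8")

for label, raw in [("en", english), ("zh", chinese)]:
    print(label, "raw bytes:", len(raw),
          "compressed bytes:", len(zlib.compress(raw, 9)))
```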

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail  org:edu 
february 2019 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Jordan Peterson is Wrong About the Case for the Left
I suggest that the tension of which he speaks is fully formed and self-contained completely within conservatism. Balancing those two forces is, in fact, what conservatism is all about. Thomas Sowell, in A Conflict of Visions: Ideological Origins of Political Struggles describes the conservative outlook as (paraphrasing): “There are no solutions, only tradeoffs.”

The real tension is between balance on the right and imbalance on the left.

In Towards a Cognitive Theory of Politics in the online magazine Quillette I make the case that left and right are best understood as psychological profiles consisting of 1) cognitive style, and 2) moral matrix.

There are two predominant cognitive styles and two predominant moral matrices.

The two cognitive styles are described by Arthur Herman in his book The Cave and the Light: Plato Versus Aristotle, and the Struggle for the Soul of Western Civilization, in which Plato and Aristotle serve as metaphors for them. These two quotes from the book summarize the two styles:

Despite their differences, Plato and Aristotle agreed on many things. They both stressed the importance of reason as our guide for understanding and shaping the world. Both believed that our physical world is shaped by certain eternal forms that are more real than matter. The difference was that Plato’s forms existed outside matter, whereas Aristotle’s forms were unrealizable without it. (p. 61)

The twentieth century’s greatest ideological conflicts do mark the violent unfolding of a Platonist versus Aristotelian view of what it means to be free and how reason and knowledge ultimately fit into our lives (p.539-540)

The Platonic cognitive style amounts to pure abstract reason, “unconstrained” by reality. It has no limiting principle. It is imbalanced. Aristotelian thinking also relies on reason, but it is “constrained” by empirical reality. It has a limiting principle. It is balanced.

The two moral matrices are described by Jonathan Haidt in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion. Moral matrices are collections of moral foundations, which are psychological adaptations of social cognition created in us by hundreds of millions of years of natural selection as we evolved into the social animal. There are six moral foundations. They are:

Care/Harm
Fairness/Cheating
Liberty/Oppression
Loyalty/Betrayal
Authority/Subversion
Sanctity/Degradation
The first three moral foundations are called the “individualizing” foundations because they’re focused on the autonomy and well being of the individual person. The second three foundations are called the “binding” foundations because they’re focused on helping individuals form into cooperative groups.

One of the two predominant moral matrices relies almost entirely on the individualizing foundations, and of those mostly just care. It is all individualizing all the time. No balance. The other moral matrix relies on all of the moral foundations relatively equally; individualizing and binding in tension. Balanced.

The leftist psychological profile is made from the imbalanced Platonic cognitive style in combination with the first, imbalanced, moral matrix.

The conservative psychological profile is made from the balanced Aristotelian cognitive style in combination with the balanced moral matrix.

It is not true that the tension between left and right is a balance between the defense of the dispossessed and the defense of hierarchies.

It is true that the tension between left and right is between an imbalanced worldview unconstrained by empirical reality and a balanced worldview constrained by it.

A Venn Diagram of the two psychological profiles looks like this: [diagram not reproduced]
commentary  albion  canada  journos-pundits  philosophy  politics  polisci  ideology  coalitions  left-wing  right-wing  things  phalanges  reason  darwinian  tradition  empirical  the-classics  big-peeps  canon  comparison  thinking  metabuch  skeleton  lens  psychology  social-psych  morality  justice  civil-liberty  authoritarianism  love-hate  duty  tribalism  us-them  sanctity-degradation  revolution  individualism-collectivism  n-factor  europe  the-great-west-whale  pragmatic  prudence  universalism-particularism  analytical-holistic  nationalism-globalism  social-capital  whole-partial-many  pic  intersection-connectedness  links  news  org:mag  letters  rhetoric  contrarianism  intricacy  haidt  scitariat  critique  debate  forms-instances  reduction  infographic  apollonian-dionysian  being-becoming  essence-existence 
july 2018 by nhaliday
The Religions of the Three Castes – The Neo-Ciceronian Times
The sudra caste is that which is most fitted to physical labor and bodily exercise and, as noted above, occupies the lowest position in the social hierarchy. The sudra soul is characterized by a sensual approach to life and tends to exhibit short time preferences. Thus, sudra are most often concerned with gross physical satisfaction and instant gratification. It is not surprising that it is among individuals of this caste that drug addiction, property crimes, and alcoholism are the most prevalent.

As a result, sudra religion tends to focus on feedbacks relating to the body and to the satisfaction of felt needs. It tends to be characterized the most by fleshly music, swaying, rhythmic movements, and the like. It also inclines toward specific concerns about health and physical safety, day by day provision, and so forth. Again, keep in mind that there is nothing about this that is necessarily “bad.” These are things appropriate to this caste which reflect its character and which enable it to fulfill its roles in society.

The vaisya caste is that which is most fitted to the soulish/psychical occupations which involve money and finance, technics, science, and production. The psychology of this caste involves a propensity toward materialism, empiricism, and a focus on what can be seen and felt over the intangible and faith-oriented. This caste emphasizes the pragmatic benefits provided by entrepreneurship and science, as well as exhibiting the long time preferences which enable those fields of endeavor to be successful.

The vaisya carry these concentrations over into the religious realm. Religion for the vaisya consists primarily in what can be seen and touched, whether as tangible benefits deriving from religious piety or as aids in worship itself. The intercession of gods or demigods into specific areas of life (usually pertaining to wealth or business) is considered necessary for success in those fields. Vaisya religion tends to require the reification of religious experience, rather than taking matters of religion on faith. Indeed, pure spiritualism or symbolism in religion is considered unreal, and therefore not truly efficacious.

The aristocratic caste (brahman and kshatriyan considered together) is the noble caste, and thus its tendencies are those that concern leadership and guidance, a desire to guide others in the right way. This caste tends to be focused more on eternal truths (whether specifically pertaining to religion or otherwise), and is subconsciously, and perhaps even divinely, guided toward the polar and axial tendencies which have been described by Evola in numerous places. As such, the aristocratic caste eschews materialism and the grasping for pecuniary gain and democratic political power. Instead, its focus tends to be on intangible qualities of excellence and spiritual alignment with divine and eternal realities.

Thus, aristocratic religion tends to focus more on faith and accord with spiritual realities and truths. This caste attempts to see “the reality behind the image” and is concerned with direct connection with the Divine. It will often emphasize the role of written and spoken revelation (holy books and preaching) as the means of bridging the gap between the Divine and man, ways which are used by the Divine to express His eternal laws and truths, with which man is expected to come into accord and which must be accepted through spiritual faith. The aristocratic caste, whether as priest or as king/ruler, seeks to lead and guide their people into harmony with the eternal truths espoused by religion. This caste is scrupulous to uphold the rites, while at the same time grasping and teaching the deeper spiritual realities represented by those rites.
gnon  religion  theos  gavisti  class  egalitarianism-hierarchy  military  martial  aristos  labor  things  phalanges  mystic  ascetic  empirical  new-religion  pragmatic  time-preference  embodied  realness  leadership  leviathan  sanctity-degradation  elite  social-structure  culture  society  paganism 
may 2018 by nhaliday
Contingent, Not Arbitrary | Truth is contingent on what is, not on what we wish to be true.
A vital attribute of a value system of any kind is that it works. I consider this a necessary (but not sufficient) condition for goodness. A value system, when followed, should contribute to human flourishing and not produce results that violate its core ideals. This is a pragmatic, I-know-it-when-I-see-it definition. I may refine it further if the need arises.

I think that the prevailing Western values fail by this standard. I will not spend much time arguing this; many others have already. If you reject this premise, this blog may not be for you.

I consider old traditions an important source of wisdom: they have proven their worth over centuries of use. Where they agree, we should listen. Where they disagree, we should figure out why. Where modernity departs from tradition, we should be wary of the new.

Tradition has one nagging problem: it was abandoned by the West. How and why did that happen? I consider this a central question. I expect the reasons to be varied and complex. Understanding them seems necessary if we are to fix what may have been broken.

In short, I want to answer these questions:

1. How do values spread and persist? An ideology does no good if no one holds it.
2. Which values do good? Sounding good is worse than useless if it leads to ruin.

The ultimate hope would be to find a way to combine the two. Many have tried and failed. I don’t expect to succeed either, but I hope I’ll manage to clarify the questions.

Christianity Is The Schelling Point: https://contingentnotarbitrary.com/2018/02/22/christianity-is-the-schelling-point/
Restoring true Christianity is both necessary and sufficient for restoring civilization. The task is neither easy nor simple but that’s what it takes. It is also our best chance of weathering the collapse if that’s too late to avoid.

Christianity is the ultimate coordination mechanism: it unites us with a higher purpose, aligns us with the laws of reality and works on all scales, from individuals to entire civilizations. Christendom took over the world and then lost it when its faith faltered. Historically and culturally, Christianity is the unique Schelling point for the West – or it would be if we could agree on which church (if any) was the true one.

Here are my arguments for true Christianity as the Schelling point. I hope to demonstrate these points in subsequent posts; for now I’ll just list them.

- A society of saints is the most powerful human arrangement possible. It is united in purpose, ideologically stable and operates in harmony with natural law. This is true independent of scale and organization: from military hierarchy to total decentralization, from persecuted minority to total hegemony. Even democracy works among saints – that’s why it took so long to fail.
- There is such a thing as true Christianity. I don’t know how to pinpoint it but it does exist; that holds from both secular and religious perspectives. Our task is to converge on it the best we can.
- Don’t worry too much about the existence of God. I’m proof that you don’t need that assumption in order to believe – it helps but isn’t mandatory.

Pascal’s Wager never sat right with me. Now I know why: it’s a sucker bet. Let’s update it.

If God exists, we must believe because our souls and civilization depend on it. If He doesn’t exist, we must believe because civilization depends on it.

Morality Should Be Adaptive: http://www.overcomingbias.com/2012/04/morals-should-be-adaptive.html
I agree with this
gnon  todo  blog  stream  religion  christianity  theos  morality  ethics  formal-values  philosophy  truth  is-ought  coordination  cooperate-defect  alignment  tribalism  cohesion  nascent-state  counter-revolution  epistemic  civilization  rot  fertility  intervention  europe  the-great-west-whale  occident  telos-atelos  multi  ratty  hanson  big-picture  society  culture  evolution  competition  🤖  rationality  rhetoric  contrarianism  values  water  embedded-cognition  ideology  deep-materialism  moloch  new-religion  patho-altruism  darwinian  existence  good-evil  memetics  direct-indirect  endogenous-exogenous  tradition  anthropology  cultural-dynamics  farmers-and-foragers  egalitarianism-hierarchy  organizing  institutions  protestant-catholic  enlightenment-renaissance-restoration-reformation  realness  science  empirical  modernity  revolution  inference  parallax  axioms  pragmatic  zeitgeist  schelling  prioritizing  ends-means  degrees-of-freedom  logic  reason  interdisciplinary  exegesis-hermeneutics  o 
april 2018 by nhaliday
High male sexual investment as a driver of extinction in fossil ostracods | Nature
Sexual selection favours traits that confer advantages in the competition for mates. In many cases, such traits are costly to produce and maintain, because the costs help to enforce the honesty of these signals and cues1. Some evolutionary models predict that sexual selection also produces costs at the population level, which could limit the ability of populations to adapt to changing conditions and thus increase the risk of extinction2,3,4.
study  org:nat  bio  evolution  selection  sex  competition  cost-benefit  unintended-consequences  signaling  existence  gender  gender-diff  empirical  branches  rot  modernity  fertility  intervention  explanans  humility  status  matching  ranking  ratty  hanson 
april 2018 by nhaliday
The Gelman View – spottedtoad
I have read Andrew Gelman’s blog for about five years, and gradually, I’ve decided that among his many blog posts and hundreds of academic articles, he is advancing a philosophy not just of statistics but of quantitative social science in general. Not a statistician myself, here is how I would articulate the Gelman View:

A. Purposes

1. The purpose of social statistics is to describe and understand variation in the world. The world is a complicated place, and we shouldn’t expect things to be simple.
2. The purpose of scientific publication is to allow for communication, dialogue, and critique, not to “certify” a specific finding as absolute truth.
3. The incentive structure of science needs to reward attempts to independently investigate, reproduce, and refute existing claims and observed patterns, not just to advance new hypotheses or support a particular research agenda.

B. Approach

1. Because the world is complicated, the most valuable statistical models for the world will generally be complicated. The result of statistical investigations will only rarely be to  give a stamp of truth on a specific effect or causal claim, but will generally show variation in effects and outcomes.
2. Whenever possible, the data, analytic approach, and methods should be made as transparent and replicable as possible, and should be fair game for anyone to examine, critique, or amend.
3. Social scientists should look to build upon a broad shared body of knowledge, not to “own” a particular intervention, theoretic framework, or technique. Such ownership creates incentive problems when the intervention, framework, or technique fail and the scientist is left trying to support a flawed structure.

Components

1. Measurement. How and what we measure is the first question, well before we decide on what the effects are or what is making that measurement change.
2. Sampling. Who we talk to or collect information from always matters, because we should always expect effects to depend on context.
3. Inference. While models should usually be complex, our inferential framework should be simple enough for anyone to follow along. And no p values.

He might disagree with all of this, or how it reflects his understanding of his own work. But I think it is a valuable guide to empirical work.
ratty  unaffiliated  summary  gelman  scitariat  philosophy  lens  stats  hypothesis-testing  science  meta:science  social-science  institutions  truth  is-ought  best-practices  data-science  info-dynamics  alt-inst  academia  empirical  evidence-based  checklists  strategy  epistemic 
november 2017 by nhaliday
ON THE ORIGIN OF STATES: STATIONARY BANDITS AND TAXATION IN EASTERN CONGO
As a foundation for this study, I organized the collection of village-level panel data on violent actors, managing teams of surveyors, village elders, and households in 380 war-torn areas of DRC. I introduce optimal taxation theory to the decision of violent actors to establish local monopolies of violence. The value of such decision hinges on their ability to tax the local population. A sharp rise in the global demand for coltan, a bulky commodity used in the electronics industry, leads violent actors to impose monopolies of violence and taxation in coltan sites, which persist even years after demand collapses. A similar rise in the demand for gold, easier to conceal and more difficult to tax, does not. However, the groups who nevertheless control gold sites are more likely to respond by undertaking investments in fiscal capacity, consistent with the difficulty to observe gold, and with well-documented trajectories of state formation in Europe (Ardant, 1975). The findings support the view that the expected revenue from taxation, determined in particular by tax base elasticity and costly investments in fiscal capacity, can explain the stages of state formation preceding the states as we recognize them today.
pdf  study  economics  growth-econ  broad-econ  political-econ  polisci  leviathan  north-weingast-like  unintended-consequences  institutions  microfoundations  econometrics  empirical  government  taxes  rent-seeking  supply-demand  incentives  property-rights  africa  developing-world  peace-violence  interests  longitudinal  natural-experiment  endogenous-exogenous  archaeology  trade  world  feudal  roots  ideas  cost-benefit  econ-productivity  traces 
november 2017 by nhaliday
Review of Yuval Harari's Sapiens: A Brief History of Humankind.
https://twitter.com/whyvert/status/928472237052649472
https://archive.is/MPO5Q
Yuval Harari's prominent book Sapiens: A Brief History of Humankind gets a thorough and well deserved fisking by C.R. Hallpike.

For Harari the great innovation that separated us from the apes was what he calls the Cognitive Revolution, around 70,000 years ago when we started migrating out of Africa, which he thinks gave us the same sort of modern minds that we have now. 'At the individual level, ancient foragers were the most knowledgeable and skilful people in history...Survival in that era required superb mental abilities from everyone' (55), and 'The people who carved the Stadel lion-man some 30,000 years ago had the same physical, emotional, and intellectual abilities we have' (44). Not surprisingly, then, 'We'd be able to explain to them everything we know - from the adventures of Alice in Wonderland to the paradoxes of quantum physics - and they could teach us how their people view the world' (23).

It's a sweet idea, and something like this imagined meeting actually took place a few years ago between the linguist Daniel Everett and the Piraha foragers of the Amazon in Brazil (Everett 2008). But far from being able to discuss quantum theory with them, he found that the Piraha couldn't even count, and had no numbers of any kind. They could teach Everett how they saw the world, which was entirely confined to the immediate experience of the here-and-now, with no interest in past or future, or really in anything that could not be seen or touched. They had no myths or stories, so Alice in Wonderland would have fallen rather flat as well.

...

Summing up the book as a whole, one has often had to point out how surprisingly little he seems to have read on quite a number of essential topics. It would be fair to say that whenever his facts are broadly correct they are not new, and whenever he tries to strike out on his own he often gets things wrong, sometimes seriously. So we should not judge Sapiens as a serious contribution to knowledge but as 'infotainment', a publishing event to titillate its readers by a wild intellectual ride across the landscape of history, dotted with sensational displays of speculation, and ending with blood-curdling predictions about human destiny. By these criteria it is a most successful book.
pdf  books  review  expert-experience  critique  sapiens  history  antiquity  anthropology  multi  twitter  social  scitariat  commentary  quotes  attaq  westminster  backup  culture  realness  farmers-and-foragers  language  egalitarianism-hierarchy  inequality  learning  absolute-relative  malthus  tribalism  kinship  leviathan  government  leadership  volo-avolo  social-structure  taxes  studying  technology  religion  theos  sequential  universalism-particularism  antidemos  revolution  enlightenment-renaissance-restoration-reformation  science  europe  the-great-west-whale  age-of-discovery  iron-age  mediterranean  the-classics  reason  empirical  experiment  early-modern  islam  MENA  civic  institutions  the-trenches  innovation  agriculture  gnon 
november 2017 by nhaliday
The Constitutional Economics of Autocratic Succession on JSTOR
Abstract. The paper extends and empirically tests Gordon Tullock’s public choice theory of the nature of autocracy. A simple model of the relationship between constitutional rules governing succession in autocratic regimes and the occurrence of coups against autocrats is sketched. The model is applied to a case study of coups against monarchs in Denmark in the period ca. 935–1849. A clear connection is found between the specific constitutional rules governing succession and the frequency of coups. Specifically, the introduction of automatic hereditary succession in an autocracy provides stability and limits the number of coups conducted by contenders.

Table 2. General constitutional rules of succession, Denmark ca. 935–1849

To see this, the data may be divided into three categories of constitutional rules of succession: one of open succession (for the periods 935–1165 and 1326–40), one of appointed succession combined with election (for the periods 1165–1326 and 1340–1536), and one of more or less formalized hereditary succession (1536–1849). On the basis of this categorization the data have been summarized in Table 3.

validity of empirics is a little sketchy

https://twitter.com/GarettJones/status/922103073257824257
https://archive.is/NXbdQ
The graphic novel it is based on is insightful, illustrates Tullock's game-theoretic, asymmetric information views on autocracy.

Conclusions from Gordon Tullock's book Autocracy, pp. 211–215: https://astro.temple.edu/~bstavis/courses/tulluck.htm
study  polisci  political-econ  economics  cracker-econ  big-peeps  GT-101  info-econ  authoritarianism  antidemos  government  micro  leviathan  elite  power  institutions  garett-jones  multi  econotariat  twitter  social  commentary  backup  art  film  comics  fiction  competition  europe  nordic  empirical  evidence-based  incentives  legacy  peace-violence  order-disorder  🎩  organizing  info-dynamics  history  medieval  law  axioms  stylized-facts  early-modern  data  longitudinal  flux-stasis  shift  revolution  correlation  org:junk  org:edu  summary  military  war  top-n  hi-order-bits  feudal  democracy  sulla  leadership  nascent-state  protocol-metadata 
october 2017 by nhaliday
Frontier Culture: The Roots and Persistence of “Rugged Individualism” in the United States∗
In a classic 1893 essay, Frederick Jackson Turner argued that the American frontier promoted individualism. We revisit the Frontier Thesis and examine its relevance at the subnational level. Using Census data and GIS techniques, we track the frontier throughout the 1790–1890 period and construct a novel, county-level measure of historical frontier experience. We document the distinctive demographics of frontier locations during this period—disproportionately male, prime-age adult, foreign-born, and illiterate—as well as their higher levels of individualism, proxied by the share of infrequent names among children. Many decades after the closing of the frontier, counties with longer historical frontier experience exhibit more prevalent individualism and opposition to redistribution and regulation. We take several steps towards a causal interpretation, including an instrumental variables approach that exploits variation in the speed of westward expansion induced by prior national immigration inflows. Using linked historical Census data, we identify mechanisms giving rise to a persistent frontier culture. Greater individualism on the frontier was not driven solely by selective migration, suggesting that frontier conditions may have shaped behavior and values. We provide evidence suggesting that rugged individualism may be rooted in its adaptive advantage on the frontier and the opportunities for upward mobility through effort.
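The name-based individualism proxy is easy to operationalize. A minimal Python sketch of the idea, with a made-up name list and an arbitrary cutoff standing in for the paper's actual Census linkage and thresholds:

    from collections import Counter

    def infrequent_name_share(names, top_k=10):
        # share of children whose first name is NOT among the top_k most
        # common names in the reference population -- the paper's proxy
        # for individualism, in spirit (its actual cutoffs differ)
        counts = Counter(names)
        common = {name for name, _ in counts.most_common(top_k)}
        return sum(1 for n in names if n not in common) / len(names)

    # toy county where most children share two common names
    names = ["John"] * 40 + ["Mary"] * 30 + ["Zebulon", "Keziah", "Orpha"]
    print(infrequent_name_share(names, top_k=2))  # ~0.04: low individualism proxy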

https://twitter.com/whyvert/status/921900860224897024
https://archive.is/jTzSe

The Origins of Cultural Divergence: Evidence from a Developing Country.: http://economics.handels.gu.se/digitalAssets/1643/1643769_37.-hoang-anh-ho-ncde-2017-june.pdf
Cultural norms diverge substantially across societies, often even within the same country. In this paper, we test the voluntary settlement hypothesis, proposing that individualistic people tend to self-select into migrating out of reach from collectivist states towards the periphery and that such patterns of historical migration are reflected even in the contemporary distribution of norms. For more than one thousand years during the first millennium CE, northern Vietnam was under an exogenously imposed Chinese rule. From the eleventh to the eighteenth centuries, ancient Vietnam gradually expanded its territory through various waves of southward conquest. We demonstrate that areas being annexed earlier into ancient Vietnam are nowadays more (less) prone to collectivist (individualist) culture. We argue that the southward out-migration of individualist people was the main mechanism behind this finding. The result is consistent across various measures obtained from an extensive household survey and robust to various control variables as well as to different empirical specifications, including an instrumental variable estimation. A lab-in-the-field experiment also confirms the finding.
pdf  study  economics  broad-econ  cliometrics  path-dependence  evidence-based  empirical  stylized-facts  values  culture  cultural-dynamics  anthropology  usa  frontier  allodium  the-west  correlation  individualism-collectivism  measurement  politics  ideology  expression-survival  redistribution  regulation  political-econ  government  migration  history  early-modern  pre-ww2  things  phalanges  🎩  selection  polisci  roots  multi  twitter  social  commentary  scitariat  backup  gnon  growth-econ  medieval  china  asia  developing-world  shift  natural-experiment  endo-exo  endogenous-exogenous  hari-seldon 
october 2017 by nhaliday
Tax Evasion and Inequality
This paper attempts to estimate the size and distribution of tax evasion in rich countries. We combine stratified random audits—the key source used to study tax evasion so far—with new micro-data leaked from two large offshore financial institutions, HSBC Switzerland (“Swiss leaks”) and Mossack Fonseca (“Panama Papers”). We match these data to population-wide wealth records in Norway, Sweden, and Denmark. We find that tax evasion rises sharply with wealth, a phenomenon that random audits fail to capture. On average about 3% of personal taxes are evaded in Scandinavia, but this figure rises to about 30% in the top 0.01% of the wealth distribution, a group that includes households with more than $40 million in net wealth. A simple model of the supply of tax evasion services can explain why evasion rises steeply with wealth. Taking tax evasion into account increases the rise in inequality seen in tax data since the 1970s markedly, highlighting the need to move beyond tax data to capture income and wealth at the top, even in countries where tax compliance is generally high. We also find that after reducing tax evasion—by using tax amnesties—tax evaders do not legally avoid taxes more. This result suggests that fighting tax evasion can be an effective way to collect more tax revenue from the ultra-wealthy.

Figure 1

America’s unreported economy: measuring the size, growth and determinants of income tax evasion in the U.S.: https://link.springer.com/article/10.1007/s10611-011-9346-x
This study empirically investigates the extent of noncompliance with the tax code and examines the determinants of federal income tax evasion in the U.S. Employing a refined version of Feige’s (Staff Papers, International Monetary Fund 33(4):768–881, 1986, 1989) General Currency Ratio (GCR) model to estimate a time series of unreported income as our measure of tax evasion, we find that 18–23% of total reportable income may not properly be reported to the IRS. This gives rise to a 2009 “tax gap” in the range of $390–$540 billion. As regards the determinants of tax noncompliance, we find that federal income tax evasion is an increasing function of the average effective federal income tax rate, the unemployment rate, the nominal interest rate, and per capita real GDP, and a decreasing function of the IRS audit rate. Despite important refinements of the traditional currency ratio approach for estimating the aggregate size and growth of unreported economies, we conclude that the sensitivity of the results to different benchmarks, imperfect data sources and alternative specifying assumptions precludes obtaining results of sufficient accuracy and reliability to serve as effective policy guides.
pdf  study  economics  micro  evidence-based  data  europe  nordic  scale  class  compensation  money  monetary-fiscal  political-econ  redistribution  taxes  madisonian  inequality  history  mostly-modern  natural-experiment  empirical  🎩  cocktail  correlation  models  supply-demand  GT-101  crooked  elite  vampire-squid  nationalism-globalism  multi  pro-rata  usa  time-series  trends  world-war  cold-war  government  todo  planning  long-term  trivia  law  crime  criminology  estimate  speculation  measurement  labor  macro  econ-metrics  wealth  stock-flow  time  density  criminal-justice  frequency  dark-arts  traces  evidence 
october 2017 by nhaliday
Evidence-based | West Hunter
The central notion of evidence-based medicine is that our understanding of human biology is imperfect. Some of the ideas we come up with for treating and preventing disease are effective, but most are not – worse than useless. So we need careful, rigorous statistical studies before implementing those ideas on a wide scale. A good example of doing this the wrong way was when doctors started recommending having babies sleep prone, which roughly doubled the incidence of sudden infant death syndrome for the next several decades.

It seems to me that our understanding of psychology, sociology, economics, political science, and education is at least as imperfect as our understanding of biomedicine.

https://westhunt.wordpress.com/2015/01/24/evidence-based/#comment-65904
“Measure twice, cut once” – can’t get much more elitist than that!

Carefully testing innovations on a small scale before widely implementing them is pretty much the opposite of what self-appointed elites have done. Are you deef or something?

https://westhunt.wordpress.com/2015/01/24/evidence-based/#comment-66035
To the extent that they diverge from accepted best practice, physicians, on average, add negative value. I’ve seen this in action, and statistical studies back it up. In other words, Gregory House is a fictional character.
west-hunter  scitariat  discussion  truth  westminster  social-science  academia  psychology  social-psych  sociology  economics  polisci  education  medicine  meta:medicine  evidence-based  empirical  elite  technocracy  cochrane  best-practices  marginal  multi  poast  vampire-squid  humility  reason  ability-competence  the-watchers 
september 2017 by nhaliday
Medicine as a pseudoscience | West Hunter
The idea that venesection was a good thing, or at least not so bad, on the grounds that one in a few hundred people have hemochromatosis (in Northern Europe) reminds me of the people who don’t wear a seatbelt, since it would keep them from being thrown out of their convertible into a waiting haystack, complete with nubile farmer’s daughter. Daughters. It could happen. But it’s not the way to bet.

Back in the good old days, Charles II, age 53, had a fit one Sunday evening, while fondling two of his mistresses.

Monday they bled him (cupping and scarifying) of eight ounces of blood. Followed by an antimony emetic, vitriol in peony water, purgative pills, and a clyster. Followed by another clyster after two hours. Then syrup of blackthorn, more antimony, and rock salt. Next, more laxatives, white hellebore root up the nostrils. Powdered cowslip flowers. More purgatives. Then Spanish Fly. They shaved his head and stuck blistering plasters all over it, plastered the soles of his feet with tar and pigeon-dung, then said good-night.

...

Friday. The king was worse. He tells them not to let poor Nelly starve. They try the Oriental Bezoar Stone, and more bleeding. Dies at noon.

Most people didn’t suffer this kind of problem with doctors, since they never saw one. Charles had six. Now Bach and Handel saw the same eye surgeon, John Taylor – who blinded both of them. Not everyone can put that on his resume!

You may wonder how medicine continued to exist, if it had a negative effect, on the whole. There’s always the placebo effect – at least there would be, if it existed. Any real placebo effect is very small: I’d guess exactly zero. But there is regression to the mean. You see the doctor when you’re feeling worse than average – and afterwards, if he doesn’t kill you outright, you’re likely to feel better. Which would have happened whether you’d seen him or not, but they didn’t often do RCTs back in the day – I think James Lind was the first (1747).
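The regression-to-the-mean mechanism is easy to exhibit numerically. A minimal simulation, with illustrative parameters of my own rather than anything from the post:

    import random

    random.seed(0)
    N = 100_000
    # felt health = stable baseline + transient day-to-day noise
    baseline = [random.gauss(0, 1) for _ in range(N)]
    day1 = [b + random.gauss(0, 1) for b in baseline]  # the day you see the doctor
    day2 = [b + random.gauss(0, 1) for b in baseline]  # a later day, zero treatment

    # "patients": those feeling much worse than average on day 1
    sick = [i for i in range(N) if day1[i] < -1.5]
    print(sum(day1[i] for i in sick) / len(sick))  # about -2.2
    print(sum(day2[i] for i in sick) / len(sick))  # about -1.1: halfway back to average, no doctor involved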

Back in the late 19th century, Christian Scientists did better than others when sick, because they didn’t believe in medicine. For reasons I think mistaken, because Mary Baker Eddy rejected the reality of the entire material world, but hey, it worked. Parenthetically, what triggered all that New Age nonsense in 19th century New England? Hash?

This did not change until fairly recently. Sometime in the early 20th century, clinical medicine – what doctors do – hit break-even. Now we can’t do without it. I wonder if there are, or will be, other examples of such a pile of crap turning (mostly) into a real science.

good tweet: https://twitter.com/bowmanthebard/status/897146294191390720
The brilliant GP I've had for 35+ years has retired. How can I find another one who meets my requirements?

1 is overweight
2 drinks more than officially recommended amounts
3 has an amused, tolerant attitude to human failings
4 is well aware that we're all going to die anyway, & there are better or worse ways to die
5 has a healthy skeptical attitude to mainstream medical science
6 is wholly dismissive of “alternative” medicine
7 believes in evolution
8 thinks most diseases get better without intervention, & knows the dangers of false positives
9 understands the base rate fallacy

EconPapers: Was Civil War Surgery Effective?: http://econpapers.repec.org/paper/htrhcecon/444.htm
contra Greg Cochran:
To shed light on the subject, I analyze a data set created by Dr. Edmund Andrews, a Civil war surgeon with the 1st Illinois Light Artillery. Dr. Andrews’s data can be rendered into an observational data set on surgical intervention and recovery, with controls for wound location and severity. The data also admits instruments for the surgical decision. My analysis suggests that Civil War surgery was effective, and increased the probability of survival of the typical wounded soldier, with average treatment effect of 0.25-0.28.

Medical Prehistory: https://westhunt.wordpress.com/2016/03/14/medical-prehistory/
What ancient medical treatments worked?

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76878
In some very, very limited conditions, bleeding?
--
Bad for you 99% of the time.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76947
Colchicine – used to treat gout – discovered by the Ancient Greeks.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76973
Dracunculiasis (Guinea worm)
Wrap the emerging end of the worm around a stick and slowly pull it out.
(3,500 years later, this remains the standard treatment.)
https://en.wikipedia.org/wiki/Ebers_Papyrus

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76971
Some of the progress is from formal medicine, most is from civil engineering, better nutrition (ag science and physical chemistry), less crowded housing.

Nurses vs doctors: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/
Medicine, the things that doctors do, was an ineffective pseudoscience until fairly recently. Until 1800 or so, they were wrong about almost everything. Bleeding, cupping, purging, the four humors – useless. In the 1800s, some began to realize that they were wrong, and became medical nihilists that improved outcomes by doing less. Some patients themselves came to this realization, as when Civil War casualties hid from the surgeons and had better outcomes. Sometime in the early 20th century, MDs reached break-even, and became an increasingly positive influence on human health. As Lewis Thomas said, medicine is the youngest science.

Nursing, on the other hand, has always been useful. Just making sure that a patient is warm and nourished when too sick to take care of himself has helped many survive. In fact, some of the truly crushing epidemics have been greatly exacerbated when there were too few healthy people to take care of the sick.

Nursing must be old, but it can’t have existed forever. Whenever it came into existence, it must have changed the selective forces acting on the human immune system. Before nursing, being sufficiently incapacitated would have been uniformly fatal – afterwards, immune responses that involved a period of incapacitation (with eventual recovery) could have been selectively favored.

when MDs broke even: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/#comment-58981
I’d guess the 1930s. Lewis Thomas thought that he was living through big changes. They had a working serum therapy for lobar pneumonia (antibody-based). They had many new vaccines (diphtheria in 1923, whooping cough in 1926, BCG and tetanus in 1927, yellow fever in 1935, typhus in 1937). Vitamins had been mostly worked out. Insulin was discovered in 1929. Blood transfusions. The sulfa drugs, the first broad-spectrum antibiotics, showed up in 1935.

DALYs per doctor: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/
The disability-adjusted life year (DALY) is a measure of overall disease burden – the number of years lost. I’m wondering just how much harm premodern medicine did, per doctor. How many healthy years of life did a typical doctor destroy (net) in past times?

...

It looks as if the average doctor (in Western medicine) killed a bunch of people over his career (when contrasted with doing nothing). In the Charles Manson class.

Eventually the market saw through this illusion. Only took a couple of thousand years.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100741
That a very large part of healthcare spending is done for non-health reasons. He has a chapter on this in his new book, also check out his paper “Showing That You Care: The Evolution of Health Altruism” http://mason.gmu.edu/~rhanson/showcare.pdf
--
I ran into too much stupidity to finish the article. Hanson’s a loon. For example, when he talks about the paradox of blacks being sentenced for drug offenses more often than whites although they use drugs at a similar rate. No paradox: guys go to the big house for dealing, not for using. Where does he live – Mars?

I had the same reaction when Hanson parroted some dipshit anthropologist arguing that the stupid things people do while drunk are due to social expectations, not really the alcohol.
Horseshit.

I don’t think that being totally unable to understand everybody around you necessarily leads to deep insights.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100744
What I’ve wondered is if there was anything that doctors did that actually was helpful and if perhaps that little bit of success helped them fool people into thinking the rest of it helped.
--
Setting bones. Extracting arrows: the spoon of Diocles. Colchicine for gout. Extracting the Guinea worm. Sometimes they got away with removing the stone. There must be others.
--
Quinine is relatively recent: post-1500. Obstetrical forceps also. Caesarean deliveries were almost always fatal to the mother until fairly recently.

Opium has been around for a long while : it works.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100839
If pre-modern medicine was indeed worse than useless – how do you explain no one noticing that patients who get expensive treatments are worse off than those who didn’t?
--
were worse off. People are kinda dumb – you’ve noticed?
--
My impression is that while people may be “kinda dumb”, ancient customs typically aren’t.
Even if we assume that all people who lived prior to the 19th century were too dumb to make the rational observation, wouldn’t you expect this ancient practice to be subject to selective pressure?
--
Your impression is wrong. Do you think that there was some slick reason for Carthaginians incinerating their first-born?

Theodoric of York, bloodletting: https://www.youtube.com/watch?v=yvff3TViXmY

details on blood-letting and hemochromatosis: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100746

Starting Over: https://westhunt.wordpress.com/2018/01/23/starting-over/
Looking back on it, human health would have … [more]
west-hunter  scitariat  discussion  ideas  medicine  meta:medicine  science  realness  cost-benefit  the-trenches  info-dynamics  europe  the-great-west-whale  history  iron-age  the-classics  mediterranean  medieval  early-modern  mostly-modern  🌞  harvard  aphorism  rant  healthcare  regression-to-mean  illusion  public-health  multi  usa  northeast  pre-ww2  checklists  twitter  social  albion  ability-competence  study  cliometrics  war  trivia  evidence-based  data  intervention  effect-size  revolution  speculation  sapiens  drugs  antiquity  lived-experience  list  survey  questions  housing  population  density  nutrition  wiki  embodied  immune  evolution  poast  chart  markets  civil-liberty  randy-ayndy  market-failure  impact  scale  pro-rata  estimate  street-fighting  fermi  marginal  truth  recruiting  alt-inst  academia  social-science  space  physics  interdisciplinary  ratty  lesswrong  autism  👽  subculture  hanson  people  track-record  crime  criminal-justice  criminology  race  ethanol  error  video  lol  comedy  tradition  institutions  iq  intelligence  MENA  impetus  legacy 
august 2017 by nhaliday
GALILEO'S STUDIES OF PROJECTILE MOTION
During the Renaissance, the focus, especially in the arts, was on representing the real world as accurately as possible, whether on a two-dimensional surface or in a solid such as marble or granite. This required two things. The first was new methods for drawing or painting, e.g., perspective. The second, relevant to this topic, was careful observation.

With the spread of cannon in warfare, the study of projectile motion had taken on greater importance, and now, with more careful observation and more accurate representation, came the realization that projectiles did not move the way Aristotle and his followers had said they did: the path of a projectile did not consist of two consecutive straight line components but was instead a smooth curve. [1]

Now someone needed to come up with a method to determine if there was a special curve a projectile followed. But measuring the path of a projectile was not easy.

Using an inclined plane, Galileo had performed experiments on uniformly accelerated motion, and he now used the same apparatus to study projectile motion. He placed an inclined plane on a table and provided it with a curved piece at the bottom which deflected an inked bronze ball into a horizontal direction. The ball thus accelerated rolled over the table-top with uniform motion and then fell off the edge of the table. Where it hit the floor, it left a small mark. The mark allowed the horizontal and vertical distances traveled by the ball to be measured. [2]

By varying the ball's horizontal velocity and vertical drop, Galileo was able to determine that the path of a projectile is parabolic.
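In modern notation the result drops out in two lines: uniform horizontal motion plus uniformly accelerated fall gives

    x = v t, \qquad y = \tfrac{1}{2} g t^{2}
    \quad\Longrightarrow\quad y = \frac{g}{2 v^{2}}\, x^{2},

a parabola. Since the coefficient depends on v, varying the ball's horizontal launch speed while measuring the horizontal and vertical distances gives a direct check of the whole picture.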

https://www.scientificamerican.com/author/stillman-drake/

Galileo's Discovery of the Parabolic Trajectory: http://www.jstor.org/stable/24949756

Galileo's Experimental Confirmation of Horizontal Inertia: Unpublished Manuscripts (Galileo Gleanings XXII): https://sci-hub.tw/https://www.jstor.org/stable/229718
- Stillman Drake

MORE THAN A DECADE HAS ELAPSED since Thomas Settle published a classic paper in which Galileo's well-known statements about his experiments on inclined planes were completely vindicated.1 Settle's paper replied to an earlier attempt by Alexandre Koyre to show that Galileo could not have obtained the results he claimed in his Two New Sciences by actual observations using the equipment there described. The practical ineffectiveness of Settle's painstaking repetition of the experiments in altering the opinion of historians of science is only too evident. Koyre's paper was reprinted years later in book form without so much as a note by the editors concerning Settle's refutation of its thesis.2 And the general literature continues to belittle the role of experiment in Galileo's physics.

More recently James MacLachlan has repeated and confirmed a different experiment reported by Galileo-one which has always seemed highly exaggerated and which was also rejected by Koyre with withering sarcasm.3 In this case, however, it was accuracy of observation rather than precision of experimental data that was in question. Until now, nothing has been produced to demonstrate Galileo's skill in the design and the accurate execution of physical experiment in the modern sense.

Part of a page of Galileo's unpublished manuscript notes, written late in 1608, corroborating his inertial assumption and leading directly to his discovery of the parabolic trajectory. (Folio 116v, Vol. 72, MSS Galileiani; courtesy of the Biblioteca Nazionale di Firenze.)

...

(The same skeptical historians, however, believe that to show that Galileo could have used the medieval mean-speed theorem suffices to prove that he did use it, though it is found nowhere in his published or unpublished writings.)

...

Now, it happens that among Galileo's manuscript notes on motion there are many pages that were not published by Favaro, since they contained only calculations or diagrams without attendant propositions or explanations. Some pages that were published had first undergone considerable editing, making it difficult if not impossible to discern their full significance from their printed form. This unpublished material includes at least one group of notes which cannot satisfactorily be accounted for except as representing a series of experiments designed to test a fundamental assumption, which led to a new, important discovery. In these documents precise empirical data are given numerically, comparisons are made with calculated values derived from theory, a source of discrepancy from still another expected result is noted, a new experiment is designed to eliminate this, and further empirical data are recorded. The last-named data, although proving to be beyond Galileo's powers of mathematical analysis at the time, when subjected to modern analysis turn out to be remarkably precise. If this does not represent the experimental process in its fully modern sense, it is hard to imagine what standards historians require to be met.

The discovery of these notes confirms the opinion of earlier historians. They read only Galileo's published works, but did so without a preconceived notion of continuity in the history of ideas. The opinion of our more sophisticated colleagues has its sole support in philosophical interpretations that fit with preconceived views of orderly long-term scientific development. To find manuscript evidence that Galileo was at home in the physics laboratory hardly surprises me. I should find it much more astonishing if, by reasoning alone, working only from fourteenth-century theories and conclusions, he had continued along lines so different from those followed by profound philosophers in earlier centuries. It is to be hoped that, warned by these examples, historians will begin to restore the old cautionary clauses in analogous instances in which scholarly opinions are revised without new evidence, simply to fit historical theories.

In what follows, the newly discovered documents are presented in the context of a hypothetical reconstruction of Galileo's thought.

...

As early as 1590, if we are correct in ascribing Galileo's juvenile De motu to that date, it was his belief that an ideal body resting on an ideal horizontal plane could be set in motion by a force smaller than any previously assigned force, however small. By "horizontal plane" he meant a surface concentric with the earth but which for reasonable distances would be indistinguishable from a level plane. Galileo noted at the time that experiment did not confirm this belief that the body could be set in motion by a vanishingly small force, and he attributed the failure to friction, pressure, the imperfection of material surfaces and spheres, and the departure of level planes from concentricity with the earth.5

It followed from this belief that under ideal conditions the motion so induced would also be perpetual and uniform. Galileo did not mention these consequences until much later, and it is impossible to say just when he perceived them. They are, however, so evident that it is safe to assume that he saw them almost from the start. They constitute a trivial case of the proposition he seems to have been teaching before 1607 – that a mover is required to start motion, but that absence of resistance is then sufficient to account for its continuation.6

In mid-1604, following some investigations of motions along circular arcs and motions of pendulums, Galileo hit upon the law that in free fall the times elapsed from rest are as the smaller distance is to the mean proportional between two distances fallen.7 This gave him the times-squared law as well as the rule of odd numbers for successive distances and speeds in free fall. During the next few years he worked out a large number of theorems relating to motion along inclined planes, later published in the Two New Sciences. He also arrived at the rule that the speed terminating free fall from rest was double the speed of the fall itself. These theorems survive in manuscript notes of the period 1604-1609. (Work during these years can be identified with virtual certainty by the watermarks in the paper used, as I have explained elsewhere.8)
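The mean-proportional phrasing is the times-squared law in Euclidean dress. For falls from rest through distances d_1 < d_2,

    \frac{t_1}{t_2} \;=\; \frac{d_1}{\sqrt{d_1 d_2}} \;=\; \sqrt{\frac{d_1}{d_2}}
    \quad\Longleftrightarrow\quad d \propto t^{2},

and the rule of odd numbers follows at once, since the distance covered in the nth equal time interval goes as n^2 - (n-1)^2 = 2n - 1, i.e. 1 : 3 : 5 : 7 : ...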

In the autumn of 1608, after a summer at Florence, Galileo seems to have interested himself in the question whether the actual slowing of a body moving horizontally followed any particular rule. On folio 117i of the manuscripts just mentioned, the numbers 196, 155, 121, 100 are noted along the horizontal line near the middle of the page (see Fig. 1). I believe that this was the first entry on this leaf, for reasons that will appear later, and that Galileo placed his grooved plane in the level position and recorded distances traversed in equal times along it. Using a metronome, and rolling a light wooden ball about 4 3/4 inches in diameter along a plane with a groove 1 3/4 inches wide, I obtained similar relations over a distance of 6 feet. The figures obtained vary greatly for balls of different materials and weights and for greatly different initial speeds.9 But it suffices for my present purposes that Galileo could have obtained the figures noted by observing the actual deceleration of a ball along a level plane. It should be noted that the watermark on this leaf is like that on folio 116, to which we shall come presently, and it will be seen later that the two sheets are closely connected in time in other ways as well.
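Read as distances traversed in successive equal times (Drake's reading), the folio numbers already exhibit the slowing; a trivial check:

    d = [196, 155, 121, 100]  # distances in successive equal time intervals
    decrements = [a - b for a, b in zip(d, d[1:])]
    print(decrements)  # [41, 34, 21]: the ball covers less ground each interval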

The relatively rapid deceleration is obviously related to the contact of ball and groove. Were the ball to roll right off the end of the plane, all resistance to horizontal motion would be virtually removed. If, then, there were any way to have a given ball leave the plane at different speeds of which the ratios were known, Galileo's old idea that horizontal motion would continue uniformly in the absence of resistance could be put to test. His law of free fall made this possible. The ratios of speeds could be controlled by allowing the ball to fall vertically through known heights, at the ends of which it would be deflected horizontally. Falls through given heights … [more]
nibble  org:junk  org:edu  physics  mechanics  gravity  giants  the-trenches  discovery  history  early-modern  europe  mediterranean  the-great-west-whale  frontier  science  empirical  experiment  arms  technology  lived-experience  time  measurement  dirty-hands  iron-age  the-classics  medieval  sequential  wire-guided  error  wiki  reference  people  quantitative-qualitative  multi  pdf  piracy  study  essay  letters  discrete  news  org:mag  org:sci  popsci 
august 2017 by nhaliday
Stolen generations | West Hunter
Someone was quoted as saying that if you adopted an Australian Aborigine kid and raised him in England, he’d do just fine. This is a standard prediction, or maybe really an assumption, of most social scientists: people are the same everywhere. Let me put it more precisely: If you adopted a random draw of such kids just after birth, and then treated them in the same way that local native kids were treated, they’d end up with the same adult IQ, on average. And the same rate of alcoholism, and so forth. Same with any other racial group, the prediction says.

But is this actually true? The same people would say that one-day-old babies from different groups ought to act the same, and that’s certainly not true.

I would think that there was a lot of adoption of Australian Aborigines going on in Australia, back in the day. What were the results?

https://westhunt.wordpress.com/2014/03/15/stolen-generations/#comment-23715
I don’t see how you could spend a lot of time on this (aboriginal education) and not see the pattern in front of you. But people do, certainly in the US as well. Here’s a fun quote: “There is no logical reason to expect that the number of minority students in gifted programs would not be proportional to their representation in the general population.” (p. 498) Frasier 1997
Of course this never happens, never has happened, but still it’s gotta happen.

This is secondhand, but an interesting story. There was once a graduate student in anthropology at UNM who was very interested in Australian Aboriginal education. I believe that’s what he wanted to do when he got out. He did a lot of digging into the subject, including mimeographed stuff that never got published, and much against his will came to the conclusion that Aboriginals really were different from Europeans, really did have significantly lower intelligence. It drove him nuts – he actually had to be hospitalized. Dropped out of the program.

https://westhunt.wordpress.com/2014/03/15/stolen-generations/#comment-23811
It’s easier than you think. Just threaten the members of the IRB – they generally have no honor.

The long-term effects of American Indian boarding schools: http://marginalrevolution.com/marginalrevolution/2017/09/long-term-effects-american-indian-boarding-schools.html
http://www.sciencedirect.com/science/article/pii/S0304387817300664
Combining recent reservation-level census data and school enrollment data from 1911 to 1932, I find that reservations that sent a larger share of students to off-reservation boarding schools have higher high school graduation rates, higher per capita income, lower poverty rates, a greater proportion of exclusively English speakers, and smaller family sizes. These results are supported when distance to the nearest off-reservation boarding school that subsequently closed is used as an instrument for the proportion of past boarding school students. I conclude with a discussion of the possible reasons for this link.

...

Last, the link drawn here between higher boarding school share and assimilation should not be misinterpreted as an endorsement of coercive assimilation.
west-hunter  scitariat  discussion  ideas  pop-diff  iq  rant  attaq  rhetoric  farmers-and-foragers  anglo  history  mostly-modern  natural-experiment  experiment  empirical  field-study  troll  lol  aphorism  stories  poast  westminster  multi  truth  academia  grad-school  honor  ethics  ethanol  race  environmental-effects  education  usa  econotariat  marginal-rev  study  summary  commentary  quotes  economics  cliometrics  microfoundations  path-dependence  endo-exo  assimilation  intervention  science  censorship  courage  input-output  endogenous-exogenous  branches 
august 2017 by nhaliday
The Long-Run Weight of Communism or the Weight of LongRun History?
This study provides evidence that culture, understood as values and beliefs, moves very slowly. Despite massive institutional change, values and beliefs in transition countries have not changed much over the last 20 years. Evidence suggests that culture is affected by the long-run historical past, in particular participation in empires for over 100 years. Current institutional evolutions in transition countries might be more affected by their long-run past than by the communist experience of the twentieth century.
pdf  study  economics  growth-econ  broad-econ  cliometrics  path-dependence  wealth-of-nations  divergence  history  mostly-modern  communism  authoritarianism  political-econ  institutions  eastern-europe  russia  long-short-run  culture  cultural-dynamics  🎩  values  general-survey  nationalism-globalism  competition  individualism-collectivism  gender  labor  democracy  expert  antidemos  capitalism  microfoundations  expert-experience  roots  top-n  branches  intel  china  asia  sinosphere  orient  technocracy  europe  germanic  agriculture  heavy-industry  pre-ww2  urban-rural  EU  trust  conquest-empire  empirical  markets  usa  migration  tribalism  us-them  convergence  enlightenment-renaissance-restoration-reformation  confucian  comparison  flux-stasis  hari-seldon 
august 2017 by nhaliday
Constitutive equation - Wikipedia
In physics and engineering, a constitutive equation or constitutive relation is a relation between two physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance, and approximates the response of that material to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or forces and strains or deformations.

Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior.[1] See the article Linear response function.
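Two textbook instances make the scalar-to-tensor step concrete (standard relations, not specific to this article):

    \mathbf{J} = \sigma \mathbf{E} \quad\longrightarrow\quad J_i = \sigma_{ij} E_j

(Ohm's law: a scalar conductivity for an isotropic conductor becomes a conductivity tensor in an anisotropic crystal), and

    \sigma_{ij} = C_{ijkl}\, \varepsilon_{kl}

(linear elasticity: Hooke's law, with the fourth-rank stiffness tensor relating stress to strain).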
nibble  wiki  reference  article  physics  mechanics  electromag  identity  estimate  approximation  empirical  stylized-facts  list  dirty-hands  fluid  logos 
august 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
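For reference, the canonical legible f is Cobb-Douglas, which pins factor shares to constants. A minimal sketch, with alpha = 0.3 as a conventional illustrative value (Kling's post contains no specific calibration):

    def cobb_douglas(K, L, A=1.0, alpha=0.3):
        # Y = A * K^alpha * L^(1-alpha): capital earns share alpha,
        # labor earns share 1 - alpha, regardless of K and L
        return A * K**alpha * L**(1 - alpha)

    Y = cobb_douglas(K=100.0, L=200.0)
    eps = 1e-6
    MPK = (cobb_douglas(100.0 + eps, 200.0) - Y) / eps  # marginal product of capital
    MPL = (cobb_douglas(100.0, 200.0 + eps) - Y) / eps  # marginal product of labor
    print(MPK * 100.0 / Y, MPL * 200.0 / Y)  # ~0.3 and ~0.7: constant factor shares

Kling's point is that developments like intangible capital and winner-take-all markets make this tidy two-input accounting fit the data less and less well.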
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
The Determinants of Trust
Both individual experiences and community characteristics influence how much people trust each other. Using data drawn from US localities we find that the strongest factors that reduce trust are: i) a recent history of traumatic experiences, even though the passage of time reduces this effect fairly rapidly; ii) belonging to a group that historically felt discriminated against, such as minorities (blacks in particular) and, to a lesser extent, women; iii) being economically unsuccessful in terms of income and education; iv) living in a racially mixed community and/or in one with a high degree of income disparity. Religious beliefs and ethnic origins do not significantly affect trust. The latter result may be an indication that the American melting pot at least up to a point works, in terms of homogenizing attitudes of different cultures, even though racial cleavages leading to low trust are still quite high.

Understanding Trust: http://www.nber.org/papers/w13387
In this paper we resolve this puzzle by recognizing that trust has two components: a belief-based one and a preference-based one. While the sender's behavior reflects both, we show that WVS-like measures capture mostly the belief-based component, while questions on past trusting behavior are better at capturing the preference component of trust.

MEASURING TRUST: http://scholar.harvard.edu/files/laibson/files/measuring_trust.pdf
We combine two experiments and a survey to measure trust and trustworthiness— two key components of social capital. Standard attitudinal survey questions about trust predict trustworthy behavior in our experiments much better than they predict trusting behavior. Trusting behavior in the experiments is predicted by past trusting behavior outside of the experiments. When individuals are closer socially, both trust and trustworthiness rise. Trustworthiness declines when partners are of different races or nationalities. High status individuals are able to elicit more trustworthiness in others.
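These experiments are variants of the standard trust (investment) game. A minimal sketch of its payoff structure, with the conventional multiplier of 3 (the papers' exact parameters may differ):

    def trust_game(endowment, sent, return_share, multiplier=3):
        # sender sends `sent` <= endowment; it is multiplied; the receiver
        # returns a fraction `return_share` of the multiplied pot
        pot = sent * multiplier
        returned = return_share * pot
        return endowment - sent + returned, pot - returned

    print(trust_game(endowment=10, sent=10, return_share=0.5))  # (15, 15): trust repaid
    print(trust_game(endowment=10, sent=10, return_share=0.0))  # (0, 30): sender cheated

The amount sent measures trust; the share returned measures trustworthiness, which is why the two can come apart in surveys.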

What is Social Capital? The Determinants of Trust and Trustworthiness: http://www.nber.org/papers/w7216
Using a sample of Harvard undergraduates, we analyze trust and social capital in two experiments. Trusting behavior and trustworthiness rise with social connection; differences in race and nationality reduce the level of trustworthiness. Certain individuals appear to be persistently more trusting, but these people do not say they are more trusting in surveys. Survey questions about trust predict trustworthiness not trust. Only children are less trustworthy. People behave in a more trustworthy manner towards higher status individuals, and therefore status increases earnings in the experiment. As such, high status persons can be said to have more social capital.

Trust and Cheating: http://www.nber.org/papers/w18509
We find that: i) both parties to a trust exchange have implicit notions of what constitutes cheating even in a context without promises or messages; ii) these notions are not unique - the vast majority of senders would feel cheated by a negative return on their trust/investment, whereas a sizable minority defines cheating according to an equal split rule; iii) these implicit notions affect the behavior of both sides to the exchange in terms of whether to trust or cheat and to what extent. Finally, we show that individual's notions of what constitutes cheating can be traced back to two classes of values instilled by parents: cooperative and competitive. The first class of values tends to soften the notion while the other tightens it.

Nationalism and Ethnic-Based Trust: Evidence from an African Border Region: https://u.osu.edu/robinson.1012/files/2015/12/Robinson_NationalismTrust-1q3q9u1.pdf
These results offer microlevel evidence that a strong and salient national identity can diminish ethnic barriers to trust in diverse societies.

One Team, One Nation: Football, Ethnic Identity, and Conflict in Africa: http://conference.nber.org/confer//2017/SI2017/DEV/Durante_Depetris-Chauvin.pdf
Do collective experiences that prime sentiments of national unity reduce interethnic tensions and conflict? We examine this question by looking at the impact of national football teams’ victories in sub-Saharan Africa. Combining individual survey data with information on over 70 official matches played between 2000 and 2015, we find that individuals interviewed in the days after a victory of their country’s national team are less likely to report a strong sense of ethnic identity and more likely to trust people of other ethnicities than those interviewed just before. The effect is sizable and robust and is not explained by generic euphoria or optimism. Crucially, national victories do not only affect attitudes but also reduce violence. Indeed, using plausibly exogenous variation from close qualifications to the Africa Cup of Nations, we find that countries that (barely) qualified experience significantly less conflict in the following six months than countries that (barely) did not. Our findings indicate that, even where ethnic tensions have deep historical roots, patriotic shocks can reduce inter-ethnic tensions and have a tangible impact on conflict.

Why Does Ethnic Diversity Undermine Public Goods Provision?: http://www.columbia.edu/~mh2245/papers1/HHPW.pdf
We identify three families of mechanisms that link diversity to public goods provision—what we term “preferences,” “technology,” and “strategy selection” mechanisms—and run a series of experimental games that permit us to compare the explanatory power of distinct mechanisms within each of these three families. Results from games conducted with a random sample of 300 subjects from a slum neighborhood of Kampala, Uganda, suggest that successful public goods provision in homogenous ethnic communities can be attributed to a strategy selection mechanism: in similar settings, co-ethnics play cooperative equilibria, whereas non-co-ethnics do not. In addition, we find evidence for a technology mechanism: co-ethnics are more closely linked on social networks and thus plausibly better able to support cooperation through the threat of social sanction. We find no evidence for prominent preference mechanisms that emphasize the commonality of tastes within ethnic groups or a greater degree of altruism toward co-ethnics, and only weak evidence for technology mechanisms that focus on the impact of shared ethnicity on the productivity of teams.

does it generalize to first world?

Higher Intelligence Groups Have Higher Cooperation Rates in the Repeated Prisoner's Dilemma: https://ideas.repec.org/p/iza/izadps/dp8499.html
The initial cooperation rates are similar; cooperation increases in the groups with higher intelligence to reach almost full cooperation, while declining in the groups with lower intelligence. The difference is produced by the cumulation of small but persistent differences in the response to past cooperation of the partner. In higher intelligence subjects, cooperation after the initial stages is immediate and becomes the default mode, while defection requires more time. For lower intelligence groups this difference is absent. Cooperation of higher intelligence subjects is payoff sensitive, thus not automatic: in a treatment with lower continuation probability there is no difference between different intelligence groups.

Why societies cooperate: https://voxeu.org/article/why-societies-cooperate
Three attributes are often suggested to generate cooperative behaviour – a good heart, good norms, and intelligence. This column reports the results of a laboratory experiment in which groups of players benefited from learning to cooperate. It finds overwhelming support for the idea that intelligence is the primary condition for a socially cohesive, cooperative society. Warm feelings towards others and good norms have only a small and transitory effect.
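A minimal repeated prisoner's dilemma sketch of the mechanism the IZA abstract describes: small, persistent differences in how reliably players reciprocate past cooperation compound over many rounds. Parameters here are illustrative, not the paper's:

    import random

    def coop_rate(p_reciprocate, p_forgive, rounds=1000, pairs=200, seed=1):
        # noisy tit-for-tat: answer a partner's C with C with prob p_reciprocate,
        # answer a partner's D with C with prob p_forgive
        rng = random.Random(seed)
        coop = 0
        for _ in range(pairs):
            a = b = "C"
            for _ in range(rounds):
                coop += (a == "C") + (b == "C")
                pa = p_reciprocate if b == "C" else p_forgive
                pb = p_reciprocate if a == "C" else p_forgive
                a = "C" if rng.random() < pa else "D"
                b = "C" if rng.random() < pb else "D"
        return coop / (2 * rounds * pairs)

    print(coop_rate(0.98, 0.05))  # reliable reciprocators: cooperation settles near 0.7
    print(coop_rate(0.85, 0.05))  # slightly leakier responses: cooperation collapses toward 0.25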

individual payoff, etc.:

Trust, Values and False Consensus: http://www.nber.org/papers/w18460
Trust beliefs are heterogeneous across individuals and, at the same time, persistent across generations. We investigate one mechanism yielding these dual patterns: false consensus. In the context of a trust game experiment, we show that individuals extrapolate from their own type when forming trust beliefs about the same pool of potential partners - i.e., more (less) trustworthy individuals form more optimistic (pessimistic) trust beliefs - and that this tendency continues to color trust beliefs after several rounds of game-play. Moreover, we show that one's own type/trustworthiness can be traced back to the values parents transmit to their children during their upbringing. In a second closely-related experiment, we show the economic impact of mis-calibrated trust beliefs stemming from false consensus. Miscalibrated beliefs lower participants' experimental trust game earnings by about 20 percent on average.

The Right Amount of Trust: http://www.nber.org/papers/w15344
We investigate the relationship between individual trust and individual economic performance. We find that individual income is hump-shaped in a measure of intensity of trust beliefs. Our interpretation is that highly trusting individuals tend to assume too much social risk and to be cheated more often, ultimately performing less well than those with a belief close to the mean trustworthiness of the population. On the other hand, individuals with overly pessimistic beliefs avoid being cheated, but give up profitable opportunities, therefore underperforming. The cost of either too much or too little trust is comparable to the income lost by forgoing college.

...

This framework allows us to show that income-maximizing trust typically exceeds the trust level of the average person as well as to estimate the distribution of income lost to trust mistakes. We find that although a majority of individuals has well calibrated beliefs, a non-trivial proportion of the population (10%) has trust beliefs sufficiently poorly calibrated to lower income by more than 13%.

Do Trust and … [more]
study  economics  alesina  growth-econ  broad-econ  trust  cohesion  social-capital  religion  demographics  race  diversity  putnam-like  compensation  class  education  roots  phalanges  general-survey  multi  usa  GT-101  conceptual-vocab  concept  behavioral-econ  intricacy  composition-decomposition  values  descriptive  correlation  harvard  field-study  migration  poll  status  🎩  🌞  chart  anthropology  cultural-dynamics  psychology  social-psych  sociology  cooperate-defect  justice  egalitarianism-hierarchy  inequality  envy  n-factor  axelrod  pdf  microfoundations  nationalism-globalism  africa  intervention  counter-revolution  tribalism  culture  society  ethnocentrism  coordination  world  developing-world  innovation  econ-productivity  government  stylized-facts  madisonian  wealth-of-nations  identity-politics  public-goodish  s:*  legacy  things  optimization  curvature  s-factor  success  homo-hetero  higher-ed  models  empirical  contracts  human-capital  natural-experiment  endo-exo  data  scale  trade  markets  time  supply-demand  summary 
august 2017 by nhaliday
Controversial New Theory Suggests Life Wasn't a Fluke of Biology—It Was Physics | WIRED
First Support for a Physics Theory of Life: https://www.quantamagazine.org/first-support-for-a-physics-theory-of-life-20170726/
Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.
news  org:mag  profile  popsci  bio  xenobio  deep-materialism  roots  eden  physics  interdisciplinary  applications  ideas  thermo  complex-systems  cybernetics  entropy-like  order-disorder  arrows  phys-energy  emergent  empirical  org:sci  org:inst  nibble  chemistry  fixed-point  wild-ideas  multi 
august 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.
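That brittleness shows up even in the simplest possible setting. A toy illustration of my own, not Littman's: gradient descent on f(x) = x^2 only behaves when the step size sits in a fairly narrow window.

    def gd(lr, steps=50, x0=10.0):
        # minimize f(x) = x^2 by gradient descent; f'(x) = 2x
        x = x0
        for _ in range(steps):
            x = x - lr * 2 * x
        return x

    for lr in [0.001, 0.1, 0.9, 1.1]:
        print(lr, gd(lr))
    # 0.001: barely moves; 0.1: converges; 0.9: converges while oscillating; 1.1: blows up

Deep nets multiply this single knob into dozens of coupled ones, which is the replication problem he describes.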

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Is the U.S. Aggregate Production Function Cobb-Douglas? New Estimates of the Elasticity of Substitution∗
world-wide: http://www.socsci.uci.edu/~duffy/papers/jeg2.pdf
https://www.weforum.org/agenda/2016/01/is-the-us-labour-share-as-constant-as-we-thought
https://www.economicdynamics.org/meetpapers/2015/paper_844.pdf
We find that IPP capital entirely explains the observed decline of the US labor share, which otherwise is secularly constant over the past 65 years for structures and equipment capital. The labor share decline simply reflects the fact that the US economy is undergoing a transition toward a larger IPP sector.
https://ideas.repec.org/p/red/sed015/844.html
http://www.robertdkirkby.com/blog/2015/summary-of-piketty-i/
https://www.brookings.edu/bpea-articles/deciphering-the-fall-and-rise-in-the-net-capital-share/
The Fall of the Labor Share and the Rise of Superstar Firms: http://www.nber.org/papers/w23396
The Decline of the U.S. Labor Share: https://www.brookings.edu/wp-content/uploads/2016/07/2013b_elsby_labor_share.pdf
Table 2 has industry disaggregation
Estimating the U.S. labor share: https://www.bls.gov/opub/mlr/2017/article/estimating-the-us-labor-share.htm

Why Workers Are Losing to Capitalists: https://www.bloomberg.com/view/articles/2017-09-20/why-workers-are-losing-to-capitalists
Automation and offshoring may be conspiring to reduce labor's share of income.
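
For reference, the elasticity of substitution in the paper's title is exactly what decides whether capital deepening can move the labor share at all. A minimal sketch in standard two-factor CES notation (mine, not the paper's):

  Y = A [ \alpha K^{(\sigma-1)/\sigma} + (1-\alpha) L^{(\sigma-1)/\sigma} ]^{\sigma/(\sigma-1)}, \qquad s_L = wL/Y = (1-\alpha) (AL/Y)^{(\sigma-1)/\sigma}

As \sigma \to 1 this collapses to Cobb-Douglas, Y = A K^{\alpha} L^{1-\alpha}, whose labor share is pinned at 1-\alpha no matter how much capital accumulates; only with \sigma > 1 does capital deepening (rising Y/L) push s_L down. So a secularly constant labor share is evidence for \sigma \approx 1, while the measured decline points either to \sigma > 1 or, as the Koh et al. abstract above argues, to a compositional shift toward IPP capital.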
pdf  study  economics  growth-econ  econometrics  usa  data  empirical  analysis  labor  capital  econ-productivity  manifolds  magnitude  multi  world  🎩  piketty  econotariat  compensation  inequality  winner-take-all  org:ngo  org:davos  flexibility  distribution  stylized-facts  regularizer  hmm  history  mostly-modern  property-rights  arrows  invariance  industrial-org  trends  wonkish  roots  synthesis  market-power  efficiency  variance-components  business  database  org:gov  article  model-class  models  automation  nationalism-globalism  trade  news  org:mag  org:biz  org:bv  noahpinion  explanation  summary  methodology  density  polarization  map-territory  input-output 
july 2017 by nhaliday
A Review of Avner Greif’s Institutions and the Path to the Modern Economy: Lessons from Medieval Trade
Avner Greif’s Institutions and the Path to the Modern Economy: Lessons from Medieval Trade (Cambridge University Press, 2006) is a major work in the ongoing project of many economists and economic historians to show that institutions are the fundamental driver of all economic history, and of all contemporary differences in economic performance. This review outlines the contribution of this book to the project and the general status of this long standing ambition.
pdf  spearhead  gregory-clark  essay  article  books  review  economics  growth-econ  broad-econ  institutions  history  early-modern  europe  the-great-west-whale  divergence  🎩  industrial-revolution  medieval  critique  roots  world  measurement  empirical  realness  cultural-dynamics  north-weingast-like  modernity  microfoundations  aphorism  track-record 
july 2017 by nhaliday
Alzheimers | West Hunter
Some disease syndromes almost have to be caused by pathogens – for example, any with a fitness impact (prevalence x fitness reduction) > 2% or so, too big to be caused by mutational pressure. I don’t think that this is the case for AD: it hits so late in life that the fitness impact is minimal. However, that hardly means that it can’t be caused by a pathogen or pathogens – a big fraction of all disease syndromes are, including many that strike in old age. That possibility is always worth checking out, not least because infectious diseases are generally easier to prevent and/or treat.
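
The ~2% threshold is presumably mutation-selection balance arithmetic; a hedged sketch, assuming Haldane's equilibrium for a dominant deleterious mutation with per-locus mutation rate \mu and fitness cost s:

  p \approx 2\mu / s \quad\Longrightarrow\quad p \cdot s \approx 2\mu \text{ per locus}, \qquad \sum_{\text{loci}} 2\mu \sim 2 \times 100 \times 10^{-5} = 2 \times 10^{-3}

That is, even a few hundred contributing loci with typical mutation rates around 10^-5 yield a total fitness impact an order of magnitude below 2%, so a syndrome costing more than that can't be sustained by recurrent mutation alone.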

There is new work that strongly suggests that pathogens are the root cause. It appears that the amyloid is an antimicrobial peptide: amyloid-beta binds to invading microbes and then surrounds and entraps them. ‘When researchers injected Salmonella into mice’s hippocampi, a brain area damaged in Alzheimer’s, A-beta quickly sprang into action. It swarmed the bugs and formed aggregates called fibrils and plaques. “Overnight you see the plaques throughout the hippocampus where the bugs were, and then in each single plaque is a single bacterium,” Tanzi says.’

obesity and pathogens: https://westhunt.wordpress.com/2016/05/29/alzheimers/#comment-79757
not sure about this guy, but interesting: https://westhunt.wordpress.com/2016/05/29/alzheimers/#comment-79748
http://perfecthealthdiet.com/2010/06/is-alzheimer%E2%80%99s-caused-by-a-bacterial-infection-of-the-brain/

https://westhunt.wordpress.com/2016/12/13/the-twelfth-battle-of-the-isonzo/
All too often we see large, long-lasting research efforts that never produce, never achieve their goal.

For example, the amyloid hypothesis [accumulation of amyloid-beta oligomers is the cause of Alzheimers] has been dominant for more than 20 years, and has driven development of something like 15 drugs. None of them have worked. At the same time the well-known increased risk from APOe4 has been almost entirely ignored, even though it ought to be a clue to the cause.

In general, when a research effort has been spinning its wheels for a generation or more, shouldn’t we try something different? We could at least try putting a fraction of those research dollars into alternative approaches that have not yet failed repeatedly.

Mostly this applies to research efforts that at least wish they were science. ‘educational research’ is in a special class, and I hardly know what to recommend. Most of the remedial actions that occur to me violate one or more of the Geneva conventions.

APOe4 related to lymphatic system: https://en.wikipedia.org/wiki/Apolipoprotein_E

https://westhunt.wordpress.com/2012/03/06/spontaneous-generation/#comment-2236
Look, if I could find out the sort of places that I usually misplace my keys – if I did, which I don’t – I could find the keys more easily the next time I lose them. If you find out that practitioners of a given field are not very competent, it marks that field as a likely place to look for relatively easy discovery. Thus medicine is a promising field, because on the whole doctors are not terribly good investigators. For example, none of the drugs developed for Alzheimers have worked at all, which suggests that our ideas on the causation of Alzheimers are likely wrong. Which suggests that it may (repeat may) be possible to make good progress on Alzheimers, either by an entirely empirical approach, which is way underrated nowadays, or by dumping the current explanation, finding a better one, and applying it.

You could start by looking at basic notions of field X and asking yourself: How do we really know that? Is there serious statistical evidence? Does that notion even accord with basic theory? This sort of checking is entirely possible. In most of the social sciences, we don’t, there isn’t, and it doesn’t.

Hygiene and the world distribution of Alzheimer’s disease: Epidemiological evidence for a relationship between microbial environment and age-adjusted disease burden: https://academic.oup.com/emph/article/2013/1/173/1861845/Hygiene-and-the-world-distribution-of-Alzheimer-s

Amyloid-β peptide protects against microbial infection in mouse and worm models of Alzheimer’s disease: http://stm.sciencemag.org/content/8/340/340ra72

Fungus, the bogeyman: http://www.economist.com/news/science-and-technology/21676754-curious-result-hints-possibility-dementia-caused-fungal
Fungus and dementia
paper: http://www.nature.com/articles/srep15015

Porphyromonas gingivalis in Alzheimer’s disease brains: Evidence for disease causation and treatment with small-molecule inhibitors: https://advances.sciencemag.org/content/5/1/eaau3333
west-hunter  scitariat  disease  parasites-microbiome  medicine  dementia  neuro  speculation  ideas  low-hanging  todo  immune  roots  the-bones  big-surf  red-queen  multi  🌞  poast  obesity  strategy  info-foraging  info-dynamics  institutions  meta:medicine  social-science  curiosity  🔬  science  meta:science  meta:research  wiki  epidemiology  public-health  study  arbitrage  alt-inst  correlation  cliometrics  path-dependence  street-fighting  methodology  nibble  population-genetics  org:nat  health  embodied  longevity  aging  org:rec  org:biz  org:anglo  news  neuro-nitgrit  candidate-gene  nutrition  diet  org:health  explanans  fashun  empirical  theory-practice  ability-competence  dirty-hands  education  aphorism  truth  westminster  innovation  evidence-based  religion  prudence  track-record  problem-solving  dental  being-right  prioritizing 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it just has to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, and no one really has an incentive to monitor them seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. Thus I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?∗: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
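
The placebo exercise in that abstract is easy to reproduce in miniature. Below is a rough sketch (not the authors' code; the panel dimensions, AR(1) coefficient, and simulation count are illustrative): with serially correlated outcomes and conventional OLS standard errors, a law that by construction does nothing comes out "significant" far more often than the nominal 5 percent.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_states, n_years, rho, n_sims = 50, 20, 0.8, 200
rejections = 0

for _ in range(n_sims):
    # AR(1) errors within each state -> serially correlated outcomes
    eps = np.zeros((n_states, n_years))
    eps[:, 0] = rng.normal(size=n_states)
    for t in range(1, n_years):
        eps[:, t] = rho * eps[:, t - 1] + rng.normal(size=n_states)

    # placebo "law": half the states treated from a random midpoint year on
    treated = rng.choice(n_states, n_states // 2, replace=False)
    start = rng.integers(5, 15)
    law = np.zeros((n_states, n_years))
    law[treated, start:] = 1.0

    # DD regression with state and year fixed effects, conventional SEs
    state_fe = np.kron(np.eye(n_states), np.ones((n_years, 1)))  # matches eps.ravel() ordering
    year_fe = np.kron(np.ones((n_states, 1)), np.eye(n_years))
    X = sm.add_constant(np.column_stack([law.ravel(), state_fe[:, 1:], year_fe[:, 1:]]))
    fit = sm.OLS(eps.ravel(), X).fit()
    if fit.pvalues[1] < 0.05:  # column 1 is the placebo law dummy
        rejections += 1

print(f"placebo laws 'significant' at the 5% level: {rejections / n_sims:.0%}")

Refitting with cluster-robust standard errors at the state level (statsmodels' fit(cov_type="cluster", cov_kwds={"groups": ...})) should pull the rejection rate back toward 5% when the number of states is large, which is in the spirit of the corrections the paper evaluates.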

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing just one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI exclude the OLS estimate.
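
A toy just-identified IV simulation (my own illustration, not Young's) makes points 4, 6, and 7 above concrete: when the first stage is nearly irrelevant, 2SLS estimates scatter wildly around, and often far beyond, the merely biased OLS estimate.

import numpy as np

rng = np.random.default_rng(1)
n, n_sims, beta = 500, 2000, 1.0
ols, tsls = [], []

for _ in range(n_sims):
    z = rng.normal(size=n)                     # instrument
    u = rng.normal(size=n)                     # unobserved confounder
    x = 0.05 * z + u + rng.normal(size=n)      # nearly irrelevant first stage
    y = beta * x + u + rng.normal(size=n)      # u biases OLS upward

    ols.append((x @ y) / (x @ x))              # OLS slope (no intercept; all means are 0)
    tsls.append((z @ y) / (z @ x))             # just-identified 2SLS = the IV ratio

ols, tsls = np.array(ols), np.array(tsls)
print(f"OLS:  mean {ols.mean():.2f}, sd {ols.std():.2f}")          # tightly biased, near 1.5
p75, p25 = np.percentile(tsls, [75, 25])
print(f"2SLS: median {np.median(tsls):.2f}, IQR {p75 - p25:.2f}")  # centered right, hugely dispersed

The OLS bias here is cov(x, u)/var(x) ≈ 0.5, so OLS sits near 1.5 with a tiny standard error, while the 2SLS denominator z'x hovers near zero and the estimates have enormous spread and heavy tails – exactly the regime in which an IV estimate "10x larger than the OLS coefficient" should raise alarm rather than confidence.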

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.

https://twitter.com/wwwojtekk/status/1190731344336293889
https://archive.is/EZu0h
Great (not completely new but still good to have it in one place) discussion of RCTs and inference in economics by Deaton, my favorite sentences (more general than just about RCT) below
Randomization in the tropics revisited: a theme and eleven variations: https://scholar.princeton.edu/sites/default/files/deaton/files/deaton_randomization_revisited_v3_2019.pdf
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated  garett-jones 
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
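
The mean-reversion distinction Hanson leans on is quantitative and easy to see in simulation; a minimal sketch (illustrative parameters, not from the post): forecast dispersion for a mean-reverting parameter plateaus, while for a random walk it keeps growing with the horizon, which is why long caring horizons plus drifting parameters generate alarm.

import numpy as np

rng = np.random.default_rng(2)
n_paths, T, rho = 5000, 1000, 0.9
horizons = [1, 10, 100, 1000]

shocks = rng.normal(size=(n_paths, T))
rw = np.cumsum(shocks, axis=1)        # random walk (add a drift term for trend + walk)

x = np.zeros(n_paths)                 # mean-reverting AR(1): x_t = rho * x_{t-1} + e_t
mr_sd = {}
for t in range(T):
    x = rho * x + shocks[:, t]
    if t + 1 in horizons:
        mr_sd[t + 1] = x.std()

for h in horizons:
    print(f"h={h:4d}: mean-reverting sd={mr_sd[h]:5.2f}, random-walk sd={rw[:, h - 1].std():6.2f}")

The AR(1) dispersion converges to roughly 1/sqrt(1 - rho^2) ≈ 2.3 and stays there; the random walk's grows like sqrt(h), passing 30 by h = 1000.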

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
Why I see academic economics moving left | askblog
http://www.arnoldkling.com/blog/on-the-state-of-economics/
http://www.nationalaffairs.com/publications/detail/how-effective-is-economic-theory
I have a long essay on the scientific status of economics in National Affairs. A few excerpts from the conclusion:

In the end, can we really have effective theory in economics? If by effective theory we mean theory that is verifiable and reliable for prediction and control, the answer is likely no. Instead, economics deals in speculative interpretations and must continue to do so.

Young economists who employ pluralistic methods to study problems are admired rather than marginalized, as they were in 1980. But economists who question the wisdom of interventionist economic policies seem headed toward the fringes of the profession.

This is my essay in which I say that academic economics is on the road to sociology.

example...?:
Property Is Only Another Name for Monopoly: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2818494
Hanson's take more positive: http://www.overcomingbias.com/2017/10/for-stability-rents.html

women:
http://www.arnoldkling.com/blog/college-women-and-the-future-of-economics/
http://www.arnoldkling.com/blog/road-to-sociology-watch-2/
http://www.arnoldkling.com/blog/road-to-sociology-watch-3/
econotariat  cracker-econ  commentary  prediction  trends  economics  social-science  ideology  politics  left-wing  regulation  empirical  measurement  methodology  academia  multi  links  news  org:mag  essay  longform  randy-ayndy  sociology  technocracy  realness  hypocrisy  letters  study  property-rights  taxes  civil-liberty  efficiency  arbitrage  alt-inst  proposal  incentives  westminster  lens  truth  info-foraging  ratty  hanson  summary  review  biases  concrete  abstraction  managerial-state  gender  identity-politics  higher-ed 
may 2017 by nhaliday
Lucio Russo - Wikipedia
In The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn (Italian: La rivoluzione dimenticata), Russo promotes the belief that Hellenistic science in the period 320-144 BC reached heights not achieved by Classical age science, and proposes that it went further than ordinarily thought, in multiple fields not normally associated with ancient science.

La Rivoluzione Dimenticata (The Forgotten Revolution), Reviewed by Sandro Graffi: http://www.ams.org/notices/199805/review-graffi.pdf

Before turning to the question of the decline of Hellenistic science, I come back to the new light shed by the book on Euclid’s Elements and on pre-Ptolemaic astronomy. Euclid’s definitions of the elementary geometric entities—point, straight line, plane—at the beginning of the Elements have long presented a problem.7 Their nature is in sharp contrast with the approach taken in the rest of the book, and continued by mathematicians ever since, of refraining from defining the fundamental entities explicitly but limiting themselves to postulating the properties which they enjoy. Why should Euclid be so hopelessly obscure right at the beginning and so smooth just after? The answer is: the definitions are not Euclid’s. Toward the beginning of the second century A.D. Heron of Alexandria found it convenient to introduce definitions of the elementary objects (a sign of decadence!) in his commentary on Euclid’s Elements, which had been written at least 400 years before. All manuscripts of the Elements copied ever since included Heron’s definitions without mention, whence their attribution to Euclid himself. The philological evidence leading to this conclusion is quite convincing.8

...

What about the general and steady (on the average) impoverishment of Hellenistic science under the Roman empire? This is a major historical problem, strongly tied to the even bigger one of the decline and fall of the antique civilization itself. I would summarize the author’s argument by saying that it basically represents an application to science of a widely accepted general theory on decadence of antique civilization going back to Max Weber. Roman society, mainly based on slave labor, underwent an ultimately unrecoverable crisis as the traditional sources of that labor force, essentially wars, progressively dried up. To save basic farming, the remaining slaves were promoted to be serfs, and poor free peasants reduced to serfdom, but this made trade disappear. A society in which production is almost entirely based on serfdom and with no trade clearly has very little need of culture, including science and technology. As Max Weber pointed out, when trade vanished, so did the marble splendor of the ancient towns, as well as the spiritual assets that went with it: art, literature, science, and sophisticated commercial laws. The recovery of Hellenistic science then had to wait until the disappearance of serfdom at the end of the Middle Ages. To quote Max Weber: “Only then with renewed vigor did the old giant rise up again.”

...

The epilogue contains the (rather pessimistic) views of the author on the future of science, threatened by the apparent triumph of today’s vogue of irrationality even in leading institutions (e.g., an astrology professorship at the Sorbonne). He looks at today’s ever-increasing tendency to teach science more on a fideistic than on a deductive or experimental basis as the first sign of a decline which could be analogous to the post-Hellenistic one.

Praising Alexandrians to excess: https://sci-hub.tw/10.1088/2058-7058/17/4/35
The Economic Record review: https://sci-hub.tw/10.1111/j.1475-4932.2004.00203.x

listed here: https://pinboard.in/u:nhaliday/b:c5c09f2687c1

Was Roman Science in Decline? (Excerpt from My New Book): https://www.richardcarrier.info/archives/13477
people  trivia  cocktail  history  iron-age  mediterranean  the-classics  speculation  west-hunter  scitariat  knowledge  wiki  ideas  wild-ideas  technology  innovation  contrarianism  multi  pdf  org:mat  books  review  critique  regularizer  todo  piracy  physics  canon  science  the-trenches  the-great-west-whale  broad-econ  the-world-is-just-atoms  frontier  speedometer  🔬  conquest-empire  giants  economics  article  growth-econ  cjones-like  industrial-revolution  empirical  absolute-relative  truth  rot  zeitgeist  gibbon  big-peeps  civilization  malthus  roots  old-anglo  britain  early-modern  medieval  social-structure  limits  quantitative-qualitative  rigor  lens  systematic-ad-hoc  analytical-holistic  cycles  space  mechanics  math  geometry  gravity  revolution  novelty  meta:science  is-ought  flexibility  trends  reason  applicability-prereqs  theory-practice  traces  evidence  psycho-atoms 
may 2017 by nhaliday
'Capital in the Twenty-First Century' by Thomas Piketty, reviewed | New Republic
by Robert Solow (positive)

The data then exhibit a clear pattern. In France and Great Britain, national capital stood fairly steadily at about seven times national income from 1700 to 1910, then fell sharply from 1910 to 1950, presumably as a result of wars and depression, reaching a low of 2.5 in Britain and a bit less than 3 in France. The capital-income ratio then began to climb in both countries, and reached slightly more than 5 in Britain and slightly less than 6 in France by 2010. The trajectory in the United States was slightly different: it started at just above 3 in 1770, climbed to 5 in 1910, fell slightly in 1920, recovered to a high between 5 and 5.5 in 1930, fell to below 4 in 1950, and was back to 4.5 in 2010.

The wealth-income ratio in the United States has always been lower than in Europe. The main reason in the early years was that land values bulked less in the wide open spaces of North America. There was of course much more land, but it was very cheap. Into the twentieth century and onward, however, the lower capital-income ratio in the United States probably reflects the higher level of productivity: a given amount of capital could support a larger production of output than in Europe. It is no surprise that the two world wars caused much less destruction and dissipation of capital in the United States than in Britain and France. The important observation for Piketty’s argument is that, in all three countries, and elsewhere as well, the wealth-income ratio has been increasing since 1950, and is almost back to nineteenth-century levels. He projects this increase to continue into the current century, with weighty consequences that will be discussed as we go on.

...

Now if you multiply the rate of return on capital by the capital-income ratio, you get the share of capital in the national income. For example, if the rate of return is 5 percent a year and the stock of capital is six years worth of national income, income from capital will be 30 percent of national income, and so income from work will be the remaining 70 percent. At last, after all this preparation, we are beginning to talk about inequality, and in two distinct senses. First, we have arrived at the functional distribution of income—the split between income from work and income from wealth. Second, it is always the case that wealth is more highly concentrated among the rich than income from labor (although recent American history looks rather odd in this respect); and this being so, the larger the share of income from wealth, the more unequal the distribution of income among persons is likely to be. It is this inequality across persons that matters most for good or ill in a society.
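
The arithmetic in that paragraph is Piketty's own accounting identity (his "first fundamental law of capitalism"), worth writing out since the rest of the argument runs through it:

  \alpha = r \times \beta, \qquad \text{e.g.} \quad \alpha = 5\% \times 6 = 30\%

where \beta is the capital/income ratio, r the average rate of return on capital, and \alpha capital's share of national income. It is an identity, not a theory; the economics lies in what determines r and \beta, and in whether r stays above the growth rate as \beta climbs.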

...

The data are complicated and not easily comparable across time and space, but here is the flavor of Piketty’s summary picture. Capital is indeed very unequally distributed. Currently in the United States, the top 10 percent own about 70 percent of all the capital, half of that belonging to the top 1 percent; the next 40 percent—who compose the “middle class”—own about a quarter of the total (much of that in the form of housing), and the remaining half of the population owns next to nothing, about 5 percent of total wealth. Even that amount of middle-class property ownership is a new phenomenon in history. The typical European country is a little more egalitarian: the top 1 percent own 25 percent of the total capital, and the middle class 35 percent. (A century ago the European middle class owned essentially no wealth at all.) If the ownership of wealth in fact becomes even more concentrated during the rest of the twenty-first century, the outlook is pretty bleak unless you have a taste for oligarchy.

Income from wealth is probably even more concentrated than wealth itself because, as Piketty notes, large blocks of wealth tend to earn a higher return than small ones. Some of this advantage comes from economies of scale, but more may come from the fact that very big investors have access to a wider range of investment opportunities than smaller investors. Income from work is naturally less concentrated than income from wealth. In Piketty’s stylized picture of the United States today, the top 1 percent earns about 12 percent of all labor income, the next 9 percent earn 23 percent, the middle class gets about 40 percent, and the bottom half about a quarter of income from work. Europe is not very different: the top 10 percent collect somewhat less and the other two groups a little more.

You get the picture: modern capitalism is an unequal society, and the rich-get-richer dynamic strongly suggest that it will get more so. But there is one more loose end to tie up, already hinted at, and it has to do with the advent of very high wage incomes. First, here are some facts about the composition of top incomes. About 60 percent of the income of the top 1 percent in the United States today is labor income. Only when you get to the top tenth of 1 percent does income from capital start to predominate. The income of the top hundredth of 1 percent is 70 percent from capital. The story for France is not very different, though the proportion of labor income is a bit higher at every level. Evidently there are some very high wage incomes, as if you didn’t know.

This is a fairly recent development. In the 1960s, the top 1 percent of wage earners collected a little more than 5 percent of all wage incomes. This fraction has risen pretty steadily until nowadays, when the top 1 percent of wage earners receive 10–12 percent of all wages. This time the story is rather different in France. There the share of total wages going to the top percentile was steady at 6 percent until very recently, when it climbed to 7 percent. The recent surge of extreme inequality at the top of the wage distribution may be primarily an American development. Piketty, who with Emmanuel Saez has made a careful study of high-income tax returns in the United States, attributes this to the rise of what he calls “supermanagers.” The very highest income class consists to a substantial extent of top executives of large corporations, with very rich compensation packages. (A disproportionate number of these, but by no means all of them, come from the financial services industry.) With or without stock options, these large pay packages get converted to wealth and future income from wealth. But the fact remains that much of the increased income (and wealth) inequality in the United States is driven by the rise of these supermanagers.

and Deirdre McCloskey (p critical): https://ejpe.org/journal/article/view/170
nice discussion of empirical economics, economic history, market failures and statism, etc., with several bon mots

Piketty’s great splash will undoubtedly bring many young economically interested scholars to devote their lives to the study of the past. That is good, because economic history is one of the few scientifically quantitative branches of economics. In economic history, as in experimental economics and a few other fields, the economists confront the evidence (as they do not for example in most macroeconomics or industrial organization or international trade theory nowadays).

...

Piketty gives a fine example of how to do it. He does not get entangled as so many economists do in the sole empirical tool they are taught, namely, regression analysis on someone else’s “data” (one of the problems is the word data, meaning “things given”: scientists should deal in capta, “things seized”). Therefore he does not commit one of the two sins of modern economics, the use of meaningless “tests” of statistical significance (he occasionally refers to “statistically insignificant” relations between, say, tax rates and growth rates, but I am hoping he does not suppose that a large coefficient is “insignificant” because R. A. Fisher in 1925 said it was). Piketty constructs or uses statistics of aggregate capital and of inequality and then plots them out for inspection, which is what physicists, for example, also do in dealing with their experiments and observations. Nor does he commit the other sin, which is to waste scientific time on existence theorems. Physicists, again, don’t. If we economists are going to persist in physics envy let us at least learn what physicists actually do. Piketty stays close to the facts, and does not, for example, wander into the pointless worlds of non-cooperative game theory, long demolished by experimental economics. He also does not have recourse to non-computable general equilibrium, which never was of use for quantitative economic science, being a branch of philosophy, and a futile one at that. On both points, bravissimo.

...

Since those founding geniuses of classical economics, a market-tested betterment (a locution to be preferred to “capitalism”, with its erroneous implication that capital accumulation, not innovation, is what made us better off) has enormously enriched large parts of a humanity now seven times larger in population than in 1800, and bids fair in the next fifty years or so to enrich everyone on the planet. [Not SSA or MENA...]

...

Then economists, many on the left but some on the right, in quick succession from 1880 to the present—at the same time that market-tested betterment was driving real wages up and up and up—commenced worrying about, to name a few of the pessimisms concerning “capitalism” they discerned: greed, alienation, racial impurity, workers’ lack of bargaining strength, workers’ bad taste in consumption, immigration of lesser breeds, monopoly, unemployment, business cycles, increasing returns, externalities, under-consumption, monopolistic competition, separation of ownership from control, lack of planning, post-War stagnation, investment spillovers, unbalanced growth, dual labor markets, capital insufficiency (William Easterly calls it “capital fundamentalism”), peasant irrationality, capital-market imperfections, public … [more]
news  org:mag  big-peeps  econotariat  economics  books  review  capital  capitalism  inequality  winner-take-all  piketty  wealth  class  labor  mobility  redistribution  growth-econ  rent-seeking  history  mostly-modern  trends  compensation  article  malaise  🎩  the-bones  whiggish-hegelian  cjones-like  multi  mokyr-allen-mccloskey  expert  market-failure  government  broad-econ  cliometrics  aphorism  lens  gallic  clarity  europe  critique  rant  optimism  regularizer  pessimism  ideology  behavioral-econ  authoritarianism  intervention  polanyi-marx  politics  left-wing  absolute-relative  regression-to-mean  legacy  empirical  data-science  econometrics  methodology  hypothesis-testing  physics  iron-age  mediterranean  the-classics  quotes  krugman  world  entrepreneurialism  human-capital  education  supply-demand  plots  manifolds  intersection  markets  evolution  darwinian  giants  old-anglo  egalitarianism-hierarchy  optimate  morality  ethics  envy  stagnation  nl-and-so-can-you  expert-experience  courage  stats  randy-ayndy  reason  intersection-connectedness  detail-architect 
april 2017 by nhaliday