nhaliday + generalization   67

Carryover vs “Far Transfer” | West Hunter
It used to be thought that studying certain subjects (like Latin) made you better at learning others, or smarter generally – “They supple the mind, sir; they render it pliant and receptive.” This doesn’t appear to be the case, certainly not for Latin – although it seems to me that math can help you understand other subjects?

A different question: to what extent does being (some flavor of) crazy, or crazy about one subject, or being really painfully wrong about some subject, predict how likely you are to be wrong on other things? We know that someone can be strange, downright crazy, or utterly unsound on some topic and still do good mathematics… but that is not the same as saying that there is no statistical tendency for people on crazy-train A to be more likely to be wrong about subject B. What do the data suggest?
west-hunter  scitariat  discussion  reflection  learning  thinking  neurons  intelligence  generalization  math  abstraction  truth  prudence  correlation  psychology  cog-psych  education  quotes  aphorism  foreign-lang  mediterranean  the-classics  contiguity-proximity 
6 weeks ago by nhaliday
Measures of cultural distance - Marginal REVOLUTION
A new paper with many authors — most prominently Joseph Henrich — tries to measure the cultural gaps between different countries.  I am reproducing a few of their results (see pp.36-37 for more), noting that higher numbers represent higher gaps:

...

Overall the numbers show much greater cultural distance of other nations from China than from the United States, a significant and under-discussed problem for China. For instance, the United States is about as culturally close to Hong Kong as China is.

[ed.: Japan is closer to the US than China. Interesting. I'd like to see some data based on something other than self-reported values though.]

the study:
Beyond WEIRD Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259613
We present a new tool that provides a means to measure the psychological and cultural distance between two societies and create a distance scale with any population as the point of comparison. Since psychological data is dominated by samples drawn from the United States or other WEIRD nations, this tool provides a “WEIRD scale” to assist researchers in systematically extending the existing database of psychological phenomena to more diverse and globally representative samples. As the extreme WEIRDness of the literature begins to dissolve, the tool will become more useful for designing, planning, and justifying a wide range of comparative psychological projects. We have made our code available and developed an online application for creating other scales (including the “Sino scale” also presented in this paper). We discuss regional diversity within nations showing the relative homogeneity of the United States. Finally, we use these scales to predict various psychological outcomes.
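[ed.: the paper's actual measure is a cultural analogue of Fst computed over World Values Survey items, which isn't reproduced here; as a toy sketch of the underlying idea (all data below made up), one can score the distance between two societies as the average Jensen–Shannon divergence between their answer distributions:]

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cultural_distance(society_a, society_b):
    """Average JS divergence across shared survey items (0 = identical)."""
    items = society_a.keys() & society_b.keys()
    return sum(js_divergence(society_a[i], society_b[i]) for i in items) / len(items)

# Hypothetical answer distributions for two WVS-style binary questions
usa = {"trust": [0.4, 0.6], "obedience": [0.3, 0.7]}
japan = {"trust": [0.35, 0.65], "obedience": [0.5, 0.5]}
china = {"trust": [0.6, 0.4], "obedience": [0.7, 0.3]}

# Under this toy data, Japan lands closer to the US than China does
print(cultural_distance(usa, japan) < cultural_distance(usa, china))  # True
```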
econotariat  marginal-rev  henrich  commentary  study  summary  list  data  measure  metrics  similarity  culture  cultural-dynamics  sociology  things  world  usa  anglo  anglosphere  china  asia  japan  sinosphere  russia  developing-world  canada  latin-america  MENA  europe  eastern-europe  germanic  comparison  great-powers  thucydides  foreign-policy  the-great-west-whale  generalization  anthropology  within-group  homo-hetero  moments  exploratory  phalanges  the-bones  🎩  🌞  broad-econ  cocktail  n-factor  measurement  expectancy  distribution  self-report  values  expression-survival  uniqueness 
7 weeks ago by nhaliday
Why is Google Translate so bad for Latin? A longish answer. : latin
hmm:
> All it does is correlate sequences of up to five consecutive words in texts that have been manually translated into two or more languages.
That sort of system ought to be perfect for a dead language, though. Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.

We're not exactly inundated with brand new Latin to translate.
--
> Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.
What makes you think that the Google folks haven't done so and used that to create the language models they use?
> That sort of system ought to be perfect for a dead language, though.
Perhaps. But it will be bad at translating novel English sentences to Latin.
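[ed.: a minimal sketch of the phrase-table approach the commenter describes, with a made-up two-phrase Latin table; real systems learn alignments statistically from the parallel corpus rather than by hand:]

```python
# Toy phrase table mapping Latin word sequences to English ones.
# Hand-aligned here; a real system would induce this from aligned texts.
phrase_table = {
    ("puella",): ("the", "girl"),
    ("puer",): ("the", "boy"),
    ("aquam", "portat"): ("carries", "water"),
}

def translate(tokens):
    out, i = [], 0
    while i < len(tokens):
        # greedy: try the longest known phrase (up to 5 words) starting at i
        for n in range(min(5, len(tokens) - i), 0, -1):
            phrase = tuple(tokens[i:i + n])
            if phrase in phrase_table:
                out.extend(phrase_table[phrase])
                i += n
                break
        else:
            out.append(tokens[i])  # unknown word passes through untranslated
            i += 1
    return " ".join(out)

print(translate(["puella", "aquam", "portat"]))  # the girl carries water
```

Greedy longest-match lookup works fine on memorized phrases but, as the reply notes, has nothing useful to say about novel sentences that never appeared in the corpus.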
foreign-lang  reddit  social  discussion  language  the-classics  literature  dataset  measurement  roots  traces  syntax  anglo  nlp  stackex  links  q-n-a  linguistics  lexical  deep-learning  sequential  hmm  project  arrows  generalization  state-of-art  apollonian-dionysian  machine-learning  google 
june 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
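[ed.: a minimal random-testing sketch using only the standard library (Hypothesis and friends add shrinking, smarter generators, and more besides): generate inputs, assert an invariant, here an encode/decode roundtrip:]

```python
import random

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip_property(trials=1000, seed=0):
    """Random testing: throw generated inputs at the invariant."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 20)))
        assert rle_decode(rle_encode(s)) == s, f"roundtrip failed on {s!r}"
    return trials

print(check_roundtrip_property())  # 1000 cases, no failures
```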

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
OSF | Near and Far Transfer in Cognitive Training: A Second-Order Meta-Analysis
In Models 1 (k = 99) and 2 (k = 119), we investigated the impact of working-memory training on near-transfer (i.e., memory) and far-transfer (e.g., reasoning, speed, and language) measures, respectively, and whether it is mediated by the type of population. Model 3 (k = 233) extended Model 2 by adding six meta-analyses assessing the far-transfer effects of other cognitive-training programs (video-games, music, chess, and exergames). Model 1 showed that working-memory training does induce near transfer, and that the size of this effect is moderated by the type of population. By contrast, Models 2 and 3 highlighted that far-transfer effects are small or null.
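[ed.: the models here are second-order random-effects meta-analyses, which this doesn't attempt to reproduce; as a much simpler sketch of the basic machinery, fixed-effect inverse-variance pooling over hypothetical far-transfer effect sizes:]

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes."""
    weights = [1 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5  # standard error of the pooled estimate
    return est, se

# Hypothetical far-transfer effect sizes (g) and sampling variances
effects = [0.05, -0.02, 0.10, 0.01]
variances = [0.01, 0.02, 0.015, 0.012]
est, se = pooled_effect(effects, variances)
# small or null far transfer: pooled effect within 2 SE of zero
print(abs(est) < 2 * se)  # True
```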
study  preprint  psychology  cog-psych  intelligence  generalization  dimensionality  psych-architecture  intervention  enhancement  practice 
february 2019 by nhaliday
Which benchmark programs are faster? | Computer Language Benchmarks Game
old:
https://salsa.debian.org/benchmarksgame-team/archive-alioth-benchmarksgame
https://web.archive.org/web/20170331153459/http://benchmarksgame.alioth.debian.org/
includes Scala

very outdated but more languages: https://web.archive.org/web/20110401183159/http://shootout.alioth.debian.org:80/

OCaml seems to offer the best tradeoff of performance vs parsimony (Haskell not so much :/)
https://blog.chewxy.com/2019/02/20/go-is-average/
http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependability-of.html
old official: https://web.archive.org/web/20130731195711/http://benchmarksgame.alioth.debian.org/u64q/code-used-time-used-shapes.php
https://web.archive.org/web/20121125103010/http://shootout.alioth.debian.org/u64q/code-used-time-used-shapes.php
Haskell does better here

other PL benchmarks:
https://github.com/kostya/benchmarks
BF 2.0:
Kotlin, C++ (GCC), Rust < Nim, D (GDC,LDC), Go, MLton < Crystal, Go (GCC), C# (.NET Core), Scala, Java, OCaml < D (DMD) < C# Mono < Javascript V8 < F# Mono, Javascript Node, Haskell (MArray) << LuaJIT << Python PyPy < Haskell < Racket <<< Python << Python3
mandel.b:
C++ (GCC) << Crystal < Rust, D (GDC), Go (GCC) < Nim, D (LDC) << C# (.NET Core) < MLton << Kotlin << OCaml << Scala, Java << D (DMD) << Go << C# Mono << Javascript Node << Haskell (MArray) << LuaJIT < Python PyPy << F# Mono <<< Racket
https://github.com/famzah/langs-performance
C++, Rust, Java w/ custom non-stdlib code < Python PyPy < C# .Net Core < Javascript Node < Go, unoptimized C++ (no -O2) << PHP << Java << Python3 << Python
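[ed.: for what it's worth, the basic methodology behind any such comparison, warmup runs plus a robust summary statistic, in a toy harness (the example workloads are made up, not from the benchmarks game):]

```python
import time

def bench(fn, *args, warmup=3, runs=10):
    """Median wall-clock time of fn(*args), with warmup runs discarded."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

# Toy stand-ins for a cross-implementation workload comparison
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

t_loop = bench(sum_squares_loop, 100_000)
t_builtin = bench(sum_squares_builtin, 100_000)
print(t_loop > 0 and t_builtin > 0)  # True
```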
comparison  pls  programming  performance  benchmarks  list  top-n  ranking  systems  time  multi  🖥  cost-benefit  tradeoffs  data  analysis  plots  visualization  measure  intricacy  parsimony  ocaml-sml  golang  rust  jvm  javascript  c(pp)  functional  haskell  backup  scala  realness  generalization  accuracy  techtariat  crosstab  database  repo  objektbuch  static-dynamic  gnu 
december 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
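[ed.: a toy simulation (not from the post) of the parenthetical claim: give people fully independent module abilities, make each task depend on many modules, and task scores still correlate positively, i.e. a "g"-like common factor appears without any innate general ability:]

```python
import random
import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

rng = random.Random(42)
n_people, n_modules, n_tasks = 500, 50, 8

# Each person has an independent ability on each module (no shared "g" input)
ability = [[rng.gauss(0, 1) for _ in range(n_modules)] for _ in range(n_people)]

# Each task draws on a random subset of many modules, so tasks overlap
task_modules = [rng.sample(range(n_modules), 25) for _ in range(n_tasks)]
scores = [[statistics.mean(ability[p][m] for m in task_modules[t])
           for t in range(n_tasks)] for p in range(n_people)]

# Average pairwise correlation between distinct tasks comes out positive
cors = [pearson([s[i] for s in scores], [s[j] for s in scores])
        for i in range(n_tasks) for j in range(i + 1, n_tasks)]
print(statistics.mean(cors) > 0)  # True: a positive manifold from overlap alone
```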

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Information Processing: Mathematical Theory of Deep Neural Networks (Princeton workshop)
"Recently, long-past-due theoretical results have begun to emerge. These results, and those that will follow in their wake, will begin to shed light on the properties of large, adaptive, distributed learning architectures, and stand to revolutionize how computer science and neuroscience understand these systems."
hsu  scitariat  commentary  links  research  research-program  workshop  events  princeton  sanjeev-arora  deep-learning  machine-learning  ai  generalization  explanans  off-convex  nibble  frontier  speedometer  state-of-art  big-surf  announcement 
january 2018 by nhaliday
Genome-wide association analysis identifies 30 new susceptibility loci for schizophrenia | Nature Genetics
We conducted a genome-wide association study (GWAS) with replication in 36,180 Chinese individuals and performed further transancestry meta-analyses with data from the Psychiatry Genomics Consortium (PGC2). Approximately 95% of the genome-wide significant (GWS) index alleles (or their proxies) from the PGC2 study were overrepresented in Chinese schizophrenia cases, including ∼50% that achieved nominal significance and ∼75% that continued to be GWS in the transancestry analysis. The Chinese-only analysis identified seven GWS loci; three of these also were GWS in the transancestry analyses, which identified 109 GWS loci, thus yielding a total of 113 GWS loci (30 novel) in at least one of these analyses. We observed improvements in the fine-mapping resolution at many susceptibility loci. Our results provide several lines of evidence supporting candidate genes at many loci and highlight some pathways for further research. Together, our findings provide novel insight into the genetic architecture and biological etiology of schizophrenia.
study  biodet  behavioral-gen  psychiatry  disease  GWAS  china  asia  race  generalization  genetics  replication 
november 2017 by nhaliday
The weirdest people in the world?
Abstract: Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.
pdf  study  microfoundations  anthropology  cultural-dynamics  sociology  psychology  social-psych  cog-psych  iq  biodet  behavioral-gen  variance-components  psychometrics  psych-architecture  visuo  spatial  morality  individualism-collectivism  n-factor  justice  egalitarianism-hierarchy  cooperate-defect  outliers  homo-hetero  evopsych  generalization  henrich  europe  the-great-west-whale  occident  organizing  🌞  universalism-particularism  applicability-prereqs  hari-seldon  extrema  comparison  GT-101  ecology  EGT  reinforcement  anglo  language  gavisti  heavy-industry  marginal  absolute-relative  reason  stylized-facts  nature  systematic-ad-hoc  analytical-holistic  science  modernity  behavioral-econ  s:*  illusion  cool  hmm  coordination  self-interest  social-norms  population  density  humanity  sapiens  farmers-and-foragers  free-riding  anglosphere  cost-benefit  china  asia  sinosphere  MENA  world  developing-world  neurons  theory-of-mind  network-structure  nordic  orient  signum  biases  usa  optimism  hypocrisy  humility  within-without  volo-avolo  domes 
november 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
trees are harlequins, words are harlequins — bayes: a kinda-sorta masterpost
lol, gwern: https://www.reddit.com/r/slatestarcodex/comments/6ghsxf/biweekly_rational_feed/diqr0rq/
> What sort of person thinks “oh yeah, my beliefs about these coefficients correspond to a Gaussian with variance 2.5″? And what if I do cross-validation, like I always do, and find that variance 200 works better for the problem? Was the other person wrong? But how could they have known?
> ...Even ignoring the mode vs. mean issue, I have never met anyone who could tell whether their beliefs were normally distributed vs. Laplace distributed. Have you?
I must have spent too much time in Bayesland because both those strike me as very easy and I often think them! My beliefs usually are Laplace distributed when it comes to things like genetics (it makes me very sad to see GWASes with flat priors), and my Gaussian coefficients are actually a variance of 0.70 (assuming standardized variables w.l.o.g.) as is consistent with field-wide meta-analyses indicating that d>1 is pretty rare.
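[ed.: the Gaussian-vs-Laplace prior distinction cashes out as ridge vs lasso shrinkage at the MAP estimate; a one-coefficient sketch with made-up numbers (the prior variance 2.5 echoes the quote):]

```python
def map_gaussian(y, prior_var, noise_var=1.0):
    """MAP estimate of beta given y ~ N(beta, noise_var), beta ~ N(0, prior_var).
    Ridge-style shrinkage: pulls the estimate toward 0 but never exactly to 0."""
    return y * prior_var / (prior_var + noise_var)

def map_laplace(y, prior_scale, noise_var=1.0):
    """MAP estimate with a Laplace(0, prior_scale) prior.
    Lasso-style soft-thresholding: weak estimates become exactly 0."""
    lam = noise_var / prior_scale
    if abs(y) <= lam:
        return 0.0
    return y - lam if y > 0 else y + lam

y = 0.8  # a weak observed effect
print(map_gaussian(y, prior_var=2.5))   # ~0.571: shrunk, but nonzero
print(map_laplace(y, prior_scale=1.0))  # 0.0: thresholded away entirely
```

The Laplace prior's sparsity (most effects near zero, a few large) is exactly the shape gwern claims for genetic effects, which is why a flat or Gaussian prior is a poor default in a GWAS.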
ratty  ssc  core-rats  tumblr  social  explanation  init  philosophy  bayesian  thinking  probability  stats  frequentist  big-yud  lesswrong  synchrony  similarity  critique  intricacy  shalizi  scitariat  selection  mutation  evolution  priors-posteriors  regularization  bias-variance  gwern  reddit  commentary  GWAS  genetics  regression  spock  nitty-gritty  generalization  epistemic  🤖  rationality  poast  multi  best-practices  methodology  data-science 
august 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it just has to do with the fact that academia is a peer-monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. Thus I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?∗: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
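The placebo exercise in this abstract is easy to reproduce in miniature. The sketch below uses my own illustrative choices (number of states, AR(1) coefficient, seed), not the paper's CPS setup: generate serially correlated state-level panels with no true effect, assign fake "laws" at random, and run OLS DiD with state and year fixed effects and conventional (i.i.d.) standard errors. The nominal 5% test rejects far too often.

```python
# Placebo-law DiD simulation in the spirit of Bertrand, Duflo & Mullainathan:
# serially correlated outcomes + conventional standard errors => overrejection.
# All parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, T, rho, n_sims, rejections = 20, 20, 0.8, 200, 0

for _ in range(n_sims):
    # AR(1) errors within each state (serial correlation, no true effect)
    eps = np.empty((S, T))
    eps[:, 0] = rng.normal(size=S)
    for t in range(1, T):
        eps[:, t] = rho * eps[:, t - 1] + rng.normal(size=S)
    y = eps.ravel()

    # placebo "law": half the states treated from a random mid-sample year on
    treated = rng.choice(S, S // 2, replace=False)
    start = rng.integers(T // 4, 3 * T // 4)
    D = np.zeros((S, T))
    D[treated, start:] = 1.0
    d = D.ravel()

    # OLS with state and year fixed effects via dummy variables
    state_fe = np.kron(np.eye(S), np.ones((T, 1)))
    year_fe = np.kron(np.ones((S, 1)), np.eye(T))
    X = np.column_stack([d, state_fe, year_fe[:, 1:]])  # drop one year dummy
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.linalg.pinv(X.T @ X)[0, 0])  # conventional SE
    if abs(beta[0] / se) > 1.96:                          # nominal 5% test
        rejections += 1

print(f"placebo rejection rate: {rejections / n_sims:.2f}")  # well above 0.05
```

Clustering standard errors by state, or collapsing to pre/post means as the paper recommends, brings the rejection rate back toward the nominal level.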

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing simply one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI excludes the OLS estimate.
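Young's MSE point (item 6) can be illustrated with a toy just-identified model. Everything below — the first-stage coefficient, sample sizes, seed — is an illustrative assumption of mine, not Young's actual design: with a weak first stage, 2SLS blows up whenever the sample first-stage covariance is near zero, while OLS stays biased but stable.

```python
# Weak-instrument sketch: 2SLS can have far higher mean squared error
# than the (biased) OLS estimator. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, n_sims, beta_true = 200, 2000, 1.0
ols_sqerr, tsls_sqerr = [], []

for _ in range(n_sims):
    z = rng.normal(size=n)                      # instrument
    u = rng.normal(size=n)                      # unobserved confounder
    x = 0.1 * z + u + rng.normal(size=n)        # weak first stage (coef 0.1)
    y = beta_true * x + u + rng.normal(size=n)  # u in both => OLS is biased

    b_ols = (x @ y) / (x @ x)   # no-intercept OLS (all variables mean zero)
    b_iv = (z @ y) / (z @ x)    # just-identified IV / 2SLS estimator

    ols_sqerr.append((b_ols - beta_true) ** 2)
    tsls_sqerr.append((b_iv - beta_true) ** 2)

mse_ols, mse_tsls = np.mean(ols_sqerr), np.mean(tsls_sqerr)
print(f"OLS MSE: {mse_ols:.2f}  2SLS MSE: {mse_tsls:.2f}")
# OLS converges to roughly 1.5 instead of 1 (biased but tight); 2SLS is
# centered better yet explodes when z @ x happens to be close to zero.
```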

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.

https://twitter.com/wwwojtekk/status/1190731344336293889
https://archive.is/EZu0h
Great (not completely new but still good to have it in one place) discussion of RCTs and inference in economics by Deaton, my favorite sentences (more general than just about RCT) below
Randomization in the tropics revisited: a theme and eleven variations: https://scholar.princeton.edu/sites/default/files/deaton/files/deaton_randomization_revisited_v3_2019.pdf
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated  garett-jones 
june 2017 by nhaliday
Validation is a Galilean enterprise
We contend that Frey's analyses actually have little bearing on the external validity of the PGG. Evidence from recent experiments using modified versions of the PGG and stringent comprehension checks indicate that individual differences in people's tendencies to contribute to the public good are better explained by individual differences in participants' comprehension of the game's payoff structure than by individual differences in cooperativeness (Burton-Chellew, El Mouden, & West, 2016). For example, only free riders reliably understand right away that complete defection maximizes one's own payoff, regardless of how much other participants contribute. This difference in comprehension alone explains the so-called free riders' low PGG contributions. These recent results also provide a new interpretation of why conditional cooperators often contribute generously in early rounds, and then less in later rounds (Fischbacher et al., 2001). Fischbacher et al. (2001) attribute the relatively high contributions in the early rounds to cooperativeness and the subsequent decline in contributions to conditional cooperators' frustration with free riders. In reality, the decline in cooperation observed over the course of PGGs occurs because so-called conditional cooperators initially believe that their payoff-maximizing decision depends on whether others contribute, but eventually learn that contributing never benefits the contributor (Burton-Chellew, Nax, & West, 2015). Because contributions in the PGG do not actually reflect cooperativeness, there is no real-world cooperative setting to which inferences about contributions in the PGG can generalize.
study  behavioral-econ  economics  psychology  social-psych  coordination  cooperate-defect  piracy  altruism  bounded-cognition  error  lol  pdf  map-territory  GT-101  realness  free-riding  public-goodish  decision-making  microfoundations  descriptive  values  interests  generalization  measurement  checking 
june 2017 by nhaliday
Genomic analysis of family data reveals additional genetic effects on intelligence and personality | bioRxiv
methodology:
Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1003520
Pedigree- and SNP-Associated Genetics and Recent Environment are the Major Contributors to Anthropometric and Cardiometabolic Trait Variation: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1005804

Missing Heritability – found?: https://westhunt.wordpress.com/2017/02/09/missing-heritability-found/
There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Some of the variants, the ones we find with GWAS, are fairly common and fitness-neutral: the variant that slightly increases IQ confers the same fitness (or very close to the same) as the one that slightly decreases IQ – presumably because of other effects it has. If this weren’t the case, it would be impossible for both of the variants to remain common.

The rare variants that affect IQ will generally decrease IQ – and since pleiotropy is the norm, usually they’ll be deleterious in other ways as well. Genetic load.
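The "rare variants, generally different in each family" picture follows from textbook mutation-selection balance, which is compact enough to sketch. The rates below are illustrative round numbers, not estimates from the paper:

```python
# Mutation-selection balance: an allele that is deleterious in
# heterozygotes (additive/dominant effect), arising at mutation rate mu
# and selected against with coefficient s, settles at an equilibrium
# frequency of roughly mu / s. Toy numbers for illustration only.
mu = 1e-5   # per-locus mutation rate toward the deleterious allele
s = 0.01    # selection coefficient against carriers

q_eq = mu / s
print(q_eq)  # 0.001 -> rare, so different families carry different variants
```

Because each such variant is held rare, the load is spread across many loci, which is why these variants escape common-SNP GWAS yet still contribute heritability.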

Happy families are all alike; every unhappy family is unhappy in its own way.: https://westhunt.wordpress.com/2017/06/06/happy-families-are-all-alike-every-unhappy-family-is-unhappy-in-its-own-way/
It now looks as if the majority of the genetic variance in IQ is the product of mutational load, and the same may be true for many psychological traits. To the extent this is the case, a lot of human psychological variation must be non-adaptive. Maybe some personality variation fulfills an evolutionary function, but a lot does not. Being a dumb asshole may be a bug, rather than a feature. More generally, this kind of analysis could show us whether particular low-fitness syndromes, like autism, were ever strategies – I suspect not.

It’s bad news for medicine and psychiatry, though. It would suggest that what we call a given type of mental illness, like schizophrenia, is really a grab-bag of many different syndromes. The ultimate causes are extremely varied: at best, there may be shared intermediate causal factors. Not good news for drug development: individualized medicine is a threat, not a promise.

see also comment at: https://pinboard.in/u:nhaliday/b:a6ab4034b0d0

https://www.reddit.com/r/slatestarcodex/comments/5sldfa/genomic_analysis_of_family_data_reveals/
So the big implication here is that it's better than I had dared hope - like Yang/Visscher/Hsu have argued, the old GCTA estimate of ~0.3 is indeed a rather loose lower bound on additive genetic variants, and the rest of the missing heritability is just the relatively uncommon additive variants (ie <1% frequency), and so, like Yang demonstrated with height, using much more comprehensive imputation of SNP scores or using whole-genomes will be able to explain almost all of the genetic contribution. In other words, with better imputation panels, we can go back and squeeze out better polygenic scores from old GWASes, new GWASes will be able to reach and break the 0.3 upper bound, and eventually we can feasibly predict 0.5-0.8. Between the expanding sample sizes from biobanks, the still-falling price of whole genomes, the gradual development of better regression methods (informative priors, biological annotation information, networks, genetic correlations), and better imputation, the future of GWAS polygenic scores is bright. Which obviously will be extremely helpful for embryo selection/genome synthesis.

The argument that this supports mutation-selection balance is weaker but plausible. I hope that it's true, because if that's why there is so much genetic variation in intelligence, then that strongly encourages genetic engineering - there is no good reason or Chesterton fence for intelligence variants being non-fixed, it's just that evolution is too slow to purge the constantly-accumulating bad variants. And we can do better.
https://rubenarslan.github.io/generation_scotland_pedigree_gcta/

The surprising implications of familial association in disease risk: https://arxiv.org/abs/1707.00014
https://spottedtoad.wordpress.com/2017/06/09/personalized-medicine-wont-work-but-race-based-medicine-probably-will/
As Greg Cochran has pointed out, this probably isn’t going to work. There are a few genes like BRCA1 (which makes you more likely to get breast and ovarian cancer) that we can detect and might affect treatment, but an awful lot of disease turns out to be just the result of random chance and deleterious mutation. This means that you can’t easily tailor disease treatment to people’s genes, because everybody is fucked up in their own special way. If Johnny is schizophrenic because of 100 random errors in the genes that code for his neurons, and Jack is schizophrenic because of 100 other random errors, there’s very little way to test a drug to work for either of them- they’re the only one in the world, most likely, with that specific pattern of errors. This is, presumably, why the incidence of schizophrenia and autism rises in populations when dads get older- more random errors in sperm formation mean more random errors in the baby’s genes, and more things that go wrong down the line.

The looming crisis in human genetics: http://www.economist.com/node/14742737
Some awkward news ahead
- Geoffrey Miller

Human geneticists have reached a private crisis of conscience, and it will become public knowledge in 2010. The crisis has depressing health implications and alarming political ones. In a nutshell: the new genetics will reveal much less than hoped about how to cure disease, and much more than feared about human evolution and inequality, including genetic differences between classes, ethnicities and races.

2009!
study  preprint  bio  biodet  behavioral-gen  GWAS  missing-heritability  QTL  🌞  scaling-up  replication  iq  education  spearhead  sib-study  multi  west-hunter  scitariat  genetic-load  mutation  medicine  meta:medicine  stylized-facts  ratty  unaffiliated  commentary  rhetoric  wonkish  genetics  genomics  race  pop-structure  poast  population-genetics  psychiatry  aphorism  homo-hetero  generalization  scale  state-of-art  ssc  reddit  social  summary  gwern  methodology  personality  britain  anglo  enhancement  roots  s:*  2017  data  visualization  database  let-me-see  bioinformatics  news  org:rec  org:anglo  org:biz  track-record  prediction  identity-politics  pop-diff  recent-selection  westminster  inequality  egalitarianism-hierarchy  high-dimension  applications  dimensionality  ideas  no-go  volo-avolo  magnitude  variance-components  GCTA  tradeoffs  counter-revolution  org:mat  dysgenics  paternal-age  distribution  chart  abortion-contraception-embryo 
june 2017 by nhaliday
Edge.org: 2017 : WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?
highlights:
- the genetic book of the dead [Dawkins]
- complementarity [Frank Wilczek]
- relative information
- effective theory [Lisa Randall]
- affordances [Dennett]
- spontaneous symmetry breaking
- relatedly, equipoise [Nicholas Christakis]
- case-based reasoning
- population reasoning (eg, common law)
- criticality [Cesar Hidalgo]
- Haldane's law of the right size (!SCALE!)
- polygenic scores
- non-ergodic
- ansatz
- state [Aaronson]: http://www.scottaaronson.com/blog/?p=3075
- transfer learning
- effect size
- satisficing
- scaling
- the breeder's equation [Greg Cochran]
- impedance matching

soft:
- reciprocal altruism
- life history [Plomin]
- intellectual honesty [Sam Harris]
- coalitional instinct (interesting claim: building coalitions around "rationality" actually makes it more difficult to update on new evidence as it makes you look like a bad person, eg, the Cathedral)
basically same: https://twitter.com/ortoiseortoise/status/903682354367143936

more: https://www.edge.org/conversation/john_tooby-coalitional-instincts

interesting timing. how woke is this dude?
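One item on the list, the breeder's equation, is compact enough for a worked example. The trait values below are made up for illustration, not taken from any real population:

```python
# The breeder's equation: response to selection R = h^2 * S, where S is
# the selection differential (mean of selected parents minus population
# mean) and h^2 is narrow-sense heritability. Toy numbers only.
h2 = 0.5               # narrow-sense heritability of the trait
pop_mean = 100.0       # population mean (an IQ-like scale, say)
selected_mean = 115.0  # mean of the individuals chosen as parents

S = selected_mean - pop_mean  # selection differential = 15
R = h2 * S                    # expected one-generation response = 7.5
offspring_mean = pop_mean + R
print(offspring_mean)  # 107.5
```

The equation says selection only moves the mean by the heritable fraction of the parents' advantage; the rest of the selected parents' edge was environment or noise and is not transmitted.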
org:edge  2017  technology  discussion  trends  list  expert  science  top-n  frontier  multi  big-picture  links  the-world-is-just-atoms  metameta  🔬  scitariat  conceptual-vocab  coalitions  q-n-a  psychology  social-psych  anthropology  instinct  coordination  duty  power  status  info-dynamics  cultural-dynamics  being-right  realness  cooperate-defect  westminster  chart  zeitgeist  rot  roots  epistemic  rationality  meta:science  analogy  physics  electromag  geoengineering  environment  atmosphere  climate-change  waves  information-theory  bits  marginal  quantum  metabuch  homo-hetero  thinking  sapiens  genetics  genomics  evolution  bio  GT-101  low-hanging  minimum-viable  dennett  philosophy  cog-psych  neurons  symmetry  humility  life-history  social-structure  GWAS  behavioral-gen  biodet  missing-heritability  ergodic  machine-learning  generalization  west-hunter  population-genetics  methodology  blowhards  spearhead  group-level  scale  magnitude  business  scaling-tech  tech  business-models  optimization  effect-size  aaronson  state  bare-hands  problem-solving  politics 
may 2017 by nhaliday
Faces in the Clouds | West Hunter
This was a typical Iraq story: somehow, we had developed an approach to intelligence that reliably produced fantastically wrong answers, at vast expense. What’s so special about Iraq? Nothing, probably – except that we acquired ground truth.

https://westhunt.wordpress.com/2013/06/19/faces-in-the-clouds/#comment-15397
Those weren’t leads, any more than there are really faces in the clouds. They were excuses to sell articles, raise money, and finally one extra argument in favor of a pointless war. Without a hard fact or two, it’s all vapor, useless.

Our tactical intelligence was fine in the Gulf War, but that doesn’t mean that the military, or worse yet the people who make and influence decisions had any sense, then or now.

For example, I have long had an amateur interest in these things, and I got the impression, in the summer of 1990, that Saddam Hussein was about to invade Kuwait. I was telling everyone at work that Saddam was about to invade, till they got bored with it. This was about two weeks before it actually happened. I remember thinking about making a few investments based on that possible event, but never got around to it, partly because I was really sleepy, since we had a month-old baby girl at home.

As I recall, the “threat officer” at the CIA warned about this, but since the higher-ups ignored him, his being correct embarrassed them, so he was demoted.

The tactical situation was as favorable as it ever gets, and most of it was known. We had near-perfect intelligence: satellite recon, JSTARS, etc. Complete air domination, everything from Warthogs to F-15s. Months to get ready. A huge qualitative weapons superiority. For example, our tanks outranged theirs by about a factor of two, had computer-controlled aiming, better armor, infrared sights, etc etc etc etc. I counted something like 13 separate war-winning advantages at the time, and that count was obviously incomplete. And one more: Arabs make terrible soldiers, generally, and Iraqis were among the worst.

But I think that most of the decisionmakers didn’t realize how easy it would be – at all – and I’ve never seen any sign that Colin Powell did either. He’s a “C” student type – not smart. Schwarzkopf may have understood what was going on: for all I know he was another Manstein, but you can’t show how good you are when you beat a patzer.

https://westhunt.wordpress.com/2013/06/19/faces-in-the-clouds/#comment-15420
For me it was a hobby – I was doing adaptive optics at the time in Colorado Springs. All I knew about particular military moves was from the newspapers, but my reasoning went like this:

A. Kuwait had a lot of oil. Worth stealing, if you could get away with it.

B. Kuwait was militarily impotent and had no defense treaty with anyone. Most people found Kuwaitis annoying.

C. Iraq owed Kuwait something like 30 billion dollars, and was generally deep in debt due to the long conflict with Iran

D. I figured that there was a fair chance that the Iraqi accusations of Kuwaiti slant drilling were true

E. There were widely reported Iraqi troop movements towards Kuwait

F. Most important was my evaluation of Saddam, from watching the long war with Iran. I thought that Saddam was a particular combination of cocky and stupid, the sort of guy to do something like this. At the time I did not know about April Glaspie’s, shall we say, poorly chosen comments.
west-hunter  scitariat  discussion  MENA  iraq-syria  stories  intel  track-record  generalization  truth  error  wire-guided  priors-posteriors  info-dynamics  multi  poast  being-right  people  statesmen  usa  management  incentives  impetus  energy-resources  military  arms  analysis  roots  alien-character  ability-competence  cynicism-idealism 
april 2017 by nhaliday
Educational Romanticism & Economic Development | pseudoerasmus
https://twitter.com/GarettJones/status/852339296358940672
deleted

https://twitter.com/GarettJones/status/943238170312929280
https://archive.is/p5hRA

Did Nations that Boosted Education Grow Faster?: http://econlog.econlib.org/archives/2012/10/did_nations_tha.html
On average, no relationship. The trendline points down slightly, but for the time being let's just call it a draw. It's a well-known fact that countries that started the 1960's with high education levels grew faster (example), but this graph is about something different. This graph shows that countries that increased their education levels did not grow faster.

Where has all the education gone?: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1016.2704&rep=rep1&type=pdf

https://twitter.com/GarettJones/status/948052794681966593
https://archive.is/kjxqp

https://twitter.com/GarettJones/status/950952412503822337
https://archive.is/3YPic

https://twitter.com/pseudoerasmus/status/862961420065001472
http://hanushek.stanford.edu/publications/schooling-educational-achievement-and-latin-american-growth-puzzle

The Case Against Education: What's Taking So Long, Bryan Caplan: http://econlog.econlib.org/archives/2015/03/the_case_agains_9.html

The World Might Be Better Off Without College for Everyone: https://www.theatlantic.com/magazine/archive/2018/01/whats-college-good-for/546590/
Students don't seem to be getting much out of higher education.
- Bryan Caplan

College: Capital or Signal?: http://www.economicmanblog.com/2017/02/25/college-capital-or-signal/
After his review of the literature, Caplan concludes that roughly 80% of the earnings effect from college comes from signalling, with only 20% the result of skill building. Put this together with his earlier observations about the private returns to college education, along with its exploding cost, and Caplan thinks that the social returns are negative. The policy implications of this will come as very bitter medicine for friends of Bernie Sanders.

Doubting the Null Hypothesis: http://www.arnoldkling.com/blog/doubting-the-null-hypothesis/

Is higher education/college in the US more about skill-building or about signaling?: https://www.quora.com/Is-higher-education-college-in-the-US-more-about-skill-building-or-about-signaling
ballpark: 50% signaling, 30% selection, 20% addition to human capital
more signaling in art history, more human capital in engineering, more selection in philosophy
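As a toy check on what these splits imply: only the human-capital share raises social output, since signaling and selection mostly re-rank workers rather than make them more productive. The premium figure below is my illustrative assumption, not Caplan's estimate:

```python
# Toy arithmetic for the signaling/selection/human-capital split quoted
# above. Only the human-capital share adds to social output; signaling
# and selection are largely zero-sum. All numbers are illustrative.
premium = 0.70        # assumed lifetime earnings premium from a degree
human_capital = 0.20  # ballpark share that is genuine skill-building
# (the remaining 0.80 = signaling + selection is mostly re-ranking)

private_gain = premium                 # the graduate captures the premium
social_gain = premium * human_capital  # only skill-building raises output
print(f"{private_gain:.2f} {social_gain:.2f}")  # 0.70 0.14
```

Set the social gain against the full resource cost of college (tuition plus forgone earnings) and it is easy to see how the social return can go negative even while the private return stays strongly positive.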

Econ Duel! Is Education Signaling or Skill Building?: http://marginalrevolution.com/marginalrevolution/2016/03/econ-duel-is-education-signaling-or-skill-building.html
Marginal Revolution University has a brand new feature, Econ Duel! Our first Econ Duel features Tyler and me debating the question, Is education more about signaling or skill building?

Against Tulip Subsidies: https://slatestarcodex.com/2015/06/06/against-tulip-subsidies/

https://www.overcomingbias.com/2018/01/read-the-case-against-education.html

https://nintil.com/2018/02/05/notes-on-the-case-against-education/

https://www.nationalreview.com/magazine/2018-02-19-0000/bryan-caplan-case-against-education-review

https://spottedtoad.wordpress.com/2018/02/12/the-case-against-education/
Most American public school kids are low-income; about half are non-white; most are fairly low skilled academically. For most American kids, the majority of the waking hours they spend not engaged with electronic media are at school; the majority of their in-person relationships are at school; the most important relationships they have with an adult who is not their parent is with their teacher. For their parents, the most important in-person source of community is also their kids’ school. Young people need adult mirrors, models, mentors, and in an earlier era these might have been provided by extended families, but in our own era this all falls upon schools.

Caplan gestures towards work and earlier labor force participation as alternatives to school for many if not all kids. And I empathize: the years that I would point to as making me who I am were ones where I was working, not studying. But they were years spent working in schools, as a teacher or assistant. If schools did not exist, is there an alternative that we genuinely believe would arise to draw young people into the life of their community?

...

It is not an accident that the state that spends the least on education is Utah, where the LDS church can take up some of the slack for schools, while next door Wyoming spends almost the most of any state at $16,000 per student. Education is now the one surviving binding principle of the society as a whole, the one black box everyone will agree to, and so while you can press for less subsidization of education by government, and for privatization of costs, as Caplan does, there’s really nothing people can substitute for it. This is partially about signaling, sure, but it’s also because outside of schools and a few religious enclaves our society is but a darkling plain beset by winds.

This doesn’t mean that we should leave Caplan’s critique on the shelf. Much of education is focused on an insane, zero-sum race for finite rewards. Much of schooling does push kids, parents, schools, and school systems towards a solution ad absurdum, where anything less than 100 percent of kids headed to a doctorate and the big coding job in the sky is a sign of failure of everyone concerned.

But let’s approach this with an eye towards the limits of the possible and the reality of diminishing returns.

https://westhunt.wordpress.com/2018/01/27/poison-ivy-halls/
https://westhunt.wordpress.com/2018/01/27/poison-ivy-halls/#comment-101293
The real reason the left would support Moander: the usual reason. because he’s an enemy.

https://westhunt.wordpress.com/2018/02/01/bright-college-days-part-i/
I have a problem in thinking about education, since my preferences and personal educational experience are atypical, so I can’t just gut it out. On the other hand, knowing that puts me ahead of a lot of people that seem convinced that all real people, including all Arab cabdrivers, think and feel just as they do.

One important fact, relevant to this review. I don’t like Caplan. I think he doesn’t understand – can’t understand – human nature, and although that sometimes confers a different and interesting perspective, it’s not a royal road to truth. Nor would I want to share a foxhole with him: I don’t trust him. So if I say that I agree with some parts of this book, you should believe me.

...

Caplan doesn’t talk about possible ways of improving knowledge acquisition and retention. Maybe he thinks that’s impossible, and he may be right, at least within a conventional universe of possibilities. That’s a bit outside of his thesis, anyhow. Me it interests.

He dismisses objections from educational psychologists who claim that studying a subject improves you in subtle ways even after you forget all of it. I too find that hard to believe. On the other hand, it looks to me as if poorly-digested fragments of information picked up in college have some effect on public policy later in life: it is no coincidence that most prominent people in public life (at a given moment) share a lot of the same ideas. People are vaguely remembering the same crap from the same sources, or related sources. It’s correlated crap, which has a much stronger effect than random crap.

These widespread new ideas are usually wrong. They come from somewhere – in part, from higher education. Along this line, Caplan thinks that college has only a weak ideological effect on students. I don’t believe he is correct. In part, this is because most people use a shifting standard: what’s liberal or conservative gets redefined over time. At any given time a population is roughly half left and half right – but the content of those labels changes a lot. There’s a shift.

https://westhunt.wordpress.com/2018/02/01/bright-college-days-part-i/#comment-101492
I put it this way, a while ago: “When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”
--
You just explained the Credo quia absurdum doctrine. I always wondered if it was nonsense. It is not.
--
Someone on twitter caught it first – got all the way to “sliding down the razor blade of life”. Which I explained is now called “transitioning”

What Catholics believe: https://theweek.com/articles/781925/what-catholics-believe
We believe all of these things, fantastical as they may sound, and we believe them for what we consider good reasons, well attested by history, consistent with the most exacting standards of logic. We will profess them in this place of wrath and tears until the extraordinary event referenced above, for which men and women have hoped and prayed for nearly 2,000 years, comes to pass.

https://westhunt.wordpress.com/2018/02/05/bright-college-days-part-ii/
According to Caplan, employers are looking for conformity, conscientiousness, and intelligence. They use completion of high school, or completion of college as a sign of conformity and conscientiousness. College certainly looks as if it’s mostly signaling, and it’s hugely expensive signaling, in terms of college costs and foregone earnings.

But inserting conformity into the merit function is tricky: things become important signals… because they’re important signals. Otherwise useful actions are contraindicated because they’re “not done”. For example, test scores convey useful information. They could help show that an applicant is smart even though he attended a mediocre school – the same role they play in college admissions. But employers seldom request test scores, and although applicants may provide them, few do. Caplan says: “The word on the street… [more]
econotariat  pseudoE  broad-econ  economics  econometrics  growth-econ  education  human-capital  labor  correlation  null-result  world  developing-world  commentary  spearhead  garett-jones  twitter  social  pic  discussion  econ-metrics  rindermann-thompson  causation  endo-exo  biodet  data  chart  knowledge  article  wealth-of-nations  latin-america  study  path-dependence  divergence  🎩  curvature  microfoundations  multi  convexity-curvature  nonlinearity  hanushek  volo-avolo  endogenous-exogenous  backup  pdf  people  policy  monetary-fiscal  wonkish  cracker-econ  news  org:mag  local-global  higher-ed  impetus  signaling  rhetoric  contrarianism  domestication  propaganda  ratty  hanson  books  review  recommendations  distribution  externalities  cost-benefit  summary  natural-experiment  critique  rent-seeking  mobility  supply-demand  intervention  shift  social-choice  government  incentives  interests  q-n-a  street-fighting  objektbuch  X-not-about-Y  marginal-rev  c:***  qra  info-econ  info-dynamics  org:econlib  yvain  ssc  politics  medicine  stories 
april 2017 by nhaliday
How Universal Is the Big Five? Testing the Five-Factor Model of Personality Variation Among Forager–Farmers in the Bolivian Amazon
We failed to find robust support for the FFM, based on tests of (a) internal consistency of items expected to segregate into the Big Five factors, (b) response stability of the Big Five, (c) external validity of the Big Five with respect to observed behavior, (d) factor structure according to exploratory and confirmatory factor analysis, and (e) similarity with a U.S. target structure based on Procrustes rotation analysis.

...

We argue that Tsimane personality variation displays 2 principal factors that may reflect socioecological characteristics common to small-scale societies. We offer evolutionary perspectives on why the structure of personality variation may not be invariant across human societies.

Niche diversity can explain cross-cultural differences in personality structure: https://www.nature.com/articles/s41562-019-0730-3.epdf?author_access_token=OePuGOtdzdnQNlUm-C2oidRgN0jAjWel9jnR3ZoTv0PAovoNXZmNaZE03-rNo0RKOI7i7PG10G8tISp-_6W5yDqI3sDx0WdZZuk2ekMJbzGZtJ7_XsMUy0k4UGpsNDt9NHMarkg3dmAWt-Ttawxu1g%3D%3D
Cross-cultural studies have challenged this view, finding that less-complex societies exhibit stronger covariation among behavioural characteristics, resulting in fewer derived personality factors. To explain these results, we propose the niche diversity hypothesis, in which a greater diversity of social and ecological niches elicits a broader range of multivariate behavioural profiles and, hence, lower trait covariance in a population.
...
This work provides a general explanation for population differences in personality structure in both humans and other animals and suggests a substantial reimagining of personality research: instead of reifying statistical descriptions of manifest personality structures, research should focus more on modelling their underlying causes.

sounds obvious but actually kinda interesting
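The mechanism is easy to demonstrate in a toy simulation (my own construction, not the paper's model): give each niche a different random loading of a latent "success" factor onto traits. With one shared niche, the traits covary strongly at the population level; with many niches, the pooled covariance washes out.

```python
import math
import random

random.seed(0)

def simulate(n_niches, n_people=3000, n_traits=8, noise=1.0):
    """Return the mean absolute off-diagonal trait correlation (a crude
    proxy for covariance strength) in a simulated population where each
    niche rewards a different random mix of traits."""
    # each niche has a random loading vector over the traits
    loadings = [[random.gauss(0, 1) for _ in range(n_traits)]
                for _ in range(n_niches)]
    data = []
    for _ in range(n_people):
        w = loadings[random.randrange(n_niches)]
        a = random.gauss(0, 1)  # latent "success" within one's niche
        data.append([a * wj + random.gauss(0, noise) for wj in w])

    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in y)
        return sxy / math.sqrt(sxx * syy)

    cols = list(zip(*data))
    pairs = [(i, j) for i in range(n_traits) for j in range(i + 1, n_traits)]
    return sum(abs(corr(cols[i], cols[j])) for i, j in pairs) / len(pairs)

few = simulate(n_niches=1)    # "small-scale" society: one shared niche
many = simulate(n_niches=25)  # diverse society: many distinct niches
print(f"mean |corr|, 1 niche: {few:.2f}; 25 niches: {many:.2f}")
```

With one niche every trait loads on the same latent factor, so pairwise correlations are large; averaging over many niches with different loading signs drives the population-level covariance toward zero, i.e. fewer recoverable factors.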
pdf  study  psychology  cog-psych  society  embedded-cognition  personality  metrics  generalization  methodology  farmers-and-foragers  latin-america  context  homo-hetero  info-dynamics  water  psychometrics  exploratory  things  phalanges  dimensionality  anthropology  universalism-particularism  applicability-prereqs  multi  sapiens  cultural-dynamics  social-psych  evopsych  psych-architecture  org:nat  🌞  roots  explanans  causation  pop-diff  cybernetics  ecology  scale  moments  large-factor 
february 2017 by nhaliday
Difference between off-policy and on-policy learning - Cross Validated
The reason that Q-learning is off-policy is that it updates its Q-values using the Q-value of the next state s′ and the greedy action a′. In other words, it estimates the return (total discounted future reward) for state-action pairs assuming a greedy policy were followed despite the fact that it's not following a greedy policy.

The reason that SARSA is on-policy is that it updates its Q-values using the Q-value of the next state s′ and the current policy's action a″. It estimates the return for state-action pairs assuming the current policy continues to be followed.

The distinction disappears if the current policy is a greedy policy. However, such an agent would not be good since it never explores.
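The asymmetry is a single term in the TD target. A minimal tabular sketch (variable names my own, not from the answer):

```python
# Tabular TD updates: the only difference between the two algorithms
# is which next-state action's Q-value appears in the target.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # off-policy: the target assumes the greedy action in s_next,
    # whatever the behavior policy actually does next
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # on-policy: the target uses a_next, the action the current
    # (e.g. epsilon-greedy) policy actually selected in s_next
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])

# toy two-state table: in state 1, "L" is the greedy action (value 2.0)
Q = {0: {"L": 0.0, "R": 1.0}, 1: {"L": 2.0, "R": 0.0}}
q_learning_update(Q, 0, "R", 1.0, 1)   # target = 1 + 0.99 * 2.0 (greedy)
sarsa_update(Q, 0, "L", 1.0, 1, "R")   # target = 1 + 0.99 * 0.0 (chosen)
print({k: round(v, 3) for k, v in Q[0].items()})  # {'L': 0.1, 'R': 1.198}
```

If the policy in s_next happens to pick the greedy action, the two targets coincide, which is the "distinction disappears" point above.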
q-n-a  overflow  machine-learning  acm  reinforcement  confusion  jargon  generalization  nibble  definition  greedy  comparison 
february 2017 by nhaliday
teaching - Intuitive explanation for dividing by $n-1$ when calculating standard deviation? - Cross Validated
The standard deviation calculated with a divisor of n-1 is a standard deviation calculated from the sample as an estimate of the standard deviation of the population from which the sample was drawn. Because the observed values fall, on average, closer to the sample mean than to the population mean, the standard deviation which is calculated using deviations from the sample mean underestimates the desired standard deviation of the population. Using n-1 instead of n as the divisor corrects for that by making the result a little bit bigger.

Note that the correction has a larger proportional effect when n is small than when it is large, which is what we want because when n is larger the sample mean is likely to be a good estimator of the population mean.

...

A common one is that the definition of variance (of a distribution) is the second moment recentered around a known, definite mean, whereas the estimator uses an estimated mean. This loss of a degree of freedom (given the mean, you can reconstitute the dataset with knowledge of just n−1 of the data values) requires the use of n−1 rather than n to "adjust" the result.
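The underestimate is easy to see by simulation. Python's standard library exposes both divisors directly (`pvariance` divides by n, `variance` by n−1); a small sketch with n = 5, where the correction matters most:

```python
import random
import statistics

random.seed(1)

pop_var = 4.0  # population is N(0, sd=2), so the true variance is 4
n = 5          # small samples, where the correction matters most

biased, unbiased = [], []
for _ in range(20000):
    sample = [random.gauss(0, 2) for _ in range(n)]
    biased.append(statistics.pvariance(sample))   # divides by n
    unbiased.append(statistics.variance(sample))  # divides by n - 1

print(f"true variance: {pop_var}")
print(f"mean of n   divisor estimates: {statistics.mean(biased):.2f}")   # ~ (n-1)/n * 4 = 3.2
print(f"mean of n-1 divisor estimates: {statistics.mean(unbiased):.2f}") # ~ 4.0
```

The n-divisor estimates average about (n−1)/n of the true variance, exactly the shortfall the correction undoes; rerun with larger n to see the gap shrink, matching the "larger proportional effect when n is small" remark above.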
q-n-a  overflow  stats  acm  intuition  explanation  bias-variance  methodology  moments  nibble  degrees-of-freedom  sampling-bias  generalization  dimensionality  ground-up  intricacy 
january 2017 by nhaliday
Effects of cognitive training on the structure of intelligence
Targeted cognitive training, such as n-back or speed of processing training, in the hopes of raising intelligence is of great theoretical and practical importance. The most important theoretical contribution, however, is not about the malleability of intelligence. Instead, I argue the most important and novel theoretical contribution is understanding the causal structure of intelligence. The structure of intelligence, most often taken as a hierarchical factor structure, necessarily prohibits transfer from subfactors back up to intelligence. If this is the true structure, targeted cognitive training interventions will fail to increase intelligence not because intelligence is immutable, but simply because there is no causal connection between, say, working memory and intelligence. Seeing the structure of intelligence for what it is, a causal measurement model, allows us to focus testing on the presence and absence of causal links. If we can increase subfactors without transfer to other facets, we may be confirming the correct causal structure more than testing malleability. Such a blending into experimental psychometrics is a strong theoretical pursuit.
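The "no transfer back up" claim can be made concrete with a toy simulation (my own construction, not from the paper): if g causes working memory rather than the reverse, then boosting working memory directly leaves g, and every other subfactor, untouched.

```python
import random
import statistics

random.seed(2)

def person(wm_training=0.0):
    """Hierarchical (reflective) model: g causes each subfactor, and
    subfactors cause test scores. Training adds directly to WM only."""
    g = random.gauss(0, 1)
    wm = 0.8 * g + random.gauss(0, 0.6) + wm_training
    speed = 0.7 * g + random.gauss(0, 0.7)
    wm_test = wm + random.gauss(0, 0.3)
    speed_test = speed + random.gauss(0, 0.3)
    return g, wm_test, speed_test

control = [person(0.0) for _ in range(20000)]
trained = [person(1.0) for _ in range(20000)]  # large WM gain from training

def mean(idx, group):
    return statistics.mean(row[idx] for row in group)

print(f"WM test gain:    {mean(1, trained) - mean(1, control):+.2f}")  # ~ +1.0
print(f"speed test gain: {mean(2, trained) - mean(2, control):+.2f}")  # ~ 0, no transfer
print(f"g gain:          {mean(0, trained) - mean(0, control):+.2f}")  # ~ 0, no transfer
```

Under this causal direction, an n-back study that moves WM scores but nothing else is evidence about the arrows, not about immutability, which is the abstract's point.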
pdf  study  psychology  cog-psych  iq  psychometrics  generalization  intelligence  🌞  psych-architecture  chart 
january 2017 by nhaliday
Spatial Ability for STEM Domains: Aligning Over 50 Years of Cumulative Psychological Knowledge Solidifies Its Importance
https://www.psychologytoday.com/blog/finding-the-next-einstein/201105/is-spatial-intelligence-essential-innovation-and-can-we
1. "Compared to students in the control group, students in the training group showed larger improvements in spatial skills despite extremely high spatial skills prior to training."
2. "We found large gender differences in spatial skills prior to training, as many other researchers have. However, these gender differences were narrowed after training."
3. "Students in the training group had one-third of a letter grade higher GPA in a challenging calculus-based physics course."
4. "None of these training improvements lasted over eight to ten months."

I wonder if continuous training could be useful at all and provide any transfer

What Innovations Have We Already Lost?: The Importance of Identifying and Developing Spatial Talent: http://link.springer.com/chapter/10.1007/978-3-319-44385-0_6

Technical innovation and spatial ability: http://infoproc.blogspot.com/2013/07/technical-innovation-and-spatial-ability.html
The blobs in the figure above (click for larger version) represent subgroups of individuals who have published peer reviewed work in STEM, Humanities or Biomedical research, or (separately) have been awarded a patent. Units in the figure are SDs within the SMPY population.

Early spatial reasoning predicts later creativity and innovation, especially in STEM fields: https://www.sciencedaily.com/releases/2013/07/130715070347.htm
Confirming previous research, the data revealed that participants' mathematical and verbal reasoning scores on the SAT at age 13 predicted their scholarly publications and patents 30 years later.

But spatial ability at 13 yielded additional predictive power, suggesting that early spatial ability contributes in a unique way to later creative and scholarly outcomes, especially in STEM domains.
pdf  study  psychology  cog-psych  psychometrics  spatial  iq  psych-architecture  multi  news  org:lite  generalization  longitudinal  summary  gender  diversity  gender-diff  pop-diff  chart  scitariat  org:sci  intervention  null-result  effect-size  rhetoric  education  innovation  🔬  hsu  success  data  visualization  s-factor  science  creative  biodet  behavioral-gen  human-capital  intellectual-property 
december 2016 by nhaliday
The History of the Cross Section of Stock Returns
bad methodology (data snooping) generating fake market failures

Using accounting data spanning the 20th century, we show that most accounting-based return anomalies are spurious. When we take anomalies out-of-sample by moving either backwards or forwards in time, their average returns decrease and volatilities increase. These patterns emerge because data-snooping works through t-values, and an anomaly’s t-value is high if its average return is high or volatility low. The average anomaly’s in-sample Sharpe ratio is biased upwards by a factor of three. The data-snooping problem is so severe that we would expect to reject even the true asset pricing model when tested using in-sample data. Our results suggest that asset pricing models should be tested using out-of-sample data or, if that is not feasible, that the correct standard by which to judge a model is its ability to explain half of the in-sample alpha.
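The selection effect is easy to reproduce: generate many signal-free "anomalies", keep the one with the best in-sample Sharpe ratio, and watch it collapse out of sample. A toy Monte Carlo (my own, not the paper's procedure):

```python
import random
import statistics

random.seed(3)

def sharpe(returns):
    # monthly Sharpe ratio (mean over standard deviation, no annualization)
    return statistics.mean(returns) / statistics.stdev(returns)

n_anomalies, n_months = 200, 120
# every candidate "anomaly" is pure noise: the true mean return is zero
in_sample = [[random.gauss(0, 0.05) for _ in range(n_months)]
             for _ in range(n_anomalies)]
out_sample = [[random.gauss(0, 0.05) for _ in range(n_months)]
              for _ in range(n_anomalies)]

# data snooping: pick the anomaly with the best in-sample Sharpe
best = max(range(n_anomalies), key=lambda i: sharpe(in_sample[i]))
print(f"best in-sample Sharpe (monthly): {sharpe(in_sample[best]):+.2f}")
print(f"same anomaly out-of-sample:      {sharpe(out_sample[best]):+.2f}")
```

Even with zero true alpha everywhere, the winner of a 200-way search sports a healthy-looking in-sample Sharpe ratio; its out-of-sample Sharpe reverts toward zero, which is the inflation the abstract quantifies.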
study  economics  finance  investing  methodology  replication  pdf  preprint  market-failure  error  🎩  econometrics  longitudinal  generalization  s:*  securities  ORFE 
december 2016 by nhaliday
Science Policy | West Hunter
If my 23andme profile revealed that I was the last of the Plantagenets (as some suspect), and therefore rightfully King of the United States and Defender of Mexico, and I asked you for a general view of the right approach to science and technology – where the most promise is, what should be done, etc – what would you say?

genetically personalized medicine: https://westhunt.wordpress.com/2016/12/08/science-policy/#comment-85698
I have no idea how personalized medicine is supposed to work. Suppose that we sequence your entire genome, and then we intend to tailor a therapeutic approach to your genome.

How do we test it? By trying it on a bunch of genetically similar people? The more genetic details we take into account, the smaller that class is. It could easily become so small that it would be difficult to recruit enough people for a reasonable statistical trial. Second, the more details we take into account, the smaller the class that benefits from the whole testing process – which, as far as I can see, is just as expensive as conventional Phase I/II etc. trials.

What am I missing?

Now if you are a forethoughtful trillionaire, sure: you manufacture lots of clones just to test therapies you might someday need, and cost is no object.

I think I can see ways you could make it work tho [edit: what did I mean by this?...damnit]
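The recruiting problem is just combinatorics: condition on enough independent variants and the matching subpopulation shrinks geometrically. A back-of-envelope sketch (illustrative frequencies, my own numbers, not a real genetic model):

```python
# Expected number of people matching a full genetic profile, assuming
# independent variants: population size times the product of the
# genotype frequencies being conditioned on.
def matching_cohort(population, genotype_freqs):
    expected = population
    for f in genotype_freqs:
        expected *= f
    return expected

population = 8_000_000_000  # roughly the world
for k in (5, 10, 20, 30):
    freqs = [0.5] * k       # each conditioned genotype held by half of people
    print(f"{k:2d} variants matched -> "
          f"~{matching_cohort(population, freqs):,.0f} people")
```

At 20 matched variants the worldwide eligible cohort is already in the thousands, and at 30 it is in the single digits, well below what any Phase I/II-style trial could recruit.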
west-hunter  discussion  politics  government  policy  science  technology  the-world-is-just-atoms  🔬  scitariat  meta:science  proposal  genetics  genomics  medicine  meta:medicine  multi  ideas  counter-revolution  poast  homo-hetero  generalization  scale  antidemos  alt-inst  applications  dimensionality  high-dimension  bioinformatics  no-go  volo-avolo  magnitude  trump  2016-election  questions 
december 2016 by nhaliday
Bottoming Out – arg min blog
Now, I’ve been hammering the point in my previous posts that saddle points are not what makes non-convex optimization difficult. Here, when specializing to deep learning, even local minima are not getting in my way. Deep neural nets are just very easy to minimize.
machine-learning  deep-learning  optimization  rhetoric  speculation  research  hmm  research-program  acmtariat  generalization  metabuch  local-global  off-convex  ben-recht  extrema  org:bleg  nibble  sparsity  curvature  ideas  aphorism  convexity-curvature  explanans  volo-avolo  hardness 
june 2016 by nhaliday

bundles : abstract  academe  acm  patterns

related tags

2016-election  80000-hours  :/  aaronson  ability-competence  abortion-contraception-embryo  absolute-relative  abstraction  academia  accuracy  acm  acmtariat  advanced  adversarial  advertising  africa  agri-mindset  ai  ai-control  algorithms  alien-character  alignment  alt-inst  altruism  analogy  analysis  analytical-holistic  anglo  anglosphere  announcement  anthropology  antidemos  aphorism  apollonian-dionysian  apple  applicability-prereqs  applications  arbitrage  arms  arrows  article  asia  assembly  atmosphere  audio  authoritarianism  autism  auto-learning  automation  axelrod  axioms  backup  bangbang  bare-hands  bayesian  behavioral-econ  behavioral-gen  being-right  ben-recht  benchmarks  berkeley  best-practices  bias-variance  biases  big-peeps  big-picture  big-surf  big-yud  bio  biodet  bioinformatics  biotech  bits  blowhards  bonferroni  books  bootstraps  bostrom  bounded-cognition  branches  britain  broad-econ  browser  build-packaging  business  business-models  c(pp)  c:***  caching  canada  canon  capitalism  cardio  career  carmack  causation  characterization  charity  chart  cheatsheet  checking  checklists  china  christianity  civic  civil-liberty  civilization  class  classification  clever-rats  climate-change  cliometrics  coalitions  cocktail  cog-psych  cohesion  coming-apart  commentary  communication  communism  community  comparison  competition  compilers  complex-systems  complexity  composition-decomposition  compressed-sensing  computation  computer-vision  concentration-of-measure  concept  conceptual-vocab  conference  confidence  confounding  confusion  conquest-empire  constraint-satisfaction  context  contiguity-proximity  contracts  contradiction  contrarianism  control  convexity-curvature  cool  cooperate-defect  coordination  core-rats  corporation  correctness  correlation  cost-benefit  counter-revolution  counterexample  counterfactual  coupling-cohesion  course  cracker-econ  cracker-prog  creative  
crime  criminal-justice  criminology  critique  crooked  crosstab  crux  cs  cultural-dynamics  culture  culture-war  current-events  curvature  cybernetics  cycles  cynicism-idealism  dan-luu  darwinian  data  data-science  database  dataset  dataviz  death  debate  debt  debugging  decentralized  decision-making  decision-theory  deep-learning  deep-materialism  deepgoog  defense  definite-planning  definition  degrees-of-freedom  democracy  dennett  density  descriptive  desktop  detail-architecture  deterrence  developing-world  developmental  differential-privacy  dignity  dimensionality  diogenes  direction  dirty-hands  discovery  discussion  disease  distribution  divergence  diversity  domestication  douthatish  drama  DSL  duplication  duty  dysgenics  early-modern  earth  eastern-europe  ecology  econ-metrics  econ-productivity  econometrics  economics  econotariat  eden  eden-heaven  education  EEA  effect-size  effective-altruism  efficiency  egalitarianism-hierarchy  EGT  elections  electromag  elegance  elite  email  embedded-cognition  embedding  embeddings  embodied  emergent  empirical  ems  endo-exo  endogenous-exogenous  energy-resources  engineering  enhancement  ensembles  environment  epistemic  equilibrium  ergodic  error  essay  estimate  ethics  europe  events  evidence-based  evolution  evopsych  existence  expanders  expectancy  experiment  expert  expert-experience  explanans  explanation  exploratory  explore-exploit  exposition  expression-survival  externalities  extrema  facebook  faq  farmers-and-foragers  fashun  features  fermi  fiction  field-study  finance  fisher  fitness  flexibility  flux-stasis  foreign-lang  foreign-policy  formal-methods  formal-values  fourier  frameworks  free-riding  frequency  frequentist  frontier  functional  futurism  games  garett-jones  gavisti  GCTA  gelman  gender  gender-diff  generalization  generative  genetic-correlation  genetic-load  genetics  genomics  geoengineering  germanic  giants  
gibbon  github  gnon  gnosis-logos  gnu  golang  good-evil  google  gotchas  government  grad-school  gradient-descent  graphics  gray-econ  great-powers  greedy  gregory-clark  grokkability  grokkability-clarity  ground-up  group-level  group-selection  growth-econ  GT-101  guide  GWAS  gwern  haidt  hanson  hanushek  hard-tech  hardness  hardware  hari-seldon  harvard  hashing  haskell  health  heavy-industry  heavyweights  henrich  heterodox  heuristic  hi-order-bits  hidden-motives  high-dimension  higher-ed  history  hive-mind  hmm  hn  homo-hetero  housing  howto  hsu  huge-data-the-biggest  human-capital  human-ml  humanity  humility  hypocrisy  hypothesis-testing  ideas  identity-politics  ideology  idk  iidness  illusion  impact  impetus  incentives  increase-decrease  individualism-collectivism  industrial-org  inequality  inference  info-dynamics  info-econ  infographic  information-theory  init  innovation  input-output  insight  instinct  institutions  integrity  intel  intellectual-property  intelligence  interdisciplinary  interests  internet  interpretability  intervention  interview  intricacy  intuition  investing  iq  iraq-syria  iron-age  is-ought  israel  iteration-recursion  japan  jargon  javascript  journos-pundits  judgement  julia  justice  jvm  kernels  kinship  knowledge  kumbaya-kult  labor  language  large-factor  latent-variables  latin-america  learning  learning-theory  lecture-notes  left-wing  legacy  legibility  len:long  lens  lesswrong  let-me-see  letters  levers  leviathan  lexical  libraries  life-history  lifts-projections  linear-algebra  linear-models  linear-programming  liner-notes  linguistics  links  linux  list  literature  local-global  lol  long-short-run  longitudinal  low-hanging  lower-bounds  machine-learning  macro  magnitude  malaise  malthus  management  map-territory  marginal  marginal-rev  market-failure  markets  markov  matching  math  math.DS  matrix-factorization  maxim-gun  meaningness  measure  
measurement  medicine  mediterranean  MENA  mena4  mendel-randomization  meta-analysis  meta:medicine  meta:prediction  meta:rhetoric  meta:science  metabuch  metal-to-virtual  metameta  methodology  metrics  microfoundations  microsoft  migration  mihai  military  minimum-viable  miri-cfar  missing-heritability  mit  ML-MAP-E  mobility  model-class  model-organism  model-selection  models  modernity  moloch  moments  monetary-fiscal  money  monte-carlo  mooc  morality  mostly-modern  move-fast-(and-break-things)  mrtz  multi  multiplicative  mutation  mystic  n-factor  nascent-state  nationalism-globalism  natural-experiment  nature  near-far  network-structure  neuro  neuro-nitgrit  neurons  new-religion  news  nibble  nihil  nips  nitty-gritty  nlp  no-go  noble-lie  noise-structure  nonlinearity  nootropics  nordic  norms  novelty  null-result  number  objektbuch  ocaml-sml  occam  occident  off-convex  old-anglo  online-learning  openai  opioids  optimate  optimism  optimization  order-disorder  ORFE  org:anglo  org:biz  org:bleg  org:econlib  org:edge  org:edu  org:inst  org:junk  org:lite  org:mag  org:mat  org:med  org:nat  org:popup  org:rec  org:sci  organization  organizing  orient  orwellian  os  oscillation  oss  osx  outliers  overflow  p:***  PAC  papers  paradox  parenting  parsimony  paternal-age  path-dependence  patho-altruism  pdf  peace-violence  people  performance  personality  persuasion  perturbation  pessimism  phalanges  phase-transition  phd  philosophy  physics  pic  piketty  piracy  planning  plots  pls  poast  podcast  polarization  policy  polisci  politics  pop-diff  pop-structure  popsci  population  population-genetics  power  practice  pragmatic  prediction  preference-falsification  preimage  prepping  preprint  presentation  princeton  priors-posteriors  privacy  probability  problem-solving  prof  programming  project  propaganda  properties  proposal  protestant-catholic  prudence  pseudoE  psych-architecture  psychiatry  
psychology  psychometrics  public-goodish  publishing  q-n-a  qra  QTL  quality  quantitative-qualitative  quantum  questions  quixotic  quotes  race  random  randy-ayndy  ranking  rant  rat-pack  rationality  ratty  realness  reason  recent-selection  recommendations  recruiting  reddit  reference  reflection  regression  regularization  regularizer  regulation  reinforcement  religion  rent-seeking  replication  repo  research  research-program  retention  review  rhetoric  right-wing  rigor  rindermann-thompson  risk  robotics  robust  roots  rot  russia  rust  s-factor  s:*  s:***  sample-complexity  sampling  sampling-bias  sanctity-degradation  sanjeev-arora  sapiens  scala  scale  scaling-tech  scaling-up  schelling  science  scifi-fantasy  scitariat  search  sebastien-bubeck  securities  selection  self-interest  self-report  sensitivity  sequential  sex  shalizi  shannon  shift  sib-study  signal-noise  signaling  signum  similarity  simulation  singularity  sinosphere  skeleton  skunkworks  slides  slippery-slope  smoothness  social  social-capital  social-choice  social-norms  social-psych  social-science  social-structure  sociality  society  sociology  software  solid-study  sparsity  spatial  spearhead  spectral  speculation  speed  speedometer  spock  sports  ssc  stackex  stanford  stat-mech  stat-power  state  state-of-art  statesmen  static-dynamic  stats  status  steel-man  stereotypes  stories  straussian  street-fighting  stress  structure  study  studying  stylized-facts  subculture  subjective-objective  sublinear  success  sulla  summary  supply-demand  survey  sv  symmetry  synchrony  syntax  synthesis  system-design  systematic-ad-hoc  systems  tactics  tails  tainter  talks  tcs  tcstariat  tech  tech-infrastructure  technology  techtariat  telos-atelos  terminal  the-basilisk  the-bones  the-classics  the-great-west-whale  the-self  the-trenches  the-watchers  the-world-is-just-atoms  theory-of-mind  theory-practice  theos  thick-thin  
things  thinking  threat-modeling  thucydides  tim-roughgarden  time  time-preference  time-series  toolkit  top-n  toxoplasmosis  traces  track-record  trade  tradeoffs  tradition  transportation  trends  tribalism  trivia  troll  trump  truth  tumblr  turing  tutorial  twitter  unaffiliated  uncertainty  unintended-consequences  uniqueness  unit  universalism-particularism  unix  unsupervised  urban  urban-rural  us-them  usa  valiant  values  vampire-squid  variance-components  vc-dimension  video  visual-understanding  visualization  visuo  volo-avolo  war  water  waves  wealth  wealth-of-nations  webapp  weird  west-hunter  westminster  wiki  wild-ideas  winner-take-all  wire-guided  wisdom  within-group  within-without  wonkish  workshop  world  world-war  worrydream  worse-is-better/the-right-thing  writing  X-not-about-Y  yak-shaving  yoga  yvain  zeitgeist  zero-positive-sum  🌞  🎓  🎩  🐸  👳  👽  🔬  🖥  🤖 
