nhaliday + state-of-art   54

Information Processing: Mathematical Theory of Deep Neural Networks (Princeton workshop)
"Recently, long-past-due theoretical results have begun to emerge. These results, and those that will follow in their wake, will begin to shed light on the properties of large, adaptive, distributed learning architectures, and stand to revolutionize how computer science and neuroscience understand these systems."
hsu  scitariat  commentary  links  research  research-program  workshop  events  princeton  sanjeev-arora  deep-learning  machine-learning  ai  generalization  explanans  off-convex  nibble  frontier  speedometer  state-of-art  big-surf  announcement 
january 2018 by nhaliday
Frontiers | Can We Validate the Results of Twin Studies? A Census-Based Study on the Heritability of Educational Achievement | Genetics
As for most phenotypes, the amount of variance in educational achievement explained by SNPs is lower than the amount of additive genetic variance estimated in twin studies. Twin-based estimates may however be biased because of self-selection and differences in cognitive ability between twins and the rest of the population. Here we compare twin registry based estimates with a census-based heritability estimate, sampling from the same Dutch birth cohort population and using the same standardized measure for educational achievement. Including important covariates (i.e., sex, migration status, school denomination, SES, and group size), we analyzed 893,127 scores from primary school children from the years 2008–2014. For genetic inference, we used pedigree information to construct an additive genetic relationship matrix. Corrected for the covariates, this resulted in an estimate of 85%, which is even higher than based on twin studies using the same cohort and same measure. We therefore conclude that the genetic variance not tagged by SNPs is not an artifact of the twin method itself.
study  biodet  behavioral-gen  iq  psychometrics  psychology  cog-psych  twin-study  methodology  variance-components  state-of-art  🌞  developmental  age-generation  missing-heritability  biases  measurement  sampling-bias  sib-study 
december 2017 by nhaliday
Genome Editing
This collection of articles from the Nature Research journals provides an overview of current progress in developing targeted genome editing technologies. A selection of protocols for using and adapting these tools in your own lab is also included.
news  org:sci  org:nat  list  links  aggregator  chart  info-foraging  frontier  technology  CRISPR  biotech  🌞  survey  state-of-art  article  study  genetics  genomics  speedometer 
october 2017 by nhaliday
Biopolitics | West Hunter
I have said before that no currently popular ideology acknowledges well-established results of behavioral genetics, quantitative genetics, or psychometrics. Or evolutionary psychology.

What if some ideology or political tradition did? What could they do? What problems could they solve, what capabilities would they have?

Various past societies knew a few things along these lines. They knew that there were significant physical and behavioral differences between the sexes, which is forbidden knowledge in modern academia. Some knew that close inbreeding had negative consequences, which knowledge is on its way to the forbidden zone as I speak. Some cultures with wide enough geographical experience had realistic notions of average cognitive differences between populations. Some people had a rough idea about regression to the mean [ in dynasties], and the Ottomans came up with a highly unpleasant solution – the law of fratricide. The Romans, during the Principate, dealt with the same problem through imperial adoption. The Chinese exam system is in part aimed at the same problem.

...

At least some past societies avoided the social patterns leading to the nasty dysgenic trends we are experiencing today, but for the most part that is due to the anthropic principle: if they’d done something else you wouldn’t be reading this. Also to between-group competition: if you fuck yourself up when others don’t, you may well be replaced. Which is still the case.

If you were designing an ideology from scratch you could make use of all of these facts – not that thinking about genetics and selection hands you the solution to every problem, but you’d have more strings to your bow. And, off the top of your head, you’d understand certain trends that are, for our current ruling classes, behind the mountains of Estcarp: invisible and unthinkable, That Which Must Not Be Named.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96613
“The closest…s the sort of libertarianism promulgated by Charles Murray”
Not very close.
A government that was fully aware of the implications and possibilities of human genetics, one that had the usual kind of state goals [like persistence and increased power], would not necessarily be particularly libertarian.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96797
And giving tax breaks to college-educated liberals to have babies wouldn’t appeal much to Trump voters, methinks.

It might be worth making a reasonably comprehensive list of the facts and preferences that a good liberal is supposed to embrace and seem to believe. You would have to be fairly quick about it, before it changes. Then you could evaluate the social impact of having more of them.

Rise and Fall: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/
Every society selects for something: generally it looks as if the direction of selection pressure is more or less an accident. Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this. I mean, they would have known how, if they’d wanted to, just as they knew how to select for destriers, coursers, and palfreys. It was still possible to know such things in the Middle Ages, because Harvard did not yet exist.

A rising empire needs quality human capital, which implies that, at minimum, the budding imperial society must not have been strongly dysgenic. At least not in the beginning. But winning changes many things, possibly including selective pressures. Imagine an empire with substantial urbanization, one in which talented guys routinely end up living in cities – cities that were demographic sinks. That might change things. Or try to imagine an empire in which survival challenges are greatly reduced, at least for elites, so that people have nothing to keep their minds off their minds and end up worshiping Magna Mater. Imagine an empire that conquers a rival with interesting local pathogens and brings some of them home. Or one that uses up a lot of its manpower conquering less-talented subjects and importing masses of those losers into the imperial heartland.

If any of those scenarios happened, they might eventually result in imperial decline – decline due to decreased biological capital.

Right now this is speculation. If we knew enough about the GWAS hits for intelligence, and had enough ancient DNA, we might be able to observe that rise and fall, just as we see dysgenic trends in contemporary populations. But that won’t happen for a long time. Say, a year.

hmm: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100350
“Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this.”

Maybe the Chinese imperial examination could effectively have been a selection for intelligence.
--
Nope. I’ve modelled it: the fraction of winners is far too small to have much effect, while there were likely fitness costs from the arduous preparation. Moreover, there’s a recent paper [Detecting polygenic adaptation in admixture graphs] that looks for indications of when selection for IQ hit northeast Asia: quite a while ago. Obvious, though, since Japan has similar scores without ever having had that kind of examination system.

decline of British Empire and utility of different components: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100390
Once upon a time, India was a money maker for the British, mainly because they appropriated Bengali tax revenue, rather than trade. The rest of the Empire was not worth much: it didn’t materially boost British per-capita income or military potential. Silesia was worth more to Germany, and conferred more war-making power, than Africa was to Britain.
--
If you get even a little local opposition, a colony won’t pay for itself. I seem to remember that there was some, in Palestine.
--
Angels from on high paid for the Boer War.

You know, someone in the 50’s asked for the numbers – how much various colonies cost and how much they paid.

Turned out that no one had ever asked. The Colonial Office had no idea.
west-hunter  scitariat  discussion  ideas  politics  polisci  sociology  anthropology  cultural-dynamics  social-structure  social-science  evopsych  agri-mindset  pop-diff  kinship  regression-to-mean  anthropic  selection  group-selection  impact  gender  gender-diff  conquest-empire  MENA  history  iron-age  mediterranean  the-classics  china  asia  sinosphere  technocracy  scifi-fantasy  aphorism  alt-inst  recruiting  applications  medieval  early-modern  institutions  broad-econ  biodet  behavioral-gen  gnon  civilization  tradition  leviathan  elite  competition  cocktail  🌞  insight  sapiens  arbitrage  paying-rent  realness  kumbaya-kult  war  slippery-slope  unintended-consequences  deep-materialism  inequality  malthus  dysgenics  multi  murray  poast  speculation  randy-ayndy  authoritarianism  time-preference  patience  long-short-run  leadership  coalitions  ideology  rant  westminster  truth  flux-stasis  new-religion  identity-politics  left-wing  counter-revolution  fertility  signaling  status  darwinian  orwellian  ability-competence  organizing 
october 2017 by nhaliday
[1709.06560] Deep Reinforcement Learning that Matters
https://twitter.com/WAWilsonIV/status/912505885565452288
I’ve been experimenting w/ various kinds of value function approaches to RL lately, and it’s striking how primitive and bad things seem to be
At first I thought it was just that my code sucks, but then I played with the OpenAI baselines and nope, it’s the children that are wrong.
And now, what comes across my desk but this fantastic paper: (link: https://arxiv.org/abs/1709.06560) arxiv.org/abs/1709.06560 How long until the replication crisis hits AI?

https://twitter.com/WAWilsonIV/status/911318326504153088
Seriously I’m not blown away by the PhDs’ records over the last 30 years. I bet you’d get better payoff funding eccentrics and amateurs.
There are essentially zero fundamentally new ideas in AI, the papers are all grotesquely hyperparameter tuned, nobody knows why it works.

Deep Reinforcement Learning Doesn't Work Yet: https://www.alexirpan.com/2018/02/14/rl-hard.html
Once, on Facebook, I made the following claim.

Whenever someone asks me if reinforcement learning can solve their problem, I tell them it can’t. I think this is right at least 70% of the time.
papers  preprint  machine-learning  acm  frontier  speedometer  deep-learning  realness  replication  state-of-art  survey  reinforcement  multi  twitter  social  discussion  techtariat  ai  nibble  org:mat  unaffiliated  ratty  acmtariat  liner-notes  critique  sample-complexity  cost-benefit  todo 
september 2017 by nhaliday
Accurate Genomic Prediction Of Human Height | bioRxiv
Stephen Hsu's compressed sensing application paper

We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, ~40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate ~0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction.
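
(A quick arithmetic note of mine, not from the paper: for a linear predictor the share of variance captured equals the squared correlation, so r^2 ≈ 0.65^2 ≈ 0.42 – the ~0.65 correlation for predicted vs. actual height and the ~40% of variance captured are the same claim.)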

https://infoproc.blogspot.com/2017/09/accurate-genomic-prediction-of-human.html

http://infoproc.blogspot.com/2017/11/23andme.html
I'm in Mountain View to give a talk at 23andMe. Their latest funding round was $250M on a (reported) valuation of $1.5B. If I just add up the Crunchbase numbers it looks like almost half a billion invested at this point...

Slides: Genomic Prediction of Complex Traits

Here's how people + robots handle your spit sample to produce a SNP genotype:

https://drive.google.com/file/d/1e_zuIPJr1hgQupYAxkcbgEVxmrDHAYRj/view
study  bio  preprint  GWAS  state-of-art  embodied  genetics  genomics  compressed-sensing  high-dimension  machine-learning  missing-heritability  hsu  scitariat  education  🌞  frontier  britain  regression  data  visualization  correlation  phase-transition  multi  commentary  summary  pdf  slides  brands  skunkworks  hard-tech  presentation  talks  methodology  intricacy  bioinformatics  scaling-up  stat-power  sparsity  norms  nibble  speedometer  stats  linear-models  2017  biodet 
september 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study  org:nat  papers  machine-learning  chemistry  measurement  volo-avolo  lower-bounds  analysis  realness  speedometer  nibble  🔬  applications  frontier  state-of-art  no-go  accuracy  interdisciplinary 
july 2017 by nhaliday
A combined analysis of genetically correlated traits identifies 107 loci associated with intelligence | bioRxiv
We apply MTAG to three large GWAS: Sniekers et al. (2017) on intelligence, Okbay et al. (2016) on educational attainment, and Hill et al. (2016) on household income. By combining these three samples our functional sample size increased from 78,308 participants to 147,194. We found 107 independent loci associated with intelligence, implicating 233 genes, using both SNP-based and gene-based GWAS. We find evidence that neurogenesis may explain some of the biological differences in intelligence, as well as genes expressed in the synapse and those involved in the regulation of the nervous system.

...

Finally, using an independent sample of 6 844 individuals we were able to predict 7% of intelligence using SNP data alone.
study  bio  preprint  biodet  behavioral-gen  GWAS  genetics  iq  education  compensation  composition-decomposition  🌞  gwern  meta-analysis  genetic-correlation  scaling-up  methodology  correlation  state-of-art  neuro  neuro-nitgrit  dimensionality 
july 2017 by nhaliday
Genomic analysis of family data reveals additional genetic effects on intelligence and personality | bioRxiv
methodology:
Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1003520
Pedigree- and SNP-Associated Genetics and Recent Environment are the Major Contributors to Anthropometric and Cardiometabolic Trait Variation: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1005804

Missing Heritability – found?: https://westhunt.wordpress.com/2017/02/09/missing-heritability-found/
There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Some of the variants, the ones we find with GWAS, are fairly common and fitness-neutral: the variant that slightly increases IQ confers the same fitness (or very close to the same) as the one that slightly decreases IQ – presumably because of other effects it has. If this weren’t the case, it would be impossible for both of the variants to remain common.

The rare variants that affect IQ will generally decrease IQ – and since pleiotropy is the norm, usually they’ll be deleterious in other ways as well. Genetic load.

Happy families are all alike; every unhappy family is unhappy in its own way.: https://westhunt.wordpress.com/2017/06/06/happy-families-are-all-alike-every-unhappy-family-is-unhappy-in-its-own-way/
It now looks as if the majority of the genetic variance in IQ is the product of mutational load, and the same may be true for many psychological traits. To the extent this is the case, a lot of human psychological variation must be non-adaptive. Maybe some personality variation fulfills an evolutionary function, but a lot does not. Being a dumb asshole may be a bug, rather than a feature. More generally, this kind of analysis could show us whether particular low-fitness syndromes, like autism, were ever strategies – I suspect not.

It’s bad news for medicine and psychiatry, though. It would suggest that what we call a given type of mental illness, like schizophrenia, is really a grab-bag of many different syndromes. The ultimate causes are extremely varied: at best, there may be shared intermediate causal factors. Not good news for drug development: individualized medicine is a threat, not a promise.

see also comment at: https://pinboard.in/u:nhaliday/b:a6ab4034b0d0

https://www.reddit.com/r/slatestarcodex/comments/5sldfa/genomic_analysis_of_family_data_reveals/
So the big implication here is that it's better than I had dared hope - like Yang/Visscher/Hsu have argued, the old GCTA estimate of ~0.3 is indeed a rather loose lower bound on additive genetic variants, and the rest of the missing heritability is just the relatively uncommon additive variants (ie <1% frequency), and so, like Yang demonstrated with height, using much more comprehensive imputation of SNP scores or using whole-genomes will be able to explain almost all of the genetic contribution. In other words, with better imputation panels, we can go back and squeeze out better polygenic scores from old GWASes, new GWASes will be able to reach and break the 0.3 upper bound, and eventually we can feasibly predict 0.5-0.8. Between the expanding sample sizes from biobanks, the still-falling price of whole genomes, the gradual development of better regression methods (informative priors, biological annotation information, networks, genetic correlations), and better imputation, the future of GWAS polygenic scores is bright. Which obviously will be extremely helpful for embryo selection/genome synthesis.

The argument that this supports mutation-selection balance is weaker but plausible. I hope that it's true, because if that's why there is so much genetic variation in intelligence, then that strongly encourages genetic engineering - there is no good reason or Chesterton fence for intelligence variants being non-fixed, it's just that evolution is too slow to purge the constantly-accumulating bad variants. And we can do better.
https://rubenarslan.github.io/generation_scotland_pedigree_gcta/
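
Since polygenic scores carry most of the weight in the comment above, here is a minimal sketch of what one actually is: an additive, per-SNP-weighted sum of allele counts, with the weights taken from GWAS effect-size estimates. The arrays below are made-up placeholders, not data from any of the linked studies, and the simple additive model is an assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_snps = 1_000, 5_000

    # 0/1/2 minor-allele counts per person per SNP (hypothetical genotypes)
    genotypes = rng.integers(0, 3, size=(n_people, n_snps))

    # Per-SNP additive effect estimates, e.g. betas from GWAS summary stats (hypothetical)
    betas = rng.normal(0.0, 0.01, size=n_snps)

    # The polygenic score is just the weighted sum of allele counts
    pgs = genotypes @ betas

    # Usually standardized before being used as a predictor of the phenotype
    pgs_z = (pgs - pgs.mean()) / pgs.std()
    print(pgs_z[:5])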

The surprising implications of familial association in disease risk: https://arxiv.org/abs/1707.00014
https://spottedtoad.wordpress.com/2017/06/09/personalized-medicine-wont-work-but-race-based-medicine-probably-will/
As Greg Cochran has pointed out, this probably isn’t going to work. There are a few genes like BRCA1 (which makes you more likely to get breast and ovarian cancer) that we can detect and might affect treatment, but an awful lot of disease turns out to be just the result of random chance and deleterious mutation. This means that you can’t easily tailor disease treatment to people’s genes, because everybody is fucked up in their own special way. If Johnny is schizophrenic because of 100 random errors in the genes that code for his neurons, and Jack is schizophrenic because of 100 other random errors, there’s very little way to test a drug to work for either of them – they’re the only one in the world, most likely, with that specific pattern of errors. This is, presumably, why the incidence of schizophrenia and autism rises in populations when dads get older – more random errors in sperm formation mean more random errors in the baby’s genes, and more things that go wrong down the line.

The looming crisis in human genetics: http://www.economist.com/node/14742737
Some awkward news ahead
- Geoffrey Miller

Human geneticists have reached a private crisis of conscience, and it will become public knowledge in 2010. The crisis has depressing health implications and alarming political ones. In a nutshell: the new genetics will reveal much less than hoped about how to cure disease, and much more than feared about human evolution and inequality, including genetic differences between classes, ethnicities and races.

2009!
study  preprint  bio  biodet  behavioral-gen  GWAS  missing-heritability  QTL  🌞  scaling-up  replication  iq  education  spearhead  sib-study  multi  west-hunter  scitariat  genetic-load  mutation  medicine  meta:medicine  stylized-facts  ratty  unaffiliated  commentary  rhetoric  wonkish  genetics  genomics  race  pop-structure  poast  population-genetics  psychiatry  aphorism  homo-hetero  generalization  scale  state-of-art  ssc  reddit  social  summary  gwern  methodology  personality  britain  anglo  enhancement  roots  s:*  2017  data  visualization  database  let-me-see  bioinformatics  news  org:rec  org:anglo  org:biz  track-record  prediction  identity-politics  pop-diff  recent-selection  westminster  inequality  egalitarianism-hierarchy  high-dimension  applications  dimensionality  ideas  no-go  volo-avolo  magnitude  variance-components  GCTA  tradeoffs  counter-revolution  org:mat  dysgenics  paternal-age  distribution  chart  abortion-contraception-embryo 
june 2017 by nhaliday
Overview of current development in electrical energy storage technologies and the application potential in power system operation
- An overview of the state-of-the-art in Electrical Energy Storage (EES) is provided.
- A comprehensive analysis of various EES technologies is carried out.
- An application potential analysis of the reviewed EES technologies is presented.
- The presented synthesis to EES technologies can be used to support future R&D and deployment.

Prospects and Limits of Energy Storage in Batteries: http://pubs.acs.org/doi/abs/10.1021/jz5026273
study  survey  state-of-art  energy-resources  heavy-industry  chemistry  applications  electromag  stock-flow  wonkish  frontier  technology  biophysical-econ  the-world-is-just-atoms  🔬  phys-energy  ideas  speedometer  dirty-hands  multi 
april 2017 by nhaliday
Genetics and educational attainment | npj Science of Learning
Figure 1 is quite good
Sibling Correlations for Behavioral Traits. This figure displays sibling correlations for five traits measured in a large sample of Swedish brother pairs born 1951–1970. All outcomes except years of schooling are measured at conscription, around the age of 18.

correlations for IQ/EA for adoptees are actually nontrivial in adulthood, hmm

Figure 2 has GWAS R^2s through 2016 (in-sample, I guess?)
study  org:nat  biodet  education  methodology  essay  survey  genetics  GWAS  variance-components  init  causation  🌞  metrics  population-genetics  explanation  unit  nibble  len:short  big-picture  behavioral-gen  state-of-art  iq  embodied  correlation  twin-study  sib-study  summary  europe  nordic  data  visualization  s:*  tip-of-tongue  spearhead  bioinformatics 
february 2017 by nhaliday
Performance Trends in AI | Otium
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?

In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.

In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.

In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.

In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.

In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.

...

The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

http://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/
ratty  core-rats  summary  prediction  trends  analysis  spock  ai  deep-learning  state-of-art  🤖  deepgoog  games  nlp  computer-vision  nibble  reinforcement  model-class  faq  org:bleg  shift  chart  technology  language  audio  accuracy  speaking  foreign-lang  definite-planning  china  asia  microsoft  google  ideas  article  speedometer  whiggish-hegelian  yvain  ssc  smoothness  data  hsu  scitariat  genetics  iq  enhancement  genetic-load  neuro  neuro-nitgrit  brain-scan  time-series  multiplicative  iteration-recursion  additive  multi 
january 2017 by nhaliday
J. Intell. | Free Full-Text | Zeroing in on the Genetics of Intelligence
Rare variants and mutations of large effect do not appear to play a main role beyond intellectual disability. Common variants can account for about half the heritability of intelligence and show promise that collaborative efforts will identify more causal genetic variants. Gene–gene interactions may explain some of the remainder, but are only starting to be tapped. Evolutionarily, stabilizing selection and selective (near)-neutrality are consistent with the facts known so far.

Idiot Proof: https://westhunt.wordpress.com/2016/01/07/idiot-proof/
I was looking at a recent survey of current knowledge in psychological genetics. The gist is that common variants – which can’t have decreased fitness much in the average past, since they’re common – are the main story in the genetic architecture of intelligence. Genetic load doesn’t seem very important, except at the low end. Big-effect deleterious mutations can certainly leave you retarded, but moderate differences in the number of slightly-deleterious mutations don’t have any observable effect – except possibly in the extremely intelligent, but that’s uncertain at this point. Not what I expected, but that’s how things look right now. It would seem that brain development is robust to small tweaks, although there must be some limit. The results with older fathers apparently fit this pattern: they have more kids with something seriously wrong, but although there should be extra mild mutations in their kids as well as the occasional serious one, the kids without obvious serious problems don’t have depressed IQ.
study  genetics  iq  QTL  🌞  survey  equilibrium  evolution  biodet  missing-heritability  nibble  roots  big-picture  s:*  behavioral-gen  chart  state-of-art  multi  west-hunter  sapiens  summary  neuro  intelligence  commentary  robust  paternal-age  sensitivity  perturbation  epidemiology  stylized-facts  scitariat  rot 
december 2016 by nhaliday
The Evolutionary Genetics of Personality Revisited
While mutations clearly affect the very low end of the intelligence continuum, individual differences in the normal intelligence range seem to be surprisingly robust against mutations, suggesting that they might have been canalized to withstand such perturbations. Most personality traits, by contrast, seem to be neither neutral to selection nor under consistent directional or stabilizing selection. Instead evidence is in line with balancing selection acting on personality traits, likely supported by human tendencies to seek out, construct and adapt to fitting environments.

shorter copy: http://www.larspenke.eu/pdfs/Penke_&_Jokela_2016_-_Evolutionary_Genetics_of_Personality_Revisited.pdf

The Evolutionary Genetics of Personality: http://www.larspenke.eu/pdfs/Penke_et_al_2007_-_Evolutionary_genetics_of_personality_target.pdf
Based on evolutionary genetic theory and empirical results from behaviour genetics and personality psychology, we conclude that selective neutrality is largely irrelevant, that mutation-selection balance seems best at explaining genetic variance in intelligence, and that balancing selection by environmental heterogeneity seems best at explaining genetic variance in personality traits. We propose a general model of heritable personality differences that conceptualises intelligence as fitness components and personality traits as individual reaction norms of genotypes across environments, with different fitness consequences in different environmental niches. We also discuss the place of mental health in the model.
study  spearhead  models  genetics  iq  personality  🌞  evopsych  evolution  sapiens  eden  pdf  explanation  survey  population-genetics  red-queen  metabuch  multi  EEA  essay  equilibrium  robust  big-picture  biodet  unit  QTL  len:long  sensitivity  perturbation  roots  EGT  deep-materialism  s:*  behavioral-gen  chart  intelligence  article  speculation  psychology  cog-psych  state-of-art 
december 2016 by nhaliday
Information Processing: Thought vectors and the dimensionality of the space of concepts
If we trained a deep net to translate sentences about Physics from Martian to English, we could (roughly) estimate the "conceptual depth" of the subject. We could even compare two different subjects, such as Physics versus Art History.
hsu  ai  deep-learning  google  speculation  commentary  news  language  embeddings  neurons  thinking  papers  summary  scitariat  dimensionality  conceptual-vocab  vague  nlp  nibble  state-of-art  features 
december 2016 by nhaliday
The Uniqueness of Italian Internal Divergence | Notes On Liberty
Measuring Productivity Dispersion: Lessons From Counting One-Hundred Million Ballots: http://cepr.org/active/publications/discussion_papers/dp.php?dpno=12273
We measure output per worker in nearly 8,000 municipalities in the Italian electoral process using ballot counting times in the 2013 general election and two referenda in 2016. We document large productivity dispersion across provinces in this very uniform and low-skill task that involves nearly no technology and requires limited physical capital. Using a development accounting framework, this measure explains up to half of the firm-level productivity dispersion across Italian provinces and more than half the north-south productivity gap in Italy. We explore potential drivers of our measure of labor efficiency and find that its association with measures of work ethic and trust is particularly robust.

Interregional Migration, Human Capital Externalities and Unemployment Dynamics: Evidence from Italian Provinces: https://www.econstor.eu/bitstream/10419/168560/1/Econstor.pdf
Using longitudinal data over the years 2002-2011 for 103 NUTS-3 Italian regions, we document that net outflows of human capital from the South to the North have increased the unemployment rate in the South, while they did not affect the unemployment rate in the North. Our analysis contributes to the literature on interregional human capital mobility, suggesting that reducing human capital flight from Southern regions should be a priority.

EXPLAINING ITALY’S NORTH-SOUTH DIVIDE: Experimental evidence of large differences in social norms of cooperation: http://www.res.org.uk/details/mediabrief/9633311/EXPLAINING-ITALYS-NORTH-SOUTH-DIVIDE-Experimental-evidence-of-large-differences-.html
Amoral Familism, Social Capital, or Trust? The Behavioural Foundations of the Italian North-South Divide: http://conference.iza.org/conference_files/CognitiveSkills_2014/casari_m8572.pdf

At the root of the North‐South cooperation gap in Italy Preferences or beliefs?: https://onlinelibrary.wiley.com/doi/abs/10.1111/ecoj.12608
Southerners share the same pro‐social preferences, but differ both in their belief about cooperativeness and in the aversion to social risk ‐ respectively more pessimistic and stronger among Southerners.

Past dominations, current institutions and the Italian regional economic performance: http://www.siecon.org/online/wp-content/uploads/2012/08/DiLiberto-Sideri.pdf
We study the connection between economic performance and the quality of government institutions for the sample of 103 Italian NUTS3 regions, including new measures of institutional quality calculated using data on the provision of four areas of public service: health, educational infrastructures, environment and energy. In order to address likely endogeneity problems, we use the histories of the different foreign dominations that ruled Italian regions between the 16th and 17th century and over seven hundred years before the creation of the unified Italian State. Our results suggest a significant role of past historical institutions on the current public administration efficiency and show that the latter makes a difference to the economic performance of regions. Overall, our analysis confirms that informal institutions matter for development, and that history can be used to find suitable instruments

Figure 1 – Institutional quality: territorial distribution

Figure 5: Italy during the period 1560-1659 (part A) and corresponding current provinces (part B)

Figure 6 –Former Spanish provinces

Italy’s North-South divide (1861-2011): the state of art: https://mpra.ub.uni-muenchen.de/62209/1/MPRA_paper_62209.pdf
My main argument is summed up in the conclusions: there was a socio-institutional divide between the North and the South of the peninsula, that pre-exists Unification, in some respects grows stronger with it and is never bridged throughout the history of post-unification Italy. Admittedly, some socio-institutional convergence took place in the last decades, but this went in a direction opposite to the desirable one − that is, the North and Italy as a whole have begun to look similar to the South, rather than vice versa.

La cartina dell’ISTAT che mostra dove si leggono più libri in Italia: http://www.ilpost.it/flashes/istat-lettori-regioni-italiane/
ISTAT map showing where more books are read in Italy
data  mediterranean  europe  economics  growth-econ  maps  econotariat  pseudoE  history  divergence  econ-metrics  early-modern  mostly-modern  shift  broad-econ  article  wealth-of-nations  within-group  multi  econ-productivity  discipline  microfoundations  trust  cohesion  labor  natural-experiment  field-study  elections  study  behavioral-econ  GT-101  coordination  putnam-like  🎩  outcome-risk  roots  endo-exo  social-capital  social-norms  summary  cultural-dynamics  pdf  incentives  values  n-factor  efficiency  migration  longitudinal  human-capital  mobility  s-factor  econometrics  institutions  path-dependence  conquest-empire  cliometrics  survey  state-of-art  wealth  geography  input-output  endogenous-exogenous  medieval  leviathan  studying  chart  hari-seldon  descriptive 
december 2016 by nhaliday
predictive models - Is this the state of art regression methodology? - Cross Validated
I've been following Kaggle competitions for a long time and I come to realize that many winning strategies involve using at least one of the "big threes": bagging, boosting and stacking.

For regressions, rather than focusing on building one best possible regression model, building multiple regression models such as (Generalized) linear regression, random forest, KNN, NN, and SVM regression models and blending the results into one in a reasonable way seems to out-perform each individual method a lot of times.
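
A minimal sketch of the blending/stacking idea described above, using scikit-learn's StackingRegressor: heterogeneous base models whose out-of-fold predictions are combined by a simple linear meta-learner. The synthetic data, choice of base learners, and hyperparameters are arbitrary illustrative assumptions, not anything prescribed in the thread.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic data standing in for whatever regression task is at hand
    X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

    stack = StackingRegressor(
        estimators=[
            ("lin", Ridge(alpha=1.0)),
            ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
            ("knn", make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10))),
            ("svr", make_pipeline(StandardScaler(), SVR(C=10.0))),
        ],
        final_estimator=Ridge(alpha=1.0),  # meta-learner blends the base predictions
        cv=5,                              # base models predict out-of-fold to avoid leakage
    )

    # Compare the blend against a single strong base model
    for name, model in [("stack", stack), ("rf", RandomForestRegressor(n_estimators=200, random_state=0))]:
        print(name, cross_val_score(model, X, y, cv=5, scoring="r2").mean().round(3))
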
q-n-a  state-of-art  machine-learning  acm  data-science  atoms  overflow  soft-question  regression  ensembles  nibble  oly 
november 2016 by nhaliday
Information Processing: Search results for compressed sensing
https://www.unz.com/jthompson/the-hsu-boundary/
http://infoproc.blogspot.com/2017/09/phase-transitions-and-genomic.html
Added: Here are comments from "Donoho-Student":
Donoho-Student says:
September 14, 2017 at 8:27 pm GMT • 100 Words

The Donoho-Tanner transition describes the noise-free (h2=1) case, which has a direct analog in the geometry of polytopes.

The n = 30s result from Hsu et al. (specifically the value of the coefficient, 30, when p is the appropriate number of SNPs on an array and h2 = 0.5) is obtained via simulation using actual genome matrices, and is original to them. (There is no simple formula that gives this number.) The D-T transition had only been established in the past for certain classes of matrices, like random matrices with specific distributions. Those results cannot be immediately applied to genomes.

The estimate that s is (order of magnitude) 10k is also a key input.

I think Hsu refers to n = 1 million instead of 30 * 10k = 300k because the effective SNP heritability of IQ might be less than h2 = 0.5 — there is noise in the phenotype measurement, etc.

Donoho-Student says:
September 15, 2017 at 11:27 am GMT • 200 Words

Lasso is a common statistical method but most people who use it are not familiar with the mathematical theorems from compressed sensing. These results give performance guarantees and describe phase transition behavior, but because they are rigorous theorems they only apply to specific classes of sensor matrices, such as simple random matrices. Genomes have correlation structure, so the theorems do not directly apply to the real world case of interest, as is often true.

What the Hsu paper shows is that the exact D-T phase transition appears in the noiseless (h2 = 1) problem using genome matrices, and a smoothed version appears in the problem with realistic h2. These are new results, as is the prediction for how much data is required to cross the boundary. I don’t think most gwas people are familiar with these results. If they did understand the results they would fund/design adequately powered studies capable of solving lots of complex phenotypes, medical conditions as well as IQ, that have significant h2.

Most people who use lasso, as opposed to people who prove theorems, are not even aware of the D-T transition. Even most people who prove theorems have followed the Candes-Tao line of attack (restricted isometry property) and don’t think much about D-T. Although D eventually proved some things about the phase transition using high dimensional geometry, it was initially discovered via simulation using simple random matrices.
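
As a concrete illustration of the "discovered via simulation using simple random matrices" point, here is a toy sketch (mine, not code from Hsu's paper or these comments): draw a Gaussian random design, plant a sparse coefficient vector, and check whether lasso recovers the true support as n grows. The dimensions, penalty, and recovery criterion are arbitrary assumptions, and a real genome matrix has correlation structure this toy ignores.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    p, s = 2000, 20  # candidate predictors and true nonzeros (toy sizes)

    def support_recovered(n, noise_sd=0.0):
        X = rng.standard_normal((n, p))  # simple random (Gaussian) sensing matrix
        beta = np.zeros(p)
        beta[rng.choice(p, s, replace=False)] = rng.choice([-1.0, 1.0], s)
        y = X @ beta + noise_sd * rng.standard_normal(n)
        fit = Lasso(alpha=0.05, max_iter=50_000).fit(X, y)
        found = set(np.flatnonzero(np.abs(fit.coef_) > 1e-3))
        return set(np.flatnonzero(beta)).issubset(found)

    # Recovery should flip from False to True fairly sharply as n crosses a threshold
    for n in (50, 100, 200, 400, 800):
        print(n, support_recovered(n))
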
hsu  list  stream  genomics  genetics  concept  stats  methodology  scaling-up  scitariat  sparsity  regression  biodet  bioinformatics  norms  nibble  compressed-sensing  applications  search  ideas  multi  albion  behavioral-gen  iq  state-of-art  commentary  explanation  phase-transition  measurement  volo-avolo  regularization  levers  novelty  the-trenches  liner-notes  clarity  random-matrices  innovation  high-dimension  linear-models 
november 2016 by nhaliday
