nhaliday + applications   87

GPS and Relativity
The nominal GPS configuration consists of a network of 24 satellites in high orbits around the Earth, but up to 30 or so satellites may be on station at any given time. Each satellite in the GPS constellation orbits at an altitude of about 20,000 km from the ground, and has an orbital speed of about 14,000 km/hour (the orbital period is roughly 12 hours - contrary to popular belief, GPS satellites are not in geosynchronous or geostationary orbits). The satellite orbits are distributed so that at least 4 satellites are always visible from any point on the Earth at any given instant (with up to 12 visible at one time). Each satellite carries with it an atomic clock that "ticks" with a nominal accuracy of 1 nanosecond (1 billionth of a second). A GPS receiver in an airplane determines its current position and course by comparing the time signals it receives from the currently visible GPS satellites (usually 6 to 12) and trilaterating on the known positions of each satellite[1]. The precision achieved is remarkable: even a simple hand-held GPS receiver can determine your absolute position on the surface of the Earth to within 5 to 10 meters in only a few seconds. A GPS receiver in a car can give accurate readings of position, speed, and course in real-time!

More sophisticated techniques, like Differential GPS (DGPS) and Real-Time Kinematic (RTK) methods, deliver centimeter-level positions within a few minutes of measurement. Such methods allow GPS and related satellite-navigation data to be used for high-precision surveying, autonomous driving, and other applications requiring greater real-time position accuracy than can be achieved with standard GPS receivers.

To achieve this level of precision, the clock ticks from the GPS satellites must be known to an accuracy of 20-30 nanoseconds. However, because the satellites are constantly moving relative to observers on the Earth, effects predicted by the Special and General theories of Relativity must be taken into account to achieve that accuracy.

Because an observer on the ground sees the satellites in motion relative to them, Special Relativity predicts that their clocks should appear to tick more slowly (see the Special Relativity lecture). As a result, the on-board atomic clocks should fall behind clocks on the ground by about 7 microseconds per day because of the time dilation effect of their relative motion [2].

Further, the satellites are in orbits high above the Earth, where the curvature of spacetime due to the Earth's mass is less than it is at the Earth's surface. A prediction of General Relativity is that clocks closer to a massive object will seem to tick more slowly than those located further away (see the Black Holes lecture). As such, when viewed from the surface of the Earth, the clocks on the satellites appear to be ticking faster than identical clocks on the ground. A calculation using General Relativity predicts that the clocks in each GPS satellite should get ahead of ground-based clocks by 45 microseconds per day.

The combination of these two relativistic effects means that the clocks on board each satellite should tick faster than identical clocks on the ground by about 38 microseconds per day (45 - 7 = 38)! This sounds small, but the high precision required of the GPS system demands nanosecond accuracy, and 38 microseconds is 38,000 nanoseconds. If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time.
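As a back-of-the-envelope check of those figures, the minimal sketch below recomputes the two effects from standard constants (the gravitational parameter, Earth radius, and GPS orbital radius are assumed textbook values, not numbers from the text above; Earth's rotation and orbital eccentricity are ignored):

    import math

    # Assumed standard values, not taken from the article above
    GM = 3.986004418e14     # m^3/s^2, Earth's gravitational parameter
    c = 299_792_458.0       # m/s, speed of light
    R_earth = 6.371e6       # m, mean Earth radius
    r_gps = 2.6561e7        # m, GPS orbital radius (~20,200 km altitude)

    v = math.sqrt(GM / r_gps)                        # circular orbital speed, ~3.9 km/s
    sr = -v**2 / (2 * c**2)                          # special-relativistic slowdown (fractional rate)
    gr = (GM / c**2) * (1 / R_earth - 1 / r_gps)     # gravitational speedup (fractional rate)

    day = 86_400
    print(f"SR:  {sr * day * 1e6:+.1f} microseconds/day")         # about -7
    print(f"GR:  {gr * day * 1e6:+.1f} microseconds/day")         # about +46
    print(f"net: {(sr + gr) * day * 1e6:+.1f} microseconds/day")  # about +38

Multiplying the net 38 microseconds/day of clock error by the speed of light also roughly reproduces the quoted position error: about 11 km of accumulated ranging error per day.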
nibble  org:junk  org:edu  explanation  trivia  cocktail  physics  gravity  relativity  applications  time  synchrony  speed  space  navigation  technology
november 2017 by nhaliday
Biopolitics | West Hunter
I have said before that no currently popular ideology acknowledges well-established results of behavioral genetics, quantitative genetics, or psychometrics. Or evolutionary psychology.

What if some ideology or political tradition did? What could they do? What problems could they solve, what capabilities would they have?

Various past societies knew a few things along these lines. They knew that there were significant physical and behavioral differences between the sexes, which is forbidden knowledge in modern academia. Some knew that close inbreeding had negative consequences, which knowledge is on its way to the forbidden zone as I speak. Some cultures with wide enough geographical experience had realistic notions of average cognitive differences between populations. Some people had a rough idea about regression to the mean [ in dynasties], and the Ottomans came up with a highly unpleasant solution – the law of fratricide. The Romans, during the Principate, dealt with the same problem through imperial adoption. The Chinese exam system is in part aimed at the same problem.

...

At least some past societies avoided the social patterns leading to the nasty dysgenic trends we are experiencing today, but for the most part that is due to the anthropic principle: if they’d done something else you wouldn’t be reading this. Also to between-group competition: if you fuck yourself up when others don’t, you may well be replaced. Which is still the case.

If you were designing an ideology from scratch you could make use of all of these facts – not that thinking about genetics and selection hands you the solution to every problem, but you’d have more strings to your bow. And, off the top of your head, you’d understand certain trends that are behind the mountains of Estcarp, for our current ruling classes: invisible and unthinkable, That Which Must Not Be Named.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96613
“The closest…s the sort of libertarianism promulgated by Charles Murray”
Not very close..
A government that was fully aware of the implications and possibilities of human genetics, one that had the usual kind of state goals [ like persistence and increased power] , would not necessarily be particularly libertarian.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96797
And giving tax breaks to college-educated liberals to have babies wouldn’t appeal much to Trump voters, methinks.

It might be worth making a reasonably comprehensive list of the facts and preferences that a good liberal is supposed to embrace and seem to believe. You would have to be fairly quick about it, before it changes. Then you could evaluate the social impact of having more of them.

Rise and Fall: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/
Every society selects for something: generally it looks as if the direction of selection pressure is more or less an accident. Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this. I mean, they would have known how, if they’d wanted to, just as they knew how to select for destriers, coursers, and palfreys. It was still possible to know such things in the Middle Ages, because Harvard did not yet exist.

A rising empire needs quality human capital, which implies that at a minimum the budding imperial society must not have been strongly dysgenic. At least not in the beginning. But winning changes many things, possibly including selective pressures. Imagine an empire with substantial urbanization, one in which talented guys routinely end up living in cities – cities that were demographic sinks. That might change things. Or try to imagine an empire in which survival challenges are greatly reduced, at least for elites, so that people have nothing to keep their minds occupied and end up worshiping Magna Mater. Imagine an empire that conquers a rival with interesting local pathogens and brings some of them home. Or one that uses up a lot of its manpower conquering less-talented subjects and importing masses of those losers into the imperial heartland.

If any of those scenarios happened, they might eventually result in imperial decline – decline due to decreased biological capital.

Right now this is speculation. If we knew enough about the GWAS hits for intelligence, and had enough ancient DNA, we might be able to observe that rise and fall, just as we see dysgenic trends in contemporary populations. But that won’t happen for a long time. Say, a year.

hmm: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100350
“Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this.”

Maybe the Chinese imperial examination could effectively have been a selection for intelligence.
--
Nope. I’ve modelled it: the fraction of winners is far too small to have much effect, while there were likely fitness costs from the arduous preparation. Moreover, there’s a recent paper [Detecting polygenic adaptation in admixture graphs] that looks for indications of when selection for IQ hit northeast Asia: quite a while ago. Obvious though, since Japan has similar scores without ever having had that kind of examination system.

decline of British Empire and utility of different components: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100390
Once upon a time, India was a money maker for the British, mainly because they appropriated Bengali tax revenue, rather than through trade. The rest of the Empire was not worth much: it didn’t materially boost British per-capita income or military potential. Silesia was worth more to Germany, conferred more war-making power, than Africa was to Britain.
--
If you get even a little local opposition, a colony won’t pay for itself. I seem to remember that there was some, in Palestine.
--
Angels from on high paid for the Boer War.

You know, someone in the 50’s asked for the numbers – how much various colonies cost and how much they paid.

west-hunter  scitariat  discussion  ideas  politics  polisci  sociology  anthropology  cultural-dynamics  social-structure  social-science  evopsych  agri-mindset  pop-diff  kinship  regression-to-mean  anthropic  selection  group-selection  impact  gender  gender-diff  conquest-empire  MENA  history  iron-age  mediterranean  the-classics  china  asia  sinosphere  technocracy  scifi-fantasy  aphorism  alt-inst  recruiting  applications  medieval  early-modern  institutions  broad-econ  biodet  behavioral-gen  gnon  civilization  tradition  leviathan  elite  competition  cocktail  🌞  insight  sapiens  arbitrage  paying-rent  realness  kumbaya-kult  war  slippery-slope  unintended-consequences  deep-materialism  inequality  malthus  dysgenics  multi  murray  poast  speculation  randy-ayndy  authoritarianism  time-preference  patience  long-short-run  leadership  coalitions  ideology  rant  westminster  truth  flux-stasis  new-religion  identity-politics  left-wing  counter-revolution  fertility  signaling  status  darwinian  orwellian  ability-competence  organizing
october 2017 by nhaliday
Controversial New Theory Suggests Life Wasn't a Fluke of Biology—It Was Physics | WIRED
First Support for a Physics Theory of Life: https://www.quantamagazine.org/first-support-for-a-physics-theory-of-life-20170726/
Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.
news  org:mag  profile  popsci  bio  xenobio  deep-materialism  roots  eden  physics  interdisciplinary  applications  ideas  thermo  complex-systems  cybernetics  entropy-like  order-disorder  arrows  phys-energy  emergent  empirical  org:sci  org:inst  nibble  chemistry  fixed-point  wild-ideas
august 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study  org:nat  papers  machine-learning  chemistry  measurement  volo-avolo  lower-bounds  analysis  realness  speedometer  nibble  🔬  applications  frontier  state-of-art  no-go  accuracy  interdisciplinary
july 2017 by nhaliday
Genomic analysis of family data reveals additional genetic effects on intelligence and personality | bioRxiv
methodology:
Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1003520
Pedigree- and SNP-Associated Genetics and Recent Environment are the Major Contributors to Anthropometric and Cardiometabolic Trait Variation: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1005804

Missing Heritability – found?: https://westhunt.wordpress.com/2017/02/09/missing-heritability-found/
There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Some of the variants, the ones we find with GWAS, are fairly common and fitness-neutral: the variant that slightly increases IQ confers the same fitness (or very close to the same) as the one that slightly decreases IQ – presumably because of other effects it has. If this weren’t the case, it would be impossible for both of the variants to remain common.

The rare variants that affect IQ will generally decrease IQ – and since pleiotropy is the norm, usually they’ll be deleterious in other ways as well. Genetic load.

Happy families are all alike; every unhappy family is unhappy in its own way.: https://westhunt.wordpress.com/2017/06/06/happy-families-are-all-alike-every-unhappy-family-is-unhappy-in-its-own-way/
It now looks as if the majority of the genetic variance in IQ is the product of mutational load, and the same may be true for many psychological traits. To the extent this is the case, a lot of human psychological variation must be non-adaptive. Maybe some personality variation fulfills an evolutionary function, but a lot does not. Being a dumb asshole may be a bug, rather than a feature. More generally, this kind of analysis could show us whether particular low-fitness syndromes, like autism, were ever strategies – I suspect not.

It’s bad news for medicine and psychiatry, though. It would suggest that what we call a given type of mental illness, like schizophrenia, is really a grab-bag of many different syndromes. The ultimate causes are extremely varied: at best, there may be shared intermediate causal factors. Not good news for drug development: individualized medicine is a threat, not a promise.

So the big implication here is that it's better than I had dared hope - like Yang/Visscher/Hsu have argued, the old GCTA estimate of ~0.3 is indeed a rather loose lower bound on additive genetic variants, and the rest of the missing heritability is just the relatively uncommon additive variants (ie <1% frequency), and so, like Yang demonstrated with height, using much more comprehensive imputation of SNP scores or using whole-genomes will be able to explain almost all of the genetic contribution. In other words, with better imputation panels, we can go back and squeeze out better polygenic scores from old GWASes, new GWASes will be able to reach and break the 0.3 upper bound, and eventually we can feasibly predict 0.5-0.8. Between the expanding sample sizes from biobanks, the still-falling price of whole genomes, the gradual development of better regression methods (informative priors, biological annotation information, networks, genetic correlations), and better imputation, the future of GWAS polygenic scores is bright. Which obviously will be extremely helpful for embryo selection/genome synthesis.

The argument that this supports mutation-selection balance is weaker but plausible. I hope that it's true, because if that's why there is so much genetic variation in intelligence, then that strongly encourages genetic engineering - there is no good reason or Chesterton fence for intelligence variants being non-fixed, it's just that evolution is too slow to purge the constantly-accumulating bad variants. And we can do better.
https://rubenarslan.github.io/generation_scotland_pedigree_gcta/
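For the polygenic scores discussed a couple of paragraphs up, here is a minimal sketch of what such a score is mechanically (a weighted sum of allele dosages); the effect sizes and genotypes are made up for illustration, not real GWAS output:

    import numpy as np

    rng = np.random.default_rng(1)
    n_people, n_snps = 5, 1000
    betas = rng.normal(0, 0.01, n_snps)                                # pretend per-allele effect estimates
    dosages = rng.binomial(2, 0.3, (n_people, n_snps)).astype(float)   # 0/1/2 allele counts per person

    scores = dosages @ betas                                           # one polygenic score per person
    print(scores)

On the argument above, better imputation and whole genomes enter this picture simply as more, and better-measured, columns in the dosage matrix.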

The surprising implications of familial association in disease risk: https://arxiv.org/abs/1707.00014
As Greg Cochran has pointed out, this probably isn’t going to work. There are a few genes like BRCA1 (which makes you more likely to get breast and ovarian cancer) that we can detect and might affect treatment, but an awful lot of disease turns out to be just the result of random chance and deleterious mutation. This means that you can’t easily tailor disease treatment to people’s genes, because everybody is fucked up in their own special way. If Johnny is schizophrenic because of 100 random errors in the genes that code for his neurons, and Jack is schizophrenic because of 100 other random errors, there’s very little way to test a drug to work for either of them – they’re the only one in the world, most likely, with that specific pattern of errors. This is, presumably, why the incidence of schizophrenia and autism rises in populations when dads get older – more random errors in sperm formation mean more random errors in the baby’s genes, and more things that go wrong down the line.

The looming crisis in human genetics: http://www.economist.com/node/14742737
- Geoffrey Miller

Human geneticists have reached a private crisis of conscience, and it will become public knowledge in 2010. The crisis has depressing health implications and alarming political ones. In a nutshell: the new genetics will reveal much less than hoped about how to cure disease, and much more than feared about human evolution and inequality, including genetic differences between classes, ethnicities and races.

2009!
study  preprint  bio  biodet  behavioral-gen  GWAS  missing-heritability  QTL  🌞  scaling-up  replication  iq  education  spearhead  sib-study  multi  west-hunter  scitariat  genetic-load  mutation  medicine  meta:medicine  stylized-facts  ratty  unaffiliated  commentary  rhetoric  wonkish  genetics  genomics  race  pop-structure  poast  population-genetics  psychiatry  aphorism  homo-hetero  generalization  scale  state-of-art  ssc  reddit  social  summary  gwern  methodology  personality  britain  anglo  enhancement  roots  s:*  2017  data  visualization  database  let-me-see  bioinformatics  news  org:rec  org:anglo  org:biz  track-record  prediction  identity-politics  pop-diff  recent-selection  westminster  inequality  egalitarianism-hierarchy  high-dimension  applications  dimensionality  ideas  no-go  volo-avolo  magnitude  variance-components  GCTA  tradeoffs  counter-revolution  org:mat  dysgenics  paternal-age  distribution  chart  abortion-contraception-embryo
june 2017 by nhaliday
Chinese innovations | West Hunter
I’m interested in hearing about significant innovations out of contemporary China. Good ones. Ideas, inventions, devices, dreams. Throw in Outer China (Taiwan, Hong Kong, Singapore).

super nationalistic dude ("IC") in the comments section (wish his videos had subtitles):
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91378
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91382
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91292
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91315

on the carrier-killer missiles: https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91280
You could take out a carrier task force with a nuke 60 years ago.
--
Then the other side can nuke something and point to the sunk carrier group saying “they started first”.

Hypersonic anti-ship cruise missiles, or the mysterious anti-ship ballistic missiles China has, avoid that.
--
They avoid that because the laws of physics no longer allow radar.

https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91340
I was thinking about the period in which the United States was experiencing rapid industrial growth, on its way to becoming the most powerful industrial nation. At first not much science, but lots and lots of technological innovation. I’m not aware of a corresponding efflorescence of innovative Chinese technology today, but then I don’t know everything: so I asked.

I’m still not aware of it. So maybe the answer is ‘no’.

hmm: https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91389
I would say that a lot of the most intelligent faction is being siphoned over into government work, and thus not focused on technological innovation. We should expect to see societal/political innovation rather than technological if my thesis is true.

There’s some evidence of that.
west-hunter  scitariat  discussion  china  asia  sinosphere  technology  innovation  frontier  novelty  🔬  discovery  cultural-dynamics  geoengineering  applications  ideas  list  zeitgeist  trends  the-bones  expansionism  diaspora  scale  wealth-of-nations  science  orient  chart  great-powers  questions  speedometer  n-factor  microfoundations  the-world-is-just-atoms  the-trenches  dirty-hands  arms  oceans  sky  government  leviathan  alt-inst  authoritarianism  antidemos  multi  poast  nuclear  regularizer  hmm  track-record  survey  institutions  corruption
may 2017 by nhaliday
Talks
Quantum Supremacy: Office of Science and Technology Policy QIS Forum, Eisenhower Executive Office Building, White House Complex, Washington DC, October 18, 2016. Another version at UTCS Faculty Lunch, October 26, 2016. Another version at UT Austin Physics Colloquium, Austin, TX, November 9, 2016.

Complexity-Theoretic Foundations of Quantum Supremacy Experiments: Quantum Algorithms Workshop, Aspen Center for Physics, Aspen, CO, March 25, 2016

When Exactly Do Quantum Computers Provide A Speedup?: Yale Quantum Institute Seminar, Yale University, New Haven, CT, October 10, 2014. Another version at UT Austin Physics Colloquium, Austin, TX, November 19, 2014; Applied and Interdisciplinary Mathematics Seminar, Northeastern University, Boston, MA, November 25, 2014; Hebrew University Physics Colloquium, Jerusalem, Israel, January 5, 2015; Computer Science Colloquium, Technion, Haifa, Israel, January 8, 2015; Stanford University Physics Colloquium, January 27, 2015
tcstariat  aaronson  tcs  complexity  quantum  quantum-info  talks  list  slides  accretion  algorithms  applications  physics  nibble  frontier  computation  volo-avolo  speedometer  questions
may 2017 by nhaliday
Overview of current development in electrical energy storage technologies and the application potential in power system operation
- An overview of the state-of-the-art in Electrical Energy Storage (EES) is provided.
- A comprehensive analysis of various EES technologies is carried out.
- An application potential analysis of the reviewed EES technologies is presented.
- The presented synthesis of EES technologies can be used to support future R&D and deployment.

Prospects and Limits of Energy Storage in Batteries: http://pubs.acs.org/doi/abs/10.1021/jz5026273
study  survey  state-of-art  energy-resources  heavy-industry  chemistry  applications  electromag  stock-flow  wonkish  frontier  technology  biophysical-econ  the-world-is-just-atoms  🔬  phys-energy  ideas  speedometer  dirty-hands  multi
april 2017 by nhaliday
Futuristic Physicists? | Do the Math
interesting comment: https://westhunt.wordpress.com/2014/03/05/outliers/#comment-23087
referring to timelines? or maybe also the jetpack+flying car (doesn't seem physically impossible; at most impossible for useful trip lengths)?

Topic                      Mean           % pessim.  Median disposition
1.  Autopilot Cars         1.4 (125 yr)       4      likely within 50 years
15. Real Robots            2.2 (800 yr)      10      likely within 500 years
13. Fusion Power           2.4 (1300 yr)      8      likely within 500 years
10. Lunar Colony           3.2               18      likely within 5000 years
16. Cloaking Devices       3.5               32      likely within 5000 years
20. 200 Year Lifetime      3.3               16      maybe within 5000 years
11. Martian Colony         3.4               22      probably eventually (>5000 yr)
12. Terraforming           4.1               40      probably eventually (>5000 yr)
18. Alien Dialog           4.2               42      probably eventually (>5000 yr)
19. Alien Visit            4.3               50      on the fence
2.  Jetpack                4.1               64      unlikely ever
14. Synthesized Food       4.2               52      unlikely ever
8.  Roving Astrophysics    4.6               64      unlikely ever
3.  Flying “Cars”          3.9               60      unlikely ever
7.  Visit Black Hole       5.1               74      forget about it
9.  Artificial Gravity     5.3               84      forget about it
4.  Teleportation          5.3               85      forget about it
5.  Warp Drive             5.5               92      forget about it
6.  Wormhole Travel        5.5               96      forget about it
17. Time Travel            5.7               92      forget about it
org:bleg  nibble  data  poll  academia  higher-ed  prediction  speculation  physics  technology  gravity  geoengineering  space  frontier  automation  transportation  energy-resources  org:edu  expert  scitariat  science  no-go  big-picture  wild-ideas  the-world-is-just-atoms  applications  multi  west-hunter  optimism  pessimism  objektbuch  regularizer  s:*  c:**  🔬  poast  ideas  speedometer  whiggish-hegelian  scifi-fantasy  expert-experience  expansionism
march 2017 by nhaliday
Which one would be easier to terraform: Venus or Mars? - Quora
what Greg Cochran was suggesting:
First, alternatives to terraforming. It would be possible to live on Venus in the high atmosphere, in giant floating cities. Using a standard space-station atmospheric mix at about half an earth atmosphere, a pressurized geodesic sphere would float naturally somewhere above the bulk of the clouds of sulfuric acid. Atmospheric motions would likely lead to some rotation about the polar areas, where inhabitants would experience a near-perpetual sunset. Floating cities could be mechanically rotated to provide a day-night cycle for on-board agriculture. The Venusian atmosphere is rich in carbon, oxygen, sulfur, and has trace quantities of water. These could be mined for building materials, while rarer elements could be mined from the surface with long scoops or imported from other places with space-plane shuttles.
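A minimal sketch of why such a habitat floats: breathable air is itself a lifting gas in a CO2 atmosphere at equal pressure and temperature. The 0.5 bar / 300 K conditions and the habitat size below are illustrative assumptions, not figures from the answer:

    R = 8.314            # J/(mol K)
    P = 0.5 * 101_325    # Pa, about half an Earth atmosphere
    T = 300.0            # K, assumed ambient temperature at the float altitude
    M_co2, M_air = 0.044, 0.029   # kg/mol

    rho_co2 = P * M_co2 / (R * T)   # ambient density, ~0.9 kg/m^3
    rho_air = P * M_air / (R * T)   # habitat air density, ~0.6 kg/m^3
    lift = rho_co2 - rho_air        # net buoyant lift per cubic meter of habitat volume

    print(f"lift ~ {lift:.2f} kg per m^3")
    print(f"a 100 m radius sphere lifts ~ {lift * 4/3 * 3.14159 * 100**3 / 1000:.0f} tonnes")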
q-n-a  qra  physics  space  geoengineering  caltech  phys-energy  magnitude  fermi  analysis  data  the-world-is-just-atoms  new-religion  technology  comparison  sky  atmosphere  thermo  gravity  electromag  applications  frontier  west-hunter  wild-ideas  🔬  scitariat  definite-planning  ideas  expansionism
february 2017 by nhaliday
Energy of Seawater Desalination
0.66 kcal / liter is the minimum energy required to desalinate one liter of seawater, regardless of the technology applied to the process.
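That floor follows from the osmotic pressure of seawater: the reversible work to extract a liter of fresh water at vanishing recovery is roughly the osmotic pressure times the volume. A minimal cross-check (the salinity, temperature, and van 't Hoff idealization are assumptions, not from the page):

    R, T = 8.314, 298.0     # J/(mol K), K
    c_salt = 600.0          # mol/m^3 of NaCl, roughly 35 g/kg seawater (assumed)
    i = 2                   # van 't Hoff factor for fully dissociated NaCl

    osmotic_pressure = i * c_salt * R * T        # ~3.0e6 Pa, about 29 atm
    w_min_per_liter = osmotic_pressure * 1e-3    # J to extract 1 L of fresh water at vanishing recovery
    print(f"{w_min_per_liter:.0f} J/L ~ {w_min_per_liter / 4184:.2f} kcal/L")   # ~0.7 kcal/L

Real plants recover a large fraction of each liter processed and run irreversibly, which is why practical energies are several times this floor.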
infrastructure  explanation  physics  thermo  objektbuch  data  lower-bounds  chemistry  the-world-is-just-atoms  geoengineering  phys-energy  nibble  oceans  h2o  applications  estimate  🔬  energy-resources  biophysical-econ  stylized-facts  ideas  fluid  volo-avolo
february 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Logicians on safari
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

the sequel: http://www.scottaaronson.com/blog/?p=153
tcstariat  aaronson  tcs  computation  complexity  aphorism  examples  list  reflection  philosophy  multi  summary  synthesis  hi-order-bits  interdisciplinary  lens  big-picture  survey  nibble  org:bleg  applications  big-surf  s:*  p:whenever  ideas
january 2017 by nhaliday
The infinitesimal model | bioRxiv
Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. We first review the long history of the infinitesimal model in quantitative genetics. Then we provide a definition of the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, ... We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. In each case, by conditioning on the pedigree relating individuals in the population, we incorporate arbitrary selection and population structure. We suppose that we can observe the pedigree up to the present generation, together with all the ancestral traits, and we show, in particular, that the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order M^{-1/2}. Simulations suggest that in particular cases the convergence may be as fast as 1/M.

published version:
The infinitesimal model: Definition, derivation, and implications: https://sci-hub.tw/10.1016/j.tpb.2017.06.001

Commentary: Fisher’s infinitesimal model: A story for the ages: http://www.sciencedirect.com/science/article/pii/S0040580917301508?via%3Dihub
This commentary distinguishes three nested approximations, referred to as “infinitesimal genetics,” “Gaussian descendants” and “Gaussian population,” each plausibly called “the infinitesimal model.” The first and most basic is Fisher’s “infinitesimal” approximation of the underlying genetics – namely, many loci, each making a small contribution to the total variance. As Barton et al. (2017) show, in the limit as the number of loci increases (with enough additivity), the distribution of genotypic values for descendants approaches a multivariate Gaussian, whose variance–covariance structure depends only on the relatedness, not the phenotypes, of the parents (or whether their population experiences selection or other processes such as mutation and migration). Barton et al. (2017) call this rigorously defensible “Gaussian descendants” approximation “the infinitesimal model.” However, it is widely assumed that Fisher’s genetic assumptions yield another Gaussian approximation, in which the distribution of breeding values in a population follows a Gaussian — even if the population is subject to non-Gaussian selection. This third “Gaussian population” approximation, is also described as the “infinitesimal model.” Unlike the “Gaussian descendants” approximation, this third approximation cannot be rigorously justified, except in a weak-selection limit, even for a purely additive model. Nevertheless, it underlies the two most widely used descriptions of selection-induced changes in trait means and genetic variances, the “breeder’s equation” and the “Bulmer effect.” Future generations may understand why the “infinitesimal model” provides such useful approximations in the face of epistasis, linkage, linkage disequilibrium and strong selection.
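A minimal simulation sketch of the central property (locus count, effect sizes, and family size are illustrative; this is not the paper's derivation): with many additive loci and free recombination, offspring genetic values scatter around the mid-parent value with a segregation variance that does not depend on the parents' own trait values.

    import numpy as np

    rng = np.random.default_rng(0)
    M = 10_000                                     # number of loci
    effects = rng.normal(0, 1 / np.sqrt(M), M)     # small additive effect per locus

    def genome():
        return rng.integers(0, 2, (2, M))          # two haplotypes of 0/1 alleles

    def value(g):
        return float((g.sum(axis=0) * effects).sum())   # additive genetic value

    def gamete(g):                                 # free recombination between loci
        pick = rng.integers(0, 2, M)
        return g[pick, np.arange(M)]

    mom, dad = genome(), genome()
    kids = np.array([value(np.stack([gamete(mom), gamete(dad)])) for _ in range(2000)])
    midparent = 0.5 * (value(mom) + value(dad))
    print(kids.mean() - midparent)   # ~0: children are centred on the mid-parent value
    print(kids.std())                # segregation variance, independent of mom's and dad's own values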
study  exposition  bio  evolution  population-genetics  genetics  methodology  QTL  preprint  models  unit  len:long  nibble  linearity  nonlinearity  concentration-of-measure  limits  applications  🌞  biodet  oscillation  fisher  perturbation  stylized-facts  chart  ideas  article  pop-structure  multi  pdf  piracy  intricacy  map-territory  kinship  distribution  simulation  ground-up  linear-models  applicability-prereqs  bioinformatics
january 2017 by nhaliday
Science Policy | West Hunter
If my 23andme profile revealed that I was the last of the Plantagenets (as some suspect), and therefore rightfully King of the United States and Defender of Mexico, and I asked you for a general view of the right approach to science and technology – where the most promise is, what should be done, etc – what would you say?

genetically personalized medicine: https://westhunt.wordpress.com/2016/12/08/science-policy/#comment-85698
I have no idea how personalized medicine is supposed to work. Suppose that we sequence your entire genome, and then we intend to tailor a therapeutic approach to your genome.

How do we test it? By trying it on a bunch of genetically similar people? The more genetic details we take into account, the smaller that class is. It could easily become so small that it would be difficult to recruit enough people for a reasonable statistical trial. Second, the more details we take into account, the smaller the class that benefits from the whole testing process – which, as far as I can see, is just as expensive as conventional Phase I/II etc. trials.

What am I missing?

Now if you are a forethoughtful trillionaire, sure: you manufacture lots of clones just to test therapies you might someday need, and cost is no object.

I think I can see ways you could make it work tho [edit: what did I mean by this?...damnit]
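A toy illustration of the recruiting problem raised above (the allele frequency and the independence assumption are made up for illustration): the fraction of the population genetically matched to a given patient shrinks roughly geometrically in the number of loci you condition on.

    p = 0.3                                          # assumed minor-allele frequency at each locus
    geno = [(1 - p) ** 2, 2 * p * (1 - p), p ** 2]   # Hardy-Weinberg genotype frequencies
    match_one = sum(f * f for f in geno)             # chance a random person matches you at one locus, ~0.42

    for k in (5, 10, 20, 30):
        print(k, match_one ** k)                     # matched fraction after conditioning on k loci
    # by k = 30 the matched fraction is ~1e-11: no trial can recruit that class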
west-hunter  discussion  politics  government  policy  science  technology  the-world-is-just-atoms  🔬  scitariat  meta:science  proposal  genetics  genomics  medicine  meta:medicine  multi  ideas  counter-revolution  poast  homo-hetero  generalization  scale  antidemos  alt-inst  applications  dimensionality  high-dimension  bioinformatics  no-go  volo-avolo  magnitude  trump  2016-election  questions
december 2016 by nhaliday
Information Processing: Search results for compressed sensing
https://www.unz.com/jthompson/the-hsu-boundary/
http://infoproc.blogspot.com/2017/09/phase-transitions-and-genomic.html
Donoho-Student says:
September 14, 2017 at 8:27 pm GMT • 100 Words

The Donoho-Tanner transition describes the noise-free (h2=1) case, which has a direct analog in the geometry of polytopes.

The n = 30s result from Hsu et al. (specifically the value of the coefficient, 30, when p is the appropriate number of SNPs on an array and h2 = 0.5) is obtained via simulation using actual genome matrices, and is original to them. (There is no simple formula that gives this number.) The D-T transition had only been established in the past for certain classes of matrices, like random matrices with specific distributions. Those results cannot be immediately applied to genomes.

The estimate that s is (order of magnitude) 10k is also a key input.

I think Hsu refers to n = 1 million instead of 30 * 10k = 300k because the effective SNP heritability of IQ might be less than h2 = 0.5 — there is noise in the phenotype measurement, etc.

Donoho-Student says:
September 15, 2017 at 11:27 am GMT • 200 Words

Lasso is a common statistical method but most people who use it are not familiar with the mathematical theorems from compressed sensing. These results give performance guarantees and describe phase transition behavior, but because they are rigorous theorems they only apply to specific classes of sensor matrices, such as simple random matrices. Genomes have correlation structure, so the theorems do not directly apply to the real world case of interest, as is often true.

What the Hsu paper shows is that the exact D-T phase transition appears in the noiseless (h2 = 1) problem using genome matrices, and a smoothed version appears in the problem with realistic h2. These are new results, as is the prediction for how much data is required to cross the boundary. I don’t think most gwas people are familiar with these results. If they did understand the results they would fund/design adequately powered studies capable of solving lots of complex phenotypes, medical conditions as well as IQ, that have significant h2.

Most people who use lasso, as opposed to people who prove theorems, are not even aware of the D-T transition. Even most people who prove theorems have followed the Candes-Tao line of attack (restricted isometry property) and don’t think much about D-T. Although D eventually proved some things about the phase transition using high dimensional geometry, it was initially discovered via simulation using simple random matrices.
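A toy sketch of the sparse-recovery behavior being discussed (synthetic data, sizes far smaller than real genomes, and scikit-learn's LassoCV rather than the exact procedure in the Hsu paper; the 30x multiplier is simply borrowed from the comment above): recovery of the causal variants improves sharply as the sample size grows relative to the sparsity s.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    p, s, h2 = 2000, 20, 0.5                    # SNPs, causal variants, heritability
    beta = np.zeros(p)
    causal = rng.choice(p, s, replace=False)
    beta[causal] = rng.normal(0, 1, s)

    def n_recovered(n):
        X = rng.binomial(2, 0.5, size=(n, p)).astype(float)   # toy genotype matrix
        X = (X - X.mean(0)) / X.std(0)
        g = X @ beta
        y = g + rng.normal(0, np.sqrt(g.var() * (1 - h2) / h2), n)   # add noise to hit h2
        coef = LassoCV(cv=5).fit(X, y).coef_
        return len(set(np.flatnonzero(coef)) & set(causal))

    for n in (5 * s, 30 * s):                   # well below vs. around the n ~ 30s regime
        print(f"n = {n}: recovered {n_recovered(n)} of {s} causal variants")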
hsu  list  stream  genomics  genetics  concept  stats  methodology  scaling-up  scitariat  sparsity  regression  biodet  bioinformatics  norms  nibble  compressed-sensing  applications  search  ideas  multi  albion  behavioral-gen  iq  state-of-art  commentary  explanation  phase-transition  measurement  volo-avolo  regularization  levers  novelty  the-trenches  liner-notes  clarity  random-matrices  innovation  high-dimension  linear-models
november 2016 by nhaliday
Wizard War | West Hunter
Some of his successes were classically thin, as when he correctly analyzed the German two-beam navigation system (Knickebein). He realized that the area of overlap of two beams could be narrow, far narrower than suggested by the Rayleigh criterion.

During the early struggle with the Germans, the “Battle of the Beams”, he personally read all the relevant Enigma messages. They piled up on his desk, but he could almost always pull out the relevant message, since he remembered the date, which typewriter it had been typed on, and the kind of typewriter ribbon or carbon. When asked, he could usually pick out the message in question in seconds. This system was deliberate: Jones believed that the larger the field any one man could cover, the greater the chance of one brain connecting two facts – the classic approach to a ‘thick’ problem, not that anyone seems to know that anymore.

All that information churning in his head produced results, enough so that his bureaucratic rivals concluded that he had some special unshared source of information. They made at least three attempts to infiltrate his Section to locate this great undisclosed source. An officer from Bletchley Park was offered on a part-time basis with that secret objective. After a month or so he was called back, and assured his superiors that there was no trace of anything other than what they already knew. When someone asked ‘Then how does Jones do it? ‘ he replied ‘Well, I suppose, Sir, he thinks!’
west-hunter  books  review  history  stories  problem-solving  frontier  thick-thin  intel  mostly-modern  the-trenches  complex-systems  applications  scitariat  info-dynamics  world-war  theory-practice  intersection-connectedness  quotes  alt-inst  inference  apollonian-dionysian  consilience
november 2016 by nhaliday
Thick and thin | West Hunter
There is a spectrum of problem-solving, ranging from, at one extreme, simplicity and clear chains of logical reasoning (sometimes long chains) to, at the other, building a picture by sifting through a vast mass of evidence of varying quality. I will give some examples. Just the other day, when I was conferring, conversing and otherwise hobnobbing with my fellow physicists, I mentioned high-altitude lightning, sprites and elves and blue jets. I said that you could think of a thundercloud as a vertical dipole, with an electric field that decreased as the cube of altitude, while the breakdown voltage varied with air pressure, which declines exponentially with altitude. At which point the prof I was talking to said “and so the curves must cross!”. That’s how physicists think, and it can be very effective. The amount of information required to solve the problem is not very large. I call this a ‘thin’ problem.

...

In another example at the messy end of the spectrum, Joe Rochefort, running Hypo in the spring of 1942, needed to figure out Japanese plans. He had an ever-growing mass of Japanese radio intercepts, some of which were partially decrypted – say, one word of five, with luck. He had data from radio direction-finding; his people were beginning to be able to recognize particular Japanese radio operators by their ‘fist’. He’d studied in Japan, knew the Japanese well. He had plenty of Navy experience – knew what was possible. I would call this a classic ‘thick’ problem, one in which an analyst needs to deal with an enormous amount of data of varying quality. Being smart is necessary but not sufficient: you also need to know lots of stuff.

...

Nimitz believed Rochefort – who was correct. Because of that, we managed to prevail at Midway, losing one carrier and one destroyer while the Japanese lost four carriers and a heavy cruiser*. As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

The usual explanation of Joe Rochefort’s fall argues that John Redman’s ( head of OP-20-G, the Navy’s main signals intelligence and cryptanalysis group) geographical proximity to Navy headquarters was a key factor in winning the bureaucratic struggle, along with his brother’s influence (Rear Admiral Joseph Redman). That and being a shameless liar.

Personally, I wonder if part of the problem is the great difficulty of explaining the analysis of a thick problem to someone without a similar depth of knowledge. At best, they believe you because you’ve been right in the past. Or, sometimes, once you have developed the answer, there is a ‘thin’ way of confirming your answer – as when Rochefort took Jasper Holmes’s suggestion and had Midway broadcast an uncoded complaint about the failure of their distillation system – soon followed by a Japanese report that ‘AF’ was short of water.

Most problems in the social sciences are ‘thick’, and unfortunately, almost all of the researchers are as well. There are a lot more Redmans than Rocheforts.
west-hunter  thinking  things  science  social-science  rant  problem-solving  innovation  pre-2013  metabuch  frontier  thick-thin  stories  intel  mostly-modern  history  flexibility  rigidity  complex-systems  metameta  s:*  noise-structure  discovery  applications  scitariat  info-dynamics  world-war  analytical-holistic  the-trenches  creative  theory-practice  being-right  management  track-record  alien-character  darwinian  old-anglo  giants  magnitude  intersection-connectedness  knowledge  alt-inst  sky  physics  electromag  oceans  military  statesmen  big-peeps  organizing  communication  fire  inference  apollonian-dionysian  consilience  bio  evolution
november 2016 by nhaliday
Son of low-hanging fruit | West Hunter
You see, you can think of the thunderstorm, after a ground discharge, as a vertical dipole. Its electrical field drops as the cube of altitude. The threshold voltage for atmospheric breakdown is proportional to pressure, while pressure drops exponentially with altitude: and as everyone knows, a negative exponential drops faster than any power.

The curves must cross. Electrical breakdown occurs. Weird lightning, way above the clouds.
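A minimal numerical sketch of that crossing (the charge moment, sea-level breakdown field, and pressure scale height below are illustrative assumed values, not numbers from the post): a field falling as a power of altitude must eventually exceed a threshold falling exponentially.

    import math

    k = 8.988e9            # 1/(4*pi*eps0), N m^2/C^2
    p_moment = 2000e3      # C*m, assumed charge moment of a large positive stroke
    E_bd0 = 3.0e6          # V/m, assumed breakdown field at sea-level pressure
    H = 7.2e3              # m, assumed pressure scale height

    def E_dipole(z):       # on-axis field of a vertical dipole, falls as 1/z^3
        return 2 * k * p_moment / z**3

    def E_breakdown(z):    # breakdown threshold scales with pressure, falls exponentially
        return E_bd0 * math.exp(-z / H)

    for z_km in range(40, 100, 10):
        z = z_km * 1e3
        print(f"{z_km} km: dipole {E_dipole(z):9.1f} V/m, breakdown {E_breakdown(z):9.1f} V/m")
    # with these numbers the curves cross near 75 km: above that, the air breaks down - sprites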

As I said, people reported sprites at least a hundred years ago, and they have probably been observed occasionally since the dawn of time. However, they’re far easier to see if you’re above the clouds – pilots often do.

Pilots also learned not to talk about it, because nobody listened. Military and commercial pilots have to pass periodic medical exams known as ‘flight physicals’, and there was a suspicion that reporting glowing red cephalopods in the sky might interfere with that. Generally, you had to see the things that were officially real (whether they were really real or not), and only those things.

Sprites became real when someone recorded one by accident on a fast camera in 1989. Since then it’s turned into a real subject, full of strangeness: turns out that thunderstorms sometimes generate gamma-rays and even antimatter.
west-hunter  physics  cocktail  stories  history  thick-thin  low-hanging  applications  bounded-cognition  error  epistemic  management  scitariat  info-dynamics  ideas  discovery  the-trenches  alt-inst  trivia  theory-practice  is-ought  being-right  magnitude  intersection-connectedness  sky  electromag  fire  inference  apollonian-dionysian  consilience
november 2016 by nhaliday