nhaliday + applications   100

Two Performance Aesthetics: Never Miss a Frame and Do Almost Nothing - Tristan Hume
I’ve noticed when I think about performance nowadays that I think in terms of two different aesthetics. One aesthetic, which I’ll call Never Miss a Frame, comes from the world of game development and is focused on writing code that has good worst-case performance by making good use of the hardware. The other aesthetic, which I’ll call Do Almost Nothing, comes from a more academic world and is focused on algorithmically minimizing the work that needs to be done to the extent that there’s barely any work left, paying attention to performance at all scales.

[ed.: Neither of these exactly matches TCS performance PoV but latter is closer (the focus on diffs is kinda weird).]

...

Never Miss a Frame

In game development the most important performance criterion is that your game doesn’t miss frame deadlines. You have a target frame rate, and if you miss the deadline for the screen to draw a new frame your users will notice the jank. This leads to focusing on the worst-case scenario and often having fixed maximum limits for various quantities. This property can also be important in areas other than game development, like other graphical applications, real-time audio, safety-critical systems, and many embedded systems. A similar dynamic occurs in distributed systems: when one server needs to query 100 others and combine the results, you’ll wait for the slowest of the 100 every time, so speeding up some of them doesn’t make the query faster, and queries occasionally taking longer (e.g. because of garbage collection) will impact almost every request!
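That fan-out effect is easy to see in a quick simulation. The numbers below are made up for illustration (a 1% chance of a 500 ms pause per backend, 10 ms otherwise), not taken from the article:

```python
import random

def fanout_latency(n_servers=100, p_slow=0.01, fast_ms=10, slow_ms=500, trials=10_000):
    """Average latency of a query that must wait for the slowest of n_servers backends."""
    total = 0
    for _ in range(trials):
        # each backend is independently and occasionally slow (e.g. a GC pause)
        worst = max(slow_ms if random.random() < p_slow else fast_ms
                    for _ in range(n_servers))
        total += worst
    return total / trials
```

Even though each backend is slow only 1% of the time, the chance that at least one of 100 is slow is 1 − 0.99¹⁰⁰ ≈ 63%, so the average fan-out query spends most of its time near the worst case.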

...

In this kind of domain you’ll often run into situations where in the worst case you can’t avoid processing a huge number of things. This means you need to focus your effort on making the best use of the hardware by writing code at a low level and paying attention to properties like cache size and memory bandwidth.

Projects with inviolable deadlines need to adjust different factors than speed if the code runs too slow. For example a game might decrease the size of a level or use a more efficient but less pretty rendering technique.

Aesthetically: Data should be tightly packed, fixed size, and linear. Transcoding data to and from different formats is wasteful. Strings and their variable lengths and inefficient operations must be avoided. Only use tools that allow you to work at a low level, even if they’re annoying, because that’s the only way you can avoid piles of fixed costs making everything slow. Understand the machine and what your code does to it.
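A minimal sketch of this style in Python terms: preallocated, packed, fixed-size arrays updated in one linear pass. The names and limits are illustrative, and in practice this aesthetic lives in C/C++ where the contiguous layout is what makes caches and prefetchers happy:

```python
from array import array

# Struct-of-arrays layout: one packed, contiguous buffer of 32-bit floats per
# field, with a fixed maximum entity count decided up front (no dynamic growth).
MAX_ENTITIES = 1024
xs  = array('f', [0.0] * MAX_ENTITIES)  # positions
vxs = array('f', [0.0] * MAX_ENTITIES)  # velocities

def step(n_live, dt):
    """Advance positions for the first n_live entities with a single linear sweep."""
    for i in range(n_live):
        xs[i] += vxs[i] * dt
```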

Personally I identify this aesthetic most with Jonathan Blow. He has a very strong personality and I’ve watched enough videos of him that I find imagining “What would Jonathan Blow say?” a good way to tap into this aesthetic. My favourite articles about designs following this aesthetic are on the Our Machinery Blog.

...

Do Almost Nothing

Sometimes, it’s important to be as fast as you can in all cases and not just orient around one deadline. The most common case is when you simply have to do something that’s going to take an amount of time noticeable to a human, and if you can make that time shorter in some situations that’s great. Alternatively each operation could be fast, but you may run a server that runs tons of them and you’ll save on server costs if you can decrease the load of some requests. Another important case is when you care about power use, for example your text editor not rapidly draining a laptop’s battery; in this case you want to do the least work you possibly can.

A key technique for this approach is to never recompute something from scratch when it’s possible to re-use or patch an old result. This often involves caching: keeping a store of recent results in case the same computation is requested again.
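A minimal sketch of the caching idea, using Python's standard memoization decorator. The `line_widths` function is a made-up stand-in for an expensive computation such as text layout:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)        # keep a store of recent results for repeat requests
def line_widths(text):
    """Stand-in for an expensive computation (e.g. measuring lines for layout)."""
    return tuple(len(line) for line in text.splitlines())
```

The second call with the same input is a dictionary hit rather than a recomputation; `line_widths.cache_info()` reports the hit/miss counts.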

The ultimate realization of this aesthetic is for the entire system to deal only in differences between the new state and the previous state, updating data structures with only the newly needed data and discarding data that’s no longer needed. This way each part of the system does almost no work because ideally the difference from the previous state is very small.
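A toy sketch of this difference-based style: patch a maintained statistic in O(1) per edit instead of recomputing it from scratch. The `WordCount` class is hypothetical, purely for illustration:

```python
class WordCount:
    """Maintain a document statistic by applying diffs rather than recomputing."""
    def __init__(self, lines):
        self.counts = [len(line.split()) for line in lines]
        self.total = sum(self.counts)

    def edit_line(self, i, new_line):
        new = len(new_line.split())
        self.total += new - self.counts[i]   # O(1) patch for an O(1)-sized change
        self.counts[i] = new
```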

Aesthetically: Data must be in whatever structure scales best for the way it is accessed, lots of trees and hash maps. Computations are graphs of inputs and results so we can use all our favourite graph algorithms to optimize them! Designing optimal systems is hard so you should use whatever tools you can to make it easier, any fixed cost they incur will be made negligible when you optimize away all the work they need to do.

Personally I identify this aesthetic most with my friend Raph Levien and his articles about the design of the Xi text editor, although Raph also appreciates the other aesthetic and taps into it himself sometimes.

...

_I’m conflating the axes of deadline-oriented vs time-oriented and low-level vs algorithmic optimization, but part of my point is that while they are different, I think these axes are highly correlated._

...

Text Editors

Sublime Text is a text editor that mostly follows the Never Miss a Frame approach. ...

The Xi Editor is designed to solve this problem: it’s built from the ground up to grapple with the fact that some operations, especially those interacting with slow compilers written by other people, can’t be made instantaneous. It does this using a fancy asynchronous plugin model and lots of fancy data structures.
...

...

Compilers

Jonathan Blow’s Jai compiler is clearly designed with the Never Miss a Frame aesthetic. It’s written to be extremely fast at every level, and the language doesn’t have any features that necessarily lead to slow compiles. The LLVM backend wasn’t fast enough to hit his performance goals, so he wrote an alternative backend that directly writes x86 code to a buffer without doing any optimizations. Jai compiles something like 100,000 lines of code per second. Designing both the language and compiler to not do anything slow led to clean-build performance 10-100x faster than other commonly used compilers. Jai is so fast that its clean builds are faster than most compilers’ incremental builds on common project sizes, due to limitations in how incremental the other compilers are.

However, Jai’s compiler is still O(n) in the codebase size where incremental compilers can be O(n) in the size of the change. Some compilers like the work-in-progress rust-analyzer and I think also Roslyn for C# take a different approach and focus incredibly hard on making everything fully incremental. For small changes (the common case) this can let them beat Jai and respond in milliseconds on arbitrarily large projects, even if they’re slower on clean builds.

Conclusion
I find both of these aesthetics appealing, but I also think there’s real trade-offs that incentivize leaning one way or the other for a given project. I think people having different performance aesthetics, often because one aesthetic really is better suited for their domain, is the source of a lot of online arguments about making fast systems. The different aesthetics also require different bases of knowledge to pursue, like knowledge of data-oriented programming in C++ vs knowledge of abstractions for incrementality like Adapton, so different people may find that one approach seems way easier and better for them than the other.

I try to choose how to dedicate my effort to each aesthetic on a per-project basis by trying to predict how effort in each direction would help. For some projects I know that if I code them efficiently they will always hit the performance deadline; for others I know a way to drastically cut down on work by investing time in algorithmic design; some projects need a mix of both. Personally I find it helpful to think of different programmers where I have a good sense of their aesthetic and ask myself how they’d solve the problem. One reason I like Rust is that it can do low-level optimization and also has a good ecosystem and type system for algorithmic optimization, so I can more easily mix approaches in one project. In the end the best approach depends not only on the task, but on your skills or the skills of the team working on it, as well as how much time you have to work towards an ambitious design that may take longer for a better result.
techtariat  reflection  things  comparison  lens  programming  engineering  cracker-prog  carmack  games  performance  big-picture  system-design  constraint-satisfaction  metrics  telos-atelos  distributed  incentives  concurrency  cost-benefit  tradeoffs  systems  metal-to-virtual  latency-throughput  abstraction  marginal  caching  editors  strings  ideas  ui  common-case  examples  applications  flux-stasis  nitty-gritty  ends-means  thinking  summary  correlation  degrees-of-freedom  c(pp)  rust  interface  integration-extension  aesthetics  interface-compatibility  efficiency  adversarial 
9 weeks ago by nhaliday
Shuffling - Wikipedia
The Gilbert–Shannon–Reeds model provides a mathematical model of the random outcomes of riffling, that has been shown experimentally to be a good fit to human shuffling[2] and that forms the basis for a recommendation that card decks be riffled seven times in order to randomize them thoroughly.[3] Later, mathematicians Lloyd M. Trefethen and Lloyd N. Trefethen authored a paper using a tweaked version of the Gilbert-Shannon-Reeds model showing that the minimum number of riffles for total randomization could also be 5, if the method of defining randomness is changed.[4][5]
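The GSR model is simple enough to simulate directly: cut the deck at a Binomial(n, 1/2) point, then repeatedly drop a card from whichever half "wins" a coin weighted by the cards remaining in it. A sketch:

```python
import random

def gsr_riffle(deck):
    """One riffle under the Gilbert–Shannon–Reeds model: cut at Binomial(n, 1/2),
    then drop cards with probability proportional to the remaining packet sizes."""
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))   # Binomial(n, 1/2) cut point
    left, right = deck[:cut], deck[cut:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        a, b = len(left) - i, len(right) - j
        if random.random() < a / (a + b):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

def riffle_shuffle(deck, times=7):
    """Seven GSR riffles is the classic recommendation for a 52-card deck."""
    for _ in range(times):
        deck = gsr_riffle(deck)
    return deck
```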
nibble  tidbits  trivia  cocktail  wiki  reference  games  howto  random  models  math  applications  probability  math.CO  mixing  markov  sampling  best-practices  acm 
11 weeks ago by nhaliday
GPS and Relativity
The nominal GPS configuration consists of a network of 24 satellites in high orbits around the Earth, but up to 30 or so satellites may be on station at any given time. Each satellite in the GPS constellation orbits at an altitude of about 20,000 km from the ground, and has an orbital speed of about 14,000 km/hour (the orbital period is roughly 12 hours - contrary to popular belief, GPS satellites are not in geosynchronous or geostationary orbits). The satellite orbits are distributed so that at least 4 satellites are always visible from any point on the Earth at any given instant (with up to 12 visible at one time). Each satellite carries with it an atomic clock that "ticks" with a nominal accuracy of 1 nanosecond (1 billionth of a second). A GPS receiver in an airplane determines its current position and course by comparing the time signals it receives from the currently visible GPS satellites (usually 6 to 12) and trilaterating on the known positions of each satellite[1]. The precision achieved is remarkable: even a simple hand-held GPS receiver can determine your absolute position on the surface of the Earth to within 5 to 10 meters in only a few seconds. A GPS receiver in a car can give accurate readings of position, speed, and course in real-time!

More sophisticated techniques, like Differential GPS (DGPS) and Real-Time Kinematic (RTK) methods, deliver centimeter-level positions with a few minutes of measurement. Such methods allow use of GPS and related satellite navigation system data to be used for high-precision surveying, autonomous driving, and other applications requiring greater real-time position accuracy than can be achieved with standard GPS receivers.

To achieve this level of precision, the clock ticks from the GPS satellites must be known to an accuracy of 20-30 nanoseconds. However, because the satellites are constantly moving relative to observers on the Earth, effects predicted by the Special and General theories of Relativity must be taken into account to achieve the desired 20-30 nanosecond accuracy.

Because an observer on the ground sees the satellites in motion relative to them, Special Relativity predicts that we should see their clocks ticking more slowly (see the Special Relativity lecture). Special Relativity predicts that the on-board atomic clocks on the satellites should fall behind clocks on the ground by about 7 microseconds per day because of the slower ticking rate due to the time dilation effect of their relative motion [2].

Further, the satellites are in orbits high above the Earth, where the curvature of spacetime due to the Earth's mass is less than it is at the Earth's surface. A prediction of General Relativity is that clocks closer to a massive object will seem to tick more slowly than those located further away (see the Black Holes lecture). As such, when viewed from the surface of the Earth, the clocks on the satellites appear to be ticking faster than identical clocks on the ground. A calculation using General Relativity predicts that the clocks in each GPS satellite should get ahead of ground-based clocks by 45 microseconds per day.

The combination of these two relativistic effects means that the clocks on board each satellite should tick faster than identical clocks on the ground by about 38 microseconds per day (45 − 7 = 38)! This sounds small, but the high precision required of the GPS system demands nanosecond accuracy, and 38 microseconds is 38,000 nanoseconds. If these effects were not properly taken into account, a navigational fix based on the GPS constellation would be false after only 2 minutes, and errors in global positions would continue to accumulate at a rate of about 10 kilometers each day! The whole system would be utterly worthless for navigation in a very short time.
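The figures quoted above can be sanity-checked with a little arithmetic (a back-of-envelope sketch, not a full relativistic model):

```python
C = 299_792_458                      # speed of light, m/s
SECONDS_PER_DAY = 86_400

# Special relativity: first-order time dilation at orbital speed ~14,000 km/h.
v = 14_000_000 / 3600                # ≈ 3.9 km/s
sr_lag = (v ** 2 / (2 * C ** 2)) * SECONDS_PER_DAY   # ≈ 7.3e-6 s/day slow

# Net effect from the text: 45 µs/day fast (GR) minus 7 µs/day slow (SR).
net_seconds = (45 - 7) * 1e-6        # 38 µs/day

# A timing error of t seconds becomes a ranging error of roughly c * t:
range_error_km = net_seconds * C / 1000   # ≈ 11.4 km/day, i.e. "about 10 km"
```

The v²/2c² term reproduces the quoted 7 µs/day, and 38 µs of light travel time per day comes to roughly 11 km, matching the "about 10 kilometers each day" figure.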
nibble  org:junk  org:edu  explanation  trivia  cocktail  physics  gravity  relativity  applications  time  synchrony  speed  space  navigation  technology 
november 2017 by nhaliday
Biopolitics | West Hunter
I have said before that no currently popular ideology acknowledges well-established results of behavioral genetics, quantitative genetics, or psychometrics. Or evolutionary psychology.

What if some ideology or political tradition did? what could they do? What problems could they solve, what capabilities would they have?

Various past societies knew a few things along these lines. They knew that there were significant physical and behavioral differences between the sexes, which is forbidden knowledge in modern academia. Some knew that close inbreeding had negative consequences, which knowledge is on its way to the forbidden zone as I speak. Some cultures with wide enough geographical experience had realistic notions of average cognitive differences between populations. Some people had a rough idea about regression to the mean [ in dynasties], and the Ottomans came up with a highly unpleasant solution – the law of fratricide. The Romans, during the Principate, dealt with the same problem through imperial adoption. The Chinese exam system is in part aimed at the same problem.

...

At least some past societies avoided the social patterns leading to the nasty dysgenic trends we are experiencing today, but for the most part that is due to the anthropic principle: if they’d done something else you wouldn’t be reading this. Also to between-group competition: if you fuck yourself up when others don’t, you may well be replaced. Which is still the case.

If you were designing an ideology from scratch you could make use of all of these facts – not that thinking about genetics and selection hands you the solution to every problem, but you’d have more strings to your bow. And, off the top of your head, you’d understand certain trends that are, for our current ruling classes, behind the mountains of Estcarp: invisible and unthinkable, That Which Must Not Be Named.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96613
“The closest…s the sort of libertarianism promulgated by Charles Murray”
Not very close..
A government that was fully aware of the implications and possibilities of human genetics, one that had the usual kind of state goals [ like persistence and increased power] , would not necessarily be particularly libertarian.

https://westhunt.wordpress.com/2017/10/08/biopolitics/#comment-96797
And giving tax breaks to college-educated liberals to have babies wouldn’t appeal much to Trump voters, methinks.

It might be worth making a reasonably comprehensive list of the facts and preferences that a good liberal is supposed to embrace and seem to believe. You would have to be fairly quick about it, before it changes. Then you could evaluate the social impact of having more of them.

Rise and Fall: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/
Every society selects for something: generally it looks as if the direction of selection pressure is more or less an accident. Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this. I mean, they would have known how, if they’d wanted to, just as they knew how to select for destriers, coursers, and palfreys. It was still possible to know such things in the Middle Ages, because Harvard did not yet exist.

A rising empire needs quality human capital, which implies at minimum that the budding imperial society must not have been strongly dysgenic. At least not in the beginning. But winning changes many things, possibly including selective pressures. Imagine an empire with substantial urbanization, one in which talented guys routinely end up living in cities – cities that were demographic sinks. That might change things. Or try to imagine an empire in which survival challenges are greatly reduced, at least for elites, so that people have nothing to keep their minds off their minds, and end up worshiping Magna Mater. Imagine an empire that conquers a rival with interesting local pathogens and brings some of them home. Or one that uses up a lot of its manpower conquering less-talented subjects and importing masses of those losers into the imperial heartland.

If any of those scenarios happened, they might eventually result in imperial decline – decline due to decreased biological capital.

Right now this is speculation. If we knew enough about the GWAS hits for intelligence, and had enough ancient DNA, we might be able to observe that rise and fall, just as we see dysgenic trends in contemporary populations. But that won’t happen for a long time. Say, a year.

hmm: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100350
“Although nations and empires in the past could have decided to select men for bravery or intelligence, there’s not much sign that anyone actually did this.”

Maybe the Chinese imperial examination could effectively have been a selection for intelligence.
--
Nope. I’ve modelled it: the fraction of winners is far too small to have much effect, while there were likely fitness costs from the arduous preparation. Moreover, there’s a recent paper [Detecting polygenic adaptation in admixture graphs] that looks for indications of when selection for IQ hit northeast Asia: quite a while ago. Obvious though, since Japan has similar scores without ever having had that kind of examination system.
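The kind of back-of-envelope model alluded to here can be sketched with the breeder's equation, R = h²S. Every number below is an illustrative assumption for the sake of the arithmetic, not Cochran's actual model:

```python
# Toy version of the claim: the exam selected far too few people to move the mean.
p_pass        = 1e-4   # fraction of a generation passing the imperial exam
delta_sd      = 2.5    # passers' mean trait advantage, in standard deviations
extra_fitness = 0.5    # passers leave 50% more descendants (a generous assumption)
h2            = 0.6    # assumed narrow-sense heritability of the trait

# Selection differential: shift in the parental mean caused by the winners'
# extra reproduction (small-p approximation), then the breeder's equation.
S = p_pass * extra_fitness * delta_sd
response_per_generation = h2 * S     # ≈ 7.5e-5 SD per generation
```

Even compounded over ~50 generations, that response comes to well under 0.01 SD, which is invisible next to ordinary drift and environmental change.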

decline of British Empire and utility of different components: https://westhunt.wordpress.com/2018/01/18/rise-and-fall/#comment-100390
Once upon a time, India was a money maker for the British, mainly because they appropriated Bengali tax revenue, rather than through trade. The rest of the Empire was not worth much: it didn’t materially boost British per-capita income or military potential. Silesia was worth more to Germany, conferred more war-making power, than Africa was to Britain.
--
If you get even a little local opposition, a colony won’t pay for itself. I seem to remember that there was some, in Palestine.
--
Angels from on high paid for the Boer War.

You know, someone in the 50’s asked for the numbers – how much various colonies cost and how much they paid.

Turned out that no one had ever asked. The Colonial Office had no idea.
west-hunter  scitariat  discussion  ideas  politics  polisci  sociology  anthropology  cultural-dynamics  social-structure  social-science  evopsych  agri-mindset  pop-diff  kinship  regression-to-mean  anthropic  selection  group-selection  impact  gender  gender-diff  conquest-empire  MENA  history  iron-age  mediterranean  the-classics  china  asia  sinosphere  technocracy  scifi-fantasy  aphorism  alt-inst  recruiting  applications  medieval  early-modern  institutions  broad-econ  biodet  behavioral-gen  gnon  civilization  tradition  leviathan  elite  competition  cocktail  🌞  insight  sapiens  arbitrage  paying-rent  realness  kumbaya-kult  war  slippery-slope  unintended-consequences  deep-materialism  inequality  malthus  dysgenics  multi  murray  poast  speculation  randy-ayndy  authoritarianism  time-preference  patience  long-short-run  leadership  coalitions  ideology  rant  westminster  truth  flux-stasis  new-religion  identity-politics  left-wing  counter-revolution  fertility  signaling  status  darwinian  orwellian  ability-competence  organizing 
october 2017 by nhaliday
Anatomy of an SQL Index: What is an SQL Index
“An index makes the query fast” is the most basic explanation of an index I have ever seen. Although it describes the most important aspect of an index very well, it is—unfortunately—not sufficient for this book. This chapter describes the index structure in a less superficial way but doesn't dive too deeply into details. It provides just enough insight for one to understand the SQL performance aspects discussed throughout the book.

B-trees, etc.
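A toy sketch of the core idea: an index is a separately maintained, sorted structure of keys plus row pointers, so a lookup takes O(log n) steps instead of a full table scan. Real databases use B-trees so each node fills one disk page; the flat sorted list and made-up table below stand in for the leaf level:

```python
import bisect

# "Heap" order: rows sit wherever they were inserted, unsorted.
table = [{"id": i, "name": f"user{i}"} for i in (7, 3, 9, 1, 5)]

# The index: sorted keys for searching, plus key -> row-position pointers.
index_keys = sorted(row["id"] for row in table)
row_of = {row["id"]: pos for pos, row in enumerate(table)}

def indexed_lookup(key):
    i = bisect.bisect_left(index_keys, key)       # logarithmic search
    if i < len(index_keys) and index_keys[i] == key:
        return table[row_of[key]]                 # follow the "row pointer"
    return None
```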
techtariat  tutorial  explanation  performance  programming  engineering  dbs  trees  data-structures  nibble  caching  metal-to-virtual  abstraction  applications 
september 2017 by nhaliday
Controversial New Theory Suggests Life Wasn't a Fluke of Biology—It Was Physics | WIRED
First Support for a Physics Theory of Life: https://www.quantamagazine.org/first-support-for-a-physics-theory-of-life-20170726/
Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.
news  org:mag  profile  popsci  bio  xenobio  deep-materialism  roots  eden  physics  interdisciplinary  applications  ideas  thermo  complex-systems  cybernetics  entropy-like  order-disorder  arrows  phys-energy  emergent  empirical  org:sci  org:inst  nibble  chemistry  fixed-point  wild-ideas  multi 
august 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study  org:nat  papers  machine-learning  chemistry  measurement  volo-avolo  lower-bounds  analysis  realness  speedometer  nibble  🔬  applications  frontier  state-of-art  no-go  accuracy  interdisciplinary 
july 2017 by nhaliday
Genomic analysis of family data reveals additional genetic effects on intelligence and personality | bioRxiv
methodology:
Using Extended Genealogy to Estimate Components of Heritability for 23 Quantitative and Dichotomous Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1003520
Pedigree- and SNP-Associated Genetics and Recent Environment are the Major Contributors to Anthropometric and Cardiometabolic Trait Variation: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1005804

Missing Heritability – found?: https://westhunt.wordpress.com/2017/02/09/missing-heritability-found/
There is an interesting new paper out on genetics and IQ. The claim is that they have found the missing heritability – in rare variants, generally different in each family.

Some of the variants, the ones we find with GWAS, are fairly common and fitness-neutral: the variant that slightly increases IQ confers the same fitness (or very close to the same) as the one that slightly decreases IQ – presumably because of other effects it has. If this weren’t the case, it would be impossible for both of the variants to remain common.

The rare variants that affect IQ will generally decrease IQ – and since pleiotropy is the norm, usually they’ll be deleterious in other ways as well. Genetic load.

Happy families are all alike; every unhappy family is unhappy in its own way.: https://westhunt.wordpress.com/2017/06/06/happy-families-are-all-alike-every-unhappy-family-is-unhappy-in-its-own-way/
It now looks as if the majority of the genetic variance in IQ is the product of mutational load, and the same may be true for many psychological traits. To the extent this is the case, a lot of human psychological variation must be non-adaptive. Maybe some personality variation fulfills an evolutionary function, but a lot does not. Being a dumb asshole may be a bug, rather than a feature. More generally, this kind of analysis could show us whether particular low-fitness syndromes, like autism, were ever strategies – I suspect not.

It’s bad news for medicine and psychiatry, though. It would suggest that what we call a given type of mental illness, like schizophrenia, is really a grab-bag of many different syndromes. The ultimate causes are extremely varied: at best, there may be shared intermediate causal factors. Not good news for drug development: individualized medicine is a threat, not a promise.

see also comment at: https://pinboard.in/u:nhaliday/b:a6ab4034b0d0

https://www.reddit.com/r/slatestarcodex/comments/5sldfa/genomic_analysis_of_family_data_reveals/
So the big implication here is that it's better than I had dared hope - like Yang/Visscher/Hsu have argued, the old GCTA estimate of ~0.3 is indeed a rather loose lower bound on additive genetic variants, and the rest of the missing heritability is just the relatively uncommon additive variants (ie <1% frequency), and so, like Yang demonstrated with height, using much more comprehensive imputation of SNP scores or using whole-genomes will be able to explain almost all of the genetic contribution. In other words, with better imputation panels, we can go back and squeeze out better polygenic scores from old GWASes, new GWASes will be able to reach and break the 0.3 upper bound, and eventually we can feasibly predict 0.5-0.8. Between the expanding sample sizes from biobanks, the still-falling price of whole genomes, the gradual development of better regression methods (informative priors, biological annotation information, networks, genetic correlations), and better imputation, the future of GWAS polygenic scores is bright. Which obviously will be extremely helpful for embryo selection/genome synthesis.

The argument that this supports mutation-selection balance is weaker but plausible. I hope that it's true, because if that's why there is so much genetic variation in intelligence, then that strongly encourages genetic engineering - there is no good reason or Chesterton fence for intelligence variants being non-fixed, it's just that evolution is too slow to purge the constantly-accumulating bad variants. And we can do better.
https://rubenarslan.github.io/generation_scotland_pedigree_gcta/

The surprising implications of familial association in disease risk: https://arxiv.org/abs/1707.00014
https://spottedtoad.wordpress.com/2017/06/09/personalized-medicine-wont-work-but-race-based-medicine-probably-will/
As Greg Cochran has pointed out, this probably isn’t going to work. There are a few genes like BRCA1 (which makes you more likely to get breast and ovarian cancer) that we can detect and might affect treatment, but an awful lot of disease turns out to be just the result of random chance and deleterious mutation. This means that you can’t easily tailor disease treatment to people’s genes, because everybody is fucked up in their own special way. If Johnny is schizophrenic because of 100 random errors in the genes that code for his neurons, and Jack is schizophrenic because of 100 other random errors, there’s very little way to test a drug to work for either of them- they’re the only one in the world, most likely, with that specific pattern of errors. This is, presumably why the incidence of schizophrenia and autism rises in populations when dads get older- more random errors in sperm formation mean more random errors in the baby’s genes, and more things that go wrong down the line.

The looming crisis in human genetics: http://www.economist.com/node/14742737
Some awkward news ahead
- Geoffrey Miller

Human geneticists have reached a private crisis of conscience, and it will become public knowledge in 2010. The crisis has depressing health implications and alarming political ones. In a nutshell: the new genetics will reveal much less than hoped about how to cure disease, and much more than feared about human evolution and inequality, including genetic differences between classes, ethnicities and races.

2009!
study  preprint  bio  biodet  behavioral-gen  GWAS  missing-heritability  QTL  🌞  scaling-up  replication  iq  education  spearhead  sib-study  multi  west-hunter  scitariat  genetic-load  mutation  medicine  meta:medicine  stylized-facts  ratty  unaffiliated  commentary  rhetoric  wonkish  genetics  genomics  race  pop-structure  poast  population-genetics  psychiatry  aphorism  homo-hetero  generalization  scale  state-of-art  ssc  reddit  social  summary  gwern  methodology  personality  britain  anglo  enhancement  roots  s:*  2017  data  visualization  database  let-me-see  bioinformatics  news  org:rec  org:anglo  org:biz  track-record  prediction  identity-politics  pop-diff  recent-selection  westminster  inequality  egalitarianism-hierarchy  high-dimension  applications  dimensionality  ideas  no-go  volo-avolo  magnitude  variance-components  GCTA  tradeoffs  counter-revolution  org:mat  dysgenics  paternal-age  distribution  chart  abortion-contraception-embryo 
june 2017 by nhaliday
Chinese innovations | West Hunter
I’m interested in hearing about significant innovations out of contemporary China. Good ones. Ideas, inventions, devices, dreams. Throw in Outer China (Taiwan, Hong Kong, Singapore).

super nationalistic dude ("IC") in the comments section (wish his videos had subtitles):
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91378
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91382
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91292
https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91315

on the carrier-killer missiles: https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91280
You could take out a carrier task force with a nuke 60 years ago.
--
Then the other side can nuke something and point to the sunk carrier group saying “they started first”.

Hypersonic anti-ship cruise missiles, or the mysterious anti-ship ballistic missiles China has, avoid that.
--
They avoid that because the laws of physics no longer allow radar.

https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91340
I was thinking about the period in which the United States was experiencing rapid industrial growth, on its way to becoming the most powerful industrial nation. At first not much science, but lots and lots of technological innovation. I’m not aware of a corresponding efflorescence of innovative Chinese technology today, but then I don’t know everything: so I asked.

I’m still not aware of it. So maybe the answer is ‘no’.

hmm: https://westhunt.wordpress.com/2017/05/10/chinese-innovations/#comment-91389
I would say that a lot of the most intelligent faction is being siphoned over into government work, and thus not focused on technological innovation. We should expect to see societal/political innovation rather than technological if my thesis is true.

There’s some evidence of that.
west-hunter  scitariat  discussion  china  asia  sinosphere  technology  innovation  frontier  novelty  🔬  discovery  cultural-dynamics  geoengineering  applications  ideas  list  zeitgeist  trends  the-bones  expansionism  diaspora  scale  wealth-of-nations  science  orient  chart  great-powers  questions  speedometer  n-factor  microfoundations  the-world-is-just-atoms  the-trenches  dirty-hands  arms  oceans  sky  government  leviathan  alt-inst  authoritarianism  antidemos  multi  poast  nuclear  regularizer  hmm  track-record  survey  institutions  corruption  military 
may 2017 by nhaliday
Talks
Quantum Supremacy: Office of Science and Technology Policy QIS Forum, Eisenhower Executive Office Building, White House Complex, Washington DC, October 18, 2016. Another version at UTCS Faculty Lunch, October 26, 2016. Another version at UT Austin Physics Colloquium, Austin, TX, November 9, 2016.

Complexity-Theoretic Foundations of Quantum Supremacy Experiments: Quantum Algorithms Workshop, Aspen Center for Physics, Aspen, CO, March 25, 2016

When Exactly Do Quantum Computers Provide A Speedup?: Yale Quantum Institute Seminar, Yale University, New Haven, CT, October 10, 2014. Another version at UT Austin Physics Colloquium, Austin, TX, November 19, 2014; Applied and Interdisciplinary Mathematics Seminar, Northeastern University, Boston, MA, November 25, 2014; Hebrew University Physics Colloquium, Jerusalem, Israel, January 5, 2015; Computer Science Colloquium, Technion, Haifa, Israel, January 8, 2015; Stanford University Physics Colloquium, January 27, 2015
tcstariat  aaronson  tcs  complexity  quantum  quantum-info  talks  list  slides  accretion  algorithms  applications  physics  nibble  frontier  computation  volo-avolo  speedometer  questions 
may 2017 by nhaliday
Overview of current development in electrical energy storage technologies and the application potential in power system operation
- An overview of the state-of-the-art in Electrical Energy Storage (EES) is provided.
- A comprehensive analysis of various EES technologies is carried out.
- An application potential analysis of the reviewed EES technologies is presented.
- The presented synthesis to EES technologies can be used to support future R&D and deployment.

Prospects and Limits of Energy Storage in Batteries: http://pubs.acs.org/doi/abs/10.1021/jz5026273
study  survey  state-of-art  energy-resources  heavy-industry  chemistry  applications  electromag  stock-flow  wonkish  frontier  technology  biophysical-econ  the-world-is-just-atoms  🔬  phys-energy  ideas  speedometer  dirty-hands  multi 
april 2017 by nhaliday
Futuristic Physicists? | Do the Math
interesting comment: https://westhunt.wordpress.com/2014/03/05/outliers/#comment-23087
referring to timelines? or maybe also the jetpack+flying car (doesn't seem physically impossible; at most impossible for useful trip lengths)?

Topic | Mean | % pessim. | Median disposition
1. Autopilot Cars | 1.4 (125 yr) | 4 | likely within 50 years
15. Real Robots | 2.2 (800 yr) | 10 | likely within 500 years
13. Fusion Power | 2.4 (1300 yr) | 8 | likely within 500 years
10. Lunar Colony | 3.2 | 18 | likely within 5000 years
16. Cloaking Devices | 3.5 | 32 | likely within 5000 years
20. 200 Year Lifetime | 3.3 | 16 | maybe within 5000 years
11. Martian Colony | 3.4 | 22 | probably eventually (>5000 yr)
12. Terraforming | 4.1 | 40 | probably eventually (>5000 yr)
18. Alien Dialog | 4.2 | 42 | probably eventually (>5000 yr)
19. Alien Visit | 4.3 | 50 | on the fence
2. Jetpack | 4.1 | 64 | unlikely ever
14. Synthesized Food | 4.2 | 52 | unlikely ever
8. Roving Astrophysics | 4.6 | 64 | unlikely ever
3. Flying “Cars” | 3.9 | 60 | unlikely ever
7. Visit Black Hole | 5.1 | 74 | forget about it
9. Artificial Gravity | 5.3 | 84 | forget about it
4. Teleportation | 5.3 | 85 | forget about it
5. Warp Drive | 5.5 | 92 | forget about it
6. Wormhole Travel | 5.5 | 96 | forget about it
17. Time Travel | 5.7 | 92 | forget about it
org:bleg  nibble  data  poll  academia  higher-ed  prediction  speculation  physics  technology  gravity  geoengineering  space  frontier  automation  transportation  energy-resources  org:edu  expert  scitariat  science  no-go  big-picture  wild-ideas  the-world-is-just-atoms  applications  multi  west-hunter  optimism  pessimism  objektbuch  regularizer  s:*  c:**  🔬  poast  ideas  speedometer  whiggish-hegelian  scifi-fantasy  expert-experience  expansionism 
march 2017 by nhaliday
Which one would be easier to terraform: Venus or Mars? - Quora
what Greg Cochran was suggesting:
First, alternatives to terraforming. It would be possible to live on Venus in the high atmosphere, in giant floating cities. Using a standard space-station atmospheric mix at about half an earth atmosphere, a pressurized geodesic sphere would float naturally somewhere above the bulk of the clouds of sulfuric acid. Atmospheric motions would likely lead to some rotation about the polar areas, where inhabitants would experience a near-perpetual sunset. Floating cities could be mechanically rotated to provide a day-night cycle for on-board agriculture. The Venusian atmosphere is rich in carbon, oxygen, sulfur, and has trace quantities of water. These could be mined for building materials, while rarer elements could be mined from the surface with long scoops or imported from other places with space-plane shuttles.
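The "float naturally" claim is just ideal-gas buoyancy: at a given temperature, gas density scales with pressure times molar mass, and an envelope of ordinary air (~29 g/mol) at half an atmosphere is much less dense than ambient CO2 (~44 g/mol) at one atmosphere. A minimal sketch, where the ~300 K / 1 atm ambient conditions are illustrative assumptions (not from the answer) and structural mass is ignored:

```python
# Ideal-gas buoyancy sketch for a half-atmosphere air-filled sphere in
# Venus's CO2 atmosphere. Ambient conditions (~1 atm, ~300 K, roughly
# the cloud-top regime) are illustrative assumptions; the sphere's own
# structural mass is ignored.
R = 8.314        # gas constant, J/(mol*K)
M_CO2 = 0.044    # molar mass of CO2, kg/mol
M_AIR = 0.029    # molar mass of an N2/O2 mix, kg/mol

def density(pressure_pa, molar_mass_kg, temp_k):
    """Ideal-gas density: rho = P * M / (R * T)."""
    return pressure_pa * molar_mass_kg / (R * temp_k)

ATM = 101325.0
T = 300.0
rho_co2 = density(ATM, M_CO2, T)          # ambient Venusian CO2
rho_air = density(0.5 * ATM, M_AIR, T)    # interior station mix at 0.5 atm
lift_per_m3 = rho_co2 - rho_air           # buoyant lift, kg per m^3 of envelope

print(f"CO2 {rho_co2:.2f} kg/m^3, air {rho_air:.2f} kg/m^3, "
      f"lift {lift_per_m3:.2f} kg/m^3")
```

On these assumed numbers the lift comes out around a kilogram per cubic meter of envelope, comparable to helium's lift on Earth, which is why the excerpt can treat the whole habitable volume as the lifting gas.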
q-n-a  qra  physics  space  geoengineering  caltech  phys-energy  magnitude  fermi  analysis  data  the-world-is-just-atoms  new-religion  technology  comparison  sky  atmosphere  thermo  gravity  electromag  applications  frontier  west-hunter  wild-ideas  🔬  scitariat  definite-planning  ideas  expansionism 
february 2017 by nhaliday
Energy of Seawater Desalination
0.66 kcal / liter is the minimum energy required to desalinate one liter of seawater, regardless of the technology applied to the process.
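That 0.66 kcal/L floor converts directly into the kWh/m^3 units usually quoted for desalination plants. A minimal unit-conversion sketch; the reverse-osmosis comparison figure is an added, commonly cited ballpark, not from the source:

```python
# Convert the quoted thermodynamic minimum for seawater desalination
# (0.66 kcal per liter) into kWh per cubic meter.
KCAL_TO_KJ = 4.184
LITERS_PER_M3 = 1000
KJ_PER_KWH = 3600

min_kj_per_liter = 0.66 * KCAL_TO_KJ                         # ~2.76 kJ/L
min_kwh_per_m3 = min_kj_per_liter * LITERS_PER_M3 / KJ_PER_KWH

print(f"{min_kj_per_liter:.2f} kJ/L = {min_kwh_per_m3:.2f} kWh/m^3")
# Modern reverse-osmosis plants are commonly cited around 3-4 kWh/m^3,
# i.e. a few times above this thermodynamic floor.
```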
infrastructure  explanation  physics  thermo  objektbuch  data  lower-bounds  chemistry  the-world-is-just-atoms  geoengineering  phys-energy  nibble  oceans  h2o  applications  estimate  🔬  energy-resources  biophysical-econ  stylized-facts  ideas  fluid  volo-avolo 
february 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Logicians on safari
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

the sequel: http://www.scottaaronson.com/blog/?p=153
tcstariat  aaronson  tcs  computation  complexity  aphorism  examples  list  reflection  philosophy  multi  summary  synthesis  hi-order-bits  interdisciplinary  lens  big-picture  survey  nibble  org:bleg  applications  big-surf  s:*  p:whenever  ideas  elegance 
january 2017 by nhaliday
The infinitesimal model | bioRxiv
Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. We first review the long history of the infinitesimal model in quantitative genetics. Then we provide a definition of the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, ... We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. In each case, by conditioning on the pedigree relating individuals in the population, we incorporate arbitrary selection and population structure. We suppose that we can observe the pedigree up to the present generation, together with all the ancestral traits, and we show, in particular, that the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order M^{-1/2}. Simulations suggest that in particular cases the convergence may be as fast as 1/M.

published version:
The infinitesimal model: Definition, derivation, and implications: https://sci-hub.tw/10.1016/j.tpb.2017.06.001

Commentary: Fisher’s infinitesimal model: A story for the ages: http://www.sciencedirect.com/science/article/pii/S0040580917301508?via%3Dihub
This commentary distinguishes three nested approximations, referred to as “infinitesimal genetics,” “Gaussian descendants” and “Gaussian population,” each plausibly called “the infinitesimal model.” The first and most basic is Fisher’s “infinitesimal” approximation of the underlying genetics – namely, many loci, each making a small contribution to the total variance. As Barton et al. (2017) show, in the limit as the number of loci increases (with enough additivity), the distribution of genotypic values for descendants approaches a multivariate Gaussian, whose variance–covariance structure depends only on the relatedness, not the phenotypes, of the parents (or whether their population experiences selection or other processes such as mutation and migration). Barton et al. (2017) call this rigorously defensible “Gaussian descendants” approximation “the infinitesimal model.” However, it is widely assumed that Fisher’s genetic assumptions yield another Gaussian approximation, in which the distribution of breeding values in a population follows a Gaussian — even if the population is subject to non-Gaussian selection. This third “Gaussian population” approximation, is also described as the “infinitesimal model.” Unlike the “Gaussian descendants” approximation, this third approximation cannot be rigorously justified, except in a weak-selection limit, even for a purely additive model. Nevertheless, it underlies the two most widely used descriptions of selection-induced changes in trait means and genetic variances, the “breeder’s equation” and the “Bulmer effect.” Future generations may understand why the “infinitesimal model” provides such useful approximations in the face of epistasis, linkage, linkage disequilibrium and strong selection.
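The core claim — offspring genetic values approximately Gaussian around the mid-parent value, with a segregation variance that does not depend on the parental traits — can be illustrated with a toy Mendelian simulation. All parameters below (number of loci, effect sizes, offspring count) are illustrative choices, not from the paper:

```python
# Toy additive model: M diploid loci, alleles coded 0/1, per-locus
# effect scaled by 1/sqrt(M) so the trait variance stays O(1) as M
# grows. Offspring inherit one random allele per locus from each
# parent; their trait values should cluster Gaussian-like around the
# mid-parent value.
import random

random.seed(0)
M = 1000
effect = 1.0 / M ** 0.5

def make_parent():
    # random diploid genotype: a pair of 0/1 alleles at each locus
    return [(random.randint(0, 1), random.randint(0, 1)) for _ in range(M)]

def trait(genotype):
    return sum((a + b) * effect for a, b in genotype)

def offspring(p1, p2):
    # Mendelian segregation: one random allele from each parent per locus
    return [(random.choice(g1), random.choice(g2))
            for g1, g2 in zip(p1, p2)]

p1, p2 = make_parent(), make_parent()
midparent = (trait(p1) + trait(p2)) / 2
kids = [trait(offspring(p1, p2)) for _ in range(2000)]
mean = sum(kids) / len(kids)
var = sum((k - mean) ** 2 for k in kids) / len(kids)
print(f"mid-parent {midparent:.2f}, offspring mean {mean:.2f}, var {var:.3f}")
```

With these scalings the segregation variance comes out near 0.25 (roughly M/2 heterozygous loci per parent, each contributing effect^2/4), and re-running with different parent pairs leaves that variance essentially unchanged, which is the "variance independent of parental traits" property the abstract proves in the M → ∞ limit.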
study  exposition  bio  evolution  population-genetics  genetics  methodology  QTL  preprint  models  unit  len:long  nibble  linearity  nonlinearity  concentration-of-measure  limits  applications  🌞  biodet  oscillation  fisher  perturbation  stylized-facts  chart  ideas  article  pop-structure  multi  pdf  piracy  intricacy  map-territory  kinship  distribution  simulation  ground-up  linear-models  applicability-prereqs  bioinformatics 
january 2017 by nhaliday