nhaliday + methodology   489

When to use margin vs padding in CSS - Stack Overflow
TL;DR: By default I use margin everywhere, except when I have a border or background and want to increase the space inside that visible box.

To me, the biggest difference between padding and margin is that vertical margins auto-collapse, and padding doesn't.

https://stackoverflow.com/questions/5958699/difference-between-margin-and-padding
One key thing that is missing in the answers here:

Top/Bottom margins are collapsible.

So if you have a 20px margin at the bottom of an element and a 30px margin at the top of the next element, the margin between the two elements will be 30px rather than 50px. This does not apply to left/right margin or padding.
--
Note that there are very specific circumstances in which vertical margins collapse - not just any two vertical margins will do so. Which just makes it all the more confusing (unless you're very familiar with the box model).

[ed.: roughly, separation = padding(A) + padding(B) + max{margin(A), margin(B)}, with any borders sitting between the padding and the margin]
q-n-a  stackex  comparison  explanation  summary  best-practices  form-design  DSL  web  frontend  stylized-facts  methodology  programming  multi 
28 days ago by nhaliday
CppCon 2015: Chandler Carruth "Tuning C++: Benchmarks, and CPUs, and Compilers! Oh My!" - YouTube
- very basics of benchmarking
- Q: why does preemptive reserve speed up push_back by 10x?
- favorite tool is Linux perf
- callgraph profiling
- important option: -fno-omit-frame-pointer (keep frame pointers so perf can walk the call stack)
- perf has nice interface ('a' = "annotate") for reading assembly (good display of branches/jumps)
- A: optimized to no-op
- how to turn off optimizer
- profilers aren't infallible. a lot of the time samples are misattributed to neighboring ops
- fast mod example
- branch prediction hints (#define UNLIKELY(x), __builtin_expect, etc)
video  presentation  c(pp)  pls  programming  unix  heavyweights  cracker-prog  benchmarks  engineering  best-practices  working-stiff  systems  expert-experience  google  llvm  common-case  stories  libraries  measurement  linux  performance  traces  graphs  static-dynamic  ui  assembly  compilers  methodology 
4 weeks ago by nhaliday
CppCon 2014: Chandler Carruth "Efficiency with Algorithms, Performance with Data Structures" - YouTube
- idk how I feel about this
- makes a distinction between efficiency (basically asymptotic complexity, "doing less work") and performance ("doing that work faster"). idiosyncratic terminology but similar to the "two performance aesthetics" described here: https://pinboard.in/u:nhaliday/b:913a284640c5
- some bikeshedding about vector::reserve and references
- "discontiguous data structures are the root of all evil" (cache-locality, don't use linked lists, etc)
- stacks? queues? just use vector. also suggests circular buffers. says std::deque is really bad
- std::map is bad too (for real SWE, not oly-programming). if you want ordered associative container, just binary search in vector
- std::unordered_map is poorly implemented, unfortunately (due to requirement for buckets in API)
- good implementation of hash table uses open addressing and local (linear?) probing
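[ed.: a minimal sketch of that layout, transposed to Python rather than C++: one flat array, open addressing, linear probing. It omits deletion (tombstones) and resizing, so it assumes the table never fills up.]

    class LinearProbingMap:
        def __init__(self, capacity=16):
            self._slots = [None] * capacity  # each slot: None or (key, value)

        def _find_slot(self, key):
            i = hash(key) % len(self._slots)
            # linear probing: on collision, scan adjacent slots until we hit
            # the key or an empty slot (cache-friendly, unlike chained buckets)
            while self._slots[i] is not None and self._slots[i][0] != key:
                i = (i + 1) % len(self._slots)
            return i

        def put(self, key, value):
            self._slots[self._find_slot(key)] = (key, value)

        def get(self, key, default=None):
            slot = self._slots[self._find_slot(key)]
            return slot[1] if slot is not None else default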
video  presentation  performance  nitty-gritty  best-practices  working-stiff  programming  c(pp)  systems  data-structures  algorithms  jvm  pls  metal-to-virtual  stylized-facts  rhetoric  expert-experience  google  llvm  efficiency  time-complexity  mobile  computer-memory  caching  oly-programming  common-case  hashing  multi  energy-resources  methodology  trees 
4 weeks ago by nhaliday
Friends with malefit. The effects of keeping dogs and cats, sustaining animal-related injuries and Toxoplasma infection on health and quality of life | bioRxiv
The main problem of many studies was the autoselection – participants were informed about the aims of the study during recruitment and later likely described their health and wellbeing according to their personal beliefs and wishes, not according to their real status. To avoid this source of bias, we did not mention pets during participant recruitment and hid the pet-related questions among many hundreds of questions in an 80-minute Internet questionnaire. Results of our study performed on a sample of 10,858 subjects showed that liking cats and dogs has a weak positive association with quality of life. However, keeping pets, especially cats, and even more being injured by pets, were strongly negatively associated with many facets of quality of life. Our data also confirmed that infection by the cat parasite Toxoplasma had a very strong negative effect on quality of life, especially on mental health. However, the infection was not responsible for the observed negative effects of keeping pets, as these effects were much stronger in 1,527 Toxoplasma-free subjects than in the whole population. Any cross-sectional study cannot discriminate between a cause and an effect. However, because of the large and still growing popularity of keeping pets, the existence and nature of the reverse pet phenomenon deserve the utmost attention.
study  bio  preprint  wut  psychology  social-psych  nature  regularizer  cost-benefit  emotion  sentiment  poll  methodology  sampling-bias  confounding  happy-sad  intervention  sociology  disease  parasites-microbiome  correlation  contrarianism  branches  increase-decrease  measurement  internet 
11 weeks ago by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
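[ed.: one standard formalization (mine, not from the article): a Pareto survival law P(T > t) = (t/t_0)^(-α) for t ≥ t_0, α > 1, gives E[T - t | T > t] = t/(α - 1), i.e. expected remaining life proportional to current age, with each year survived adding 1/(α - 1) years of life expectancy.]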
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record  ubiquity 
june 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed me to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that you missed when you wrote the code.
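[ed.: for the unfamiliar, the canonical shape of a Hypothesis property test (my example, not from the article) - state an invariant and let the library search for and shrink counterexamples:]

    from hypothesis import given, strategies as st

    def run_length_encode(xs):
        out = []
        for x in xs:
            if out and out[-1][0] == x:
                out[-1] = (x, out[-1][1] + 1)
            else:
                out.append((x, 1))
        return out

    def run_length_decode(pairs):
        return [x for x, n in pairs for _ in range(n)]

    @given(st.lists(st.integers()))
    def test_rle_roundtrip(xs):
        # the property: decode inverts encode on *every* input, not just
        # the handful of cases a human would think to write down
        assert run_length_decode(run_length_encode(xs)) == xs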

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia comes to matter more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-global 
may 2019 by nhaliday
Why is Software Engineering so difficult? - James Miller
basic message: No silver bullet!

most interesting nuggets:
Scale and Complexity
- Windows 7 > 50 million LOC
Expect a staggering number of bugs.

Bugs?
- Well-written C and C++ code contains some 5 to 10 errors per 100 LOC after a clean compile, but before inspection and testing.
- At a 5% rate any 50 MLOC program will start off with some 2.5 million bugs.

Bug removal
- Testing typically exercises only half the code.

Better bug removal?
- There are better ways to do testing that do produce fantastic programs.
- Are we sure about this fact?
* No, it's only an opinion!
* In general Software Engineering has ....
NO FACTS!

So why not do this?
- The costs are unbelievable.
- It’s not unusual for the qualification process to produce a half page of documentation for each line of code.
pdf  slides  engineering  nitty-gritty  programming  best-practices  roots  comparison  cost-benefit  software  systematic-ad-hoc  structure  error  frontier  debugging  checking  formal-methods  context  detail-architecture  intricacy  big-picture  system-design  correctness  scale  scaling-tech  shipping  money  data  stylized-facts  street-fighting  objektbuch  pro-rata  estimate  pessimism  degrees-of-freedom  volo-avolo  no-go  things  thinking  summary  quality  density  methodology 
may 2019 by nhaliday
AFL + QuickCheck = ?
Adventures in fuzzing. Also differences between testing culture in software and hardware.
techtariat  dan-luu  programming  engineering  checking  random  haskell  path-dependence  span-cover  heuristic  libraries  links  tools  devtools  software  hardware  culture  formal-methods  local-global  golang  correctness  methodology 
may 2019 by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.

...

And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), which output a 0 only when both inputs are 1. If the correct output is 0, both inputs should be 1. The input that isn't is in error, an error that is, itself, the output of a NAND gate where at least one input is 0 when it should be 1. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.

How To Debug Any Program: https://www.blinddata.com/blog/how-to-debug-any-program-9
May 8th 2019 by Saketh Are

Start by Questioning Everything

...

When a program is behaving unexpectedly, our attention tends to be drawn first to the most complex portions of the code. However, mistakes can come in all forms. I've personally been guilty of rushing to debug sophisticated portions of my code when the real bug was that I forgot to read in the input file. In the following section, we'll discuss how to reliably focus our attention on the portions of the program that need correction.

Then Question as Little as Possible

Suppose that we have a program and some input on which its behavior doesn’t match our expectations. The goal of debugging is to narrow our focus to as small a section of the program as possible. Once our area of interest is small enough, the value of the incorrect output that is being produced will typically tell us exactly what the bug is.

In order to catch the point at which our program diverges from expected behavior, we must inspect the intermediate state of the program. Suppose that we select some point during execution of the program and print out all values in memory. We can inspect the results manually and decide whether they match our expectations. If they don't, we know for a fact that we can focus on the first half of the program. It either contains a bug, or our expectations of what it should produce were misguided. If the intermediate state does match our expectations, we can focus on the second half of the program. It either contains a bug, or our understanding of what input it expects was incorrect.
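[ed.: the halving procedure is just bisection over program state; a sketch with hypothetical names, assuming checkpoints are ordered, expectations are checkable at each one, and the final checkpoint is known to be bad:]

    def first_bad_checkpoint(num_checkpoints, state_ok):
        """state_ok(i) = 'intermediate state at checkpoint i matches
        expectations'; returns the first checkpoint where it doesn't."""
        lo, hi = 0, num_checkpoints - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if state_ok(mid):
                lo = mid + 1   # divergence happens after mid
            else:
                hi = mid       # divergence at mid or earlier
        return lo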

Question Things Efficiently

For practical purposes, inspecting intermediate state usually doesn't involve a complete memory dump. We'll typically print a small number of variables and check whether they have the properties we expect of them. Verifying the behavior of a section of code involves:

1. Before it runs, inspecting all values in memory that may influence its behavior.
2. Reasoning about the expected behavior of the code.
3. After it runs, inspecting all values in memory that may be modified by the code.

Reasoning about expected behavior is typically the easiest step to perform even in the case of highly complex programs. Practically speaking, it's time-consuming and mentally strenuous to write debug output into your program and to read and decipher the resulting values. It is therefore advantageous to structure your code into functions and sections that pass a relatively small amount of information between themselves, minimizing the number of values you need to inspect.

...

Finding the Right Question to Ask

We’ve assumed so far that we have available a test case on which our program behaves unexpectedly. Sometimes, getting to that point can be half the battle. There are a few different approaches to finding a test case on which our program fails. It is reasonable to attempt them in the following order:

1. Verify correctness on the sample inputs.
2. Test additional small cases generated by hand.
3. Adversarially construct corner cases by hand.
4. Re-read the problem to verify understanding of input constraints.
5. Design large cases by hand and write a program to construct them.
6. Write a generator to construct large random cases and a brute force oracle to verify outputs.
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness  multi  oly  oly-programming  metabuch  neurons  problem-solving  wire-guided  marginal  strategy  tactics  methodology  simplification-normalization 
may 2019 by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.

edit: cf https://pinboard.in/u:nhaliday/b:0a6eb68166e6]
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure  data  objektbuch  speculation  accuracy  density  correctness  estimate  street-fighting  multi  quality  stylized-facts  methodology 
april 2019 by nhaliday
Why read old philosophy? | Meteuphoric
(This story would suggest that in physics students are maybe missing out on learning the styles of thought that produce progress in physics. My guess is that instead they learn them in grad school when they are doing research themselves, by emulating their supervisors, and that the helpfulness of this might partially explain why Nobel prizewinner advisors beget Nobel prizewinner students.)

The story I hear about philosophy—and I actually don’t know how much it is true—is that as bits of philosophy come to have any methodological tools other than ‘think about it’, they break off and become their own sciences. So this would explain philosophy’s lone status in studying old thinkers rather than impersonal methods—philosophy is the lone ur-discipline without impersonal methods but thinking.

This suggests a research project: try summarizing what Aristotle is doing rather than Aristotle’s views. Then write a nice short textbook about it.
ratty  learning  reading  studying  prioritizing  history  letters  philosophy  science  comparison  the-classics  canon  speculation  reflection  big-peeps  iron-age  mediterranean  roots  lens  core-rats  thinking  methodology  grad-school  academia  physics  giants  problem-solving  meta:research  scholar  the-trenches  explanans  crux  metameta  duplication  sociality  innovation  quixotic  meta:reading  classic 
june 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo Naledi, a small-brained homonin identified from recently discovered fossils in South Africa, appears to have hung around way later that you’d expect (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people that suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticized Nicholas Wade, for saying that different races have different dispositions. Wade’s book wasn’t very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby’s nose with a cloth: Chinese and Navajo babies quietly breathe through their mouth, European and African babies fuss and fight.

Then he attacks Watson, for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it’s a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle ages – white-collar job specialization and a high degree of endogamy, were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after having pointed out that race as colloquially used is pretty reasonable, there's no reason pops can't be different, people that said otherwise ( like Lewontin, Gould, Montagu, etc. ) were lying, Aryans conquered Europe and India, while we're tied to the train tracks with scary genetic results coming straight at us. I don't care: he's being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he's supposed to say, or just stop caring. Maybe he already has… I'm pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it was discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don’t know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has been traditionally considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn’t arise in that region ( like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with homo naledi, they probably wouldn’t have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

While work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations, a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers, farmers and herders from the Middle East settled North Africa, Egypt and northeast Africa, while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions back further in time, such as the origins of modern humans – we thought we knew, and now we know we don’t. But that’s progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes oppose him even looking at the question of ancestry ( African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossina couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT ( Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea 
march 2018 by nhaliday
Frontiers | Can We Validate the Results of Twin Studies? A Census-Based Study on the Heritability of Educational Achievement | Genetics
As for most phenotypes, the amount of variance in educational achievement explained by SNPs is lower than the amount of additive genetic variance estimated in twin studies. Twin-based estimates may however be biased because of self-selection and differences in cognitive ability between twins and the rest of the population. Here we compare twin registry based estimates with a census-based heritability estimate, sampling from the same Dutch birth cohort population and using the same standardized measure for educational achievement. Including important covariates (i.e., sex, migration status, school denomination, SES, and group size), we analyzed 893,127 scores from primary school children from the years 2008–2014. For genetic inference, we used pedigree information to construct an additive genetic relationship matrix. Corrected for the covariates, this resulted in an estimate of 85%, which is even higher than based on twin studies using the same cohort and same measure. We therefore conclude that the genetic variance not tagged by SNPs is not an artifact of the twin method itself.
study  biodet  behavioral-gen  iq  psychometrics  psychology  cog-psych  twin-study  methodology  variance-components  state-of-art  🌞  developmental  age-generation  missing-heritability  biases  measurement  sampling-bias  sib-study 
december 2017 by nhaliday
galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies? - Astronomy Stack Exchange
Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.

The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).
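[ed.: a toy version of the procedure in Python (my sketch, assuming scipy): generate fluxes from a known blackbody, then recover the temperature from the ratio of two IR bands by root-finding. The mass then follows from M = F_ν d² / (κ_ν B_ν(T)) once an opacity κ_ν is assumed - the "dust composition" step above.]

    import numpy as np
    from scipy.optimize import brentq

    H, KB, C = 6.626e-34, 1.381e-23, 2.998e8   # Planck, Boltzmann, c (SI)

    def planck_nu(nu, T):
        """Planck spectral radiance B_nu(T)."""
        return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

    def dust_temperature(lam1_um, lam2_um, flux_ratio):
        """Solve B_nu1(T)/B_nu2(T) = F_nu1/F_nu2 for T, given two IR bands."""
        nu1, nu2 = C / (lam1_um * 1e-6), C / (lam2_um * 1e-6)
        f = lambda T: planck_nu(nu1, T) / planck_nu(nu2, T) - flux_ratio
        return brentq(f, 3.0, 300.0)   # bracket plausible dust temperatures

    # sanity check: a 25 K cloud observed at 100 and 250 microns
    r = planck_nu(C / 100e-6, 25.0) / planck_nu(C / 250e-6, 25.0)
    print(dust_temperature(100.0, 250.0, r))   # recovers ~25.0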
nibble  q-n-a  overflow  space  measurement  measure  estimate  physics  electromag  visuo  methodology 
december 2017 by nhaliday
How do you measure the mass of a star? (Beginner) - Curious About Astronomy? Ask an Astronomer
Measuring the mass of stars in binary systems is easy. Binary systems are sets of two or more stars in orbit about each other. By measuring the size of the orbit, the stars' orbital speeds, and their orbital periods, we can determine exactly what the masses of the stars are. We can take that knowledge and then apply it to similar stars not in multiple systems.
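[ed.: the arithmetic is Kepler's third law; in astronomer's units (semi-major axis in AU, period in years) the total system mass comes out directly in solar masses:]

    def total_mass_msun(a_au, period_yr):
        # M1 + M2 = a^3 / P^2 (Kepler III in units of AU, years, solar masses)
        return a_au**3 / period_yr**2

    # e.g. Sirius A+B: a ≈ 19.8 AU, P ≈ 50.1 yr
    print(total_mass_msun(19.8, 50.1))   # ≈ 3.1 solar masses total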

We also can easily measure the luminosity and temperature of any star. A plot of luminosity versus temperature for a set of stars is called a Hertzsprung-Russell (H-R) diagram, and it turns out that most stars lie along a thin band in this diagram known as the Main Sequence. Stars arrange themselves by mass on the Main Sequence, with massive stars being hotter and brighter than their small-mass brethren. If a star falls on the Main Sequence, we therefore immediately know its mass.

In addition to these methods, we also have an excellent understanding of how stars work. Our models of stellar structure are excellent predictors of the properties and evolution of stars. As it turns out, the mass of a star determines its life history from day 1, for all times thereafter, not only when the star is on the Main Sequence. So actually, the position of a star on the H-R diagram is a good indicator of its mass, regardless of whether it's on the Main Sequence or not.
nibble  q-n-a  org:junk  org:edu  popsci  space  physics  electromag  measurement  mechanics  gravity  cycles  oscillation  temperature  visuo  plots  correlation  metrics  explanation  measure  methodology 
december 2017 by nhaliday
Land, history or modernization? Explaining ethnic fractionalization: Ethnic and Racial Studies: Vol 38, No 2
Ethnic fractionalization (EF) is frequently used as an explanatory tool in models of economic development, civil war and public goods provision. However, if EF is endogenous to political and economic change, its utility for further research diminishes. This turns out not to be the case. This paper provides the first comprehensive model of EF as a dependent variable.
study  polisci  sociology  political-econ  economics  broad-econ  diversity  putnam-like  race  concept  conceptual-vocab  definition  realness  eric-kaufmann  roots  database  dataset  robust  endogenous-exogenous  causation  anthropology  cultural-dynamics  tribalism  methodology  world  developing-world  🎩  things  metrics  intricacy  microfoundations 
december 2017 by nhaliday
microeconomics - Partial vs. general equilibrium - Economics Stack Exchange
The main difference between partial and general equilibrium models is, that partial equilibrium models assume that what happens on the market one wants to analyze has no effect on other markets.
q-n-a  stackex  explanation  jargon  comparison  concept  models  economics  micro  macro  equilibrium  supply-demand  markets  methodology  competition 
november 2017 by nhaliday
Use and Interpretation of LD Score Regression
LD Score regression distinguishes confounding from polygenicity in genome-wide association studies: https://sci-hub.bz/10.1038/ng.3211
- Po-Ru Loh, Nick Patterson, et al.

https://www.biorxiv.org/content/biorxiv/early/2014/02/21/002931.full.pdf

Both polygenicity (i.e. many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield inflated distributions of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from bias and true signal from polygenicity. We have developed an approach that quantifies the contributions of each by examining the relationship between test statistics and linkage disequilibrium (LD). We term this approach LD Score regression. LD Score regression provides an upper bound on the contribution of confounding bias to the observed inflation in test statistics and can be used to estimate a more powerful correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size.
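[ed.: the headline regression, as I recall the paper's notation (N = sample size, M = number of SNPs, ℓ_j = LD score of SNP j, a = per-sample confounding contribution):

    E[χ²_j | ℓ_j] = (N h² / M) ℓ_j + N a + 1

so the slope on LD score estimates heritability, while an intercept above 1 flags confounding/stratification rather than polygenic signal.]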

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n3/extref/ng.3211-S1.pdf

An atlas of genetic correlations across human diseases and traits: https://sci-hub.bz/10.1038/ng.3406

https://www.biorxiv.org/content/early/2015/01/27/014498.full.pdf

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n11/extref/ng.3406-S1.pdf

https://github.com/bulik/ldsc
ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. ldsc also computes LD Scores.
nibble  pdf  slides  talks  bio  biodet  genetics  genomics  GWAS  genetic-correlation  correlation  methodology  bioinformatics  concept  levers  🌞  tutorial  explanation  pop-structure  gene-drift  ideas  multi  study  org:nat  article  repo  software  tools  libraries  stats  hypothesis-testing  biases  confounding  gotchas  QTL  simulation  survey  preprint  population-genetics 
november 2017 by nhaliday
Ancient Admixture in Human History
- Patterson, Reich et al., 2012
Population mixture is an important process in biology. We present a suite of methods for learning about population mixtures, implemented in a software package called ADMIXTOOLS, that support formal tests for whether mixture occurred and make it possible to infer proportions and dates of mixture. We also describe the development of a new single nucleotide polymorphism (SNP) array consisting of 629,433 sites with clearly documented ascertainment that was specifically designed for population genetic analyses and that we genotyped in 934 individuals from 53 diverse populations. To illustrate the methods, we give a number of examples that provide new insights about the history of human admixture. The most striking finding is a clear signal of admixture into northern Europe, with one ancestral population related to present-day Basques and Sardinians and the other related to present-day populations of northeast Asia and the Americas. This likely reflects a history of admixture between Neolithic migrants and the indigenous Mesolithic population of Europe, consistent with recent analyses of ancient bones from Sweden and the sequencing of the genome of the Tyrolean “Iceman.”
nibble  pdf  study  article  methodology  bio  sapiens  genetics  genomics  population-genetics  migration  gene-flow  software  trees  concept  history  antiquity  europe  roots  gavisti  🌞  bioinformatics  metrics  hypothesis-testing  levers  ideas  libraries  tools  pop-structure 
november 2017 by nhaliday
SEXUAL DIMORPHISM, SEXUAL SELECTION, AND ADAPTATION IN POLYGENIC CHARACTERS - Lande - 1980 - Evolution - Wiley Online Library
https://twitter.com/gcochran99/status/970758341990367232
https://archive.is/mcKvr
Lol, that's nothing, my biology teacher in high school told me sex differences couldn't evolve since all of us inherit 50% of genes from parents of both sexes. Being a raucous hispanic kid I burst out laughing, she was not pleased
--
Sex differences actually evolve more slowly because of that: something like 80 times more slowly.
...
Doesn't have that number, but in the same ballpark.

Sexual Dimorphism, Sexual Selection, And Adaptation In Polygenic Characters

Russell Lande

https://twitter.com/gcochran99/status/999189778867208193
https://archive.is/AR8FY
I believe it, because sex differences [ in cases where the trait is not sex-limited ] evolve far more slowly than other things, on the order of 100 times more slowly. Lande 1980: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1558-5646.1980.tb04817.x

The deep past has a big vote in such cases.
...
as for the extent that women were voluntarily choosing mates 20k years ago, or 100k years ago - I surely don't know.

other time mentioned: https://pinboard.in/u:nhaliday/b:3a7c5b42dd50
study  article  bio  biodet  gender  gender-diff  evolution  genetics  population-genetics  methodology  nibble  sex  🌞  todo  pdf  piracy  marginal  comparison  pro-rata  data  multi  twitter  social  discussion  backup  west-hunter  scitariat  farmers-and-foragers  sexuality  evopsych  EEA 
november 2017 by nhaliday
Global Evidence on Economic Preferences
- Benjamin Enke et al

This paper studies the global variation in economic preferences. For this purpose, we present the Global Preference Survey (GPS), an experimentally validated survey dataset of time preference, risk preference, positive and negative reciprocity, altruism, and trust from 80,000 individuals in 76 countries. The data reveal substantial heterogeneity in preferences across countries, but even larger within-country heterogeneity. Across individuals, preferences vary with age, gender, and cognitive ability, yet these relationships appear partly country specific. At the country level, the data reveal correlations between preferences and bio-geographic and cultural variables such as agricultural suitability, language structure, and religion. Variation in preferences is also correlated with economic outcomes and behaviors. Within countries and subnational regions, preferences are linked to individual savings decisions, labor market choices, and prosocial behaviors. Across countries, preferences vary with aggregate outcomes ranging from per capita income, to entrepreneurial activities, to the frequency of armed conflicts.

...

This paper explores these questions by making use of the core features of the GPS: (i) coverage of 76 countries that represent approximately 90 percent of the world population; (ii) representative population samples within each country for a total of 80,000 respondents, (iii) measures designed to capture time preference, risk preference, altruism, positive reciprocity, negative reciprocity, and trust, based on an ex ante experimental validation procedure (Falk et al., 2016) as well as pre-tests in culturally heterogeneous countries, (iv) standardized elicitation and translation techniques through the pre-existing infrastructure of a global polling institute, Gallup. Upon publication, the data will be made publicly available online. The data on individual preferences are complemented by a comprehensive set of covariates provided by the Gallup World Poll 2012.

...

The GPS preference measures are based on twelve survey items, which were selected in an initial survey validation study (see Falk et al., 2016, for details). The validation procedure involved conducting multiple incentivized choice experiments for each preference, and testing the relative abilities of a wide range of different question wordings and formats to predict behavior in these choice experiments. The particular items used to construct the GPS preference measures were selected based on optimal performance out of menus of alternative items (for details see Falk et al., 2016). Experiments provide a valuable benchmark for selecting survey items, because they can approximate the ideal choice situations, specified in economic theory, in which individuals make choices in controlled decision contexts. Experimental measures are very costly, however, to implement in a globally representative sample, whereas survey measures are much less costly.⁴ Selecting survey measures that can stand in for incentivized revealed preference measures leverages the strengths of both approaches.

The Preference Survey Module: A Validated Instrument for Measuring Risk, Time, and Social Preferences: http://ftp.iza.org/dp9674.pdf

Table 1: Survey items of the GPS

Figure 1: World maps of patience, risk taking, and positive reciprocity.
Figure 2: World maps of negative reciprocity, altruism, and trust.

Figure 3: Gender coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting gender coefficients as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.

Figure 4: Cognitive ability coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting coefficients on subjective math skills as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.

Figure 5: Age profiles by OECD membership.

Table 6: Pairwise correlations between preferences and geographic and cultural variables

Figure 10: Distribution of preferences at individual level.
Figure 11: Distribution of preferences at country level.

interesting digression:
D Discussion of Measurement Error and Within- versus Between-Country Variation
study  dataset  data  database  let-me-see  economics  growth-econ  broad-econ  microfoundations  anthropology  cultural-dynamics  culture  psychology  behavioral-econ  values  🎩  pdf  piracy  world  spearhead  general-survey  poll  group-level  within-group  variance-components  🌞  correlation  demographics  age-generation  gender  iq  cooperate-defect  time-preference  temperance  labor  wealth  wealth-of-nations  entrepreneurialism  outcome-risk  altruism  trust  patience  developing-world  maps  visualization  n-factor  things  phalanges  personality  regression  gender-diff  pop-diff  geography  usa  canada  anglo  europe  the-great-west-whale  nordic  anglosphere  MENA  africa  china  asia  sinosphere  latin-america  self-report  hive-mind  GT-101  realness  long-short-run  endo-exo  signal-noise  communism  japan  korea  methodology  measurement  org:ngo  white-paper  endogenous-exogenous  within-without  hari-seldon 
october 2017 by nhaliday
Karl Pearson and the Chi-squared Test
Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seem to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value*, and then he correctly gives the upper tail area of a χ²₁₂ beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
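[ed.: easy to check with modern tools (assuming scipy):

    from scipy.stats import chi2
    print(chi2.sf(43.87, df=12))   # upper tail area, ~1.6e-05

consistent with Pearson's 0.000016.]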
nibble  papers  acm  stats  hypothesis-testing  methodology  history  mostly-modern  pre-ww2  old-anglo  giants  science  the-trenches  stories  multi  q-n-a  overflow  explanation  summary  innovation  discovery  distribution  degrees-of-freedom  limits 
october 2017 by nhaliday
Section 10 Chi-squared goodness-of-fit test.
- pf that chi-squared statistic for Pearson's test (multinomial goodness-of-fit) actually has chi-squared distribution asymptotically
- the gotcha: terms Z_j in sum aren't independent
- solution:
- compute the covariance matrix of the terms to be E[Z_iZ_j] = -sqrt(p_ip_j) for i ≠ j
- note that an equivalent way of sampling the Z_j is to take a random standard Gaussian and project onto the plane orthogonal to (sqrt(p_1), sqrt(p_2), ..., sqrt(p_r))
- that is equivalent to just sampling a Gaussian w/ 1 less dimension (hence df=r-1)
QED
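[ed.: the covariance step written out, in standard notation: with multinomial counts N_1, ..., N_r out of n trials and Z_j = (N_j - n·p_j)/sqrt(n·p_j),

    Cov(Z_i, Z_j) = δ_ij - sqrt(p_i·p_j),  i.e.  Σ = I - vvᵀ,  v = (sqrt(p_1), ..., sqrt(p_r))ᵀ

and since ‖v‖ = 1, Σ is the orthogonal projection onto the hyperplane orthogonal to v, which has rank r - 1; hence Σ_j Z_j² is asymptotically χ² with r - 1 degrees of freedom.]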
pdf  nibble  lecture-notes  mit  stats  hypothesis-testing  acm  probability  methodology  proofs  iidness  distribution  limits  identity  direction  lifts-projections 
october 2017 by nhaliday
self study - Looking for a good and complete probability and statistics book - Cross Validated
I never had the opportunity to attend a stats course from a math faculty. I am looking for a probability theory and statistics book that is complete and self-sufficient. By complete I mean that it contains all the proofs and does not just state results.
nibble  q-n-a  overflow  data-science  stats  methodology  books  recommendations  list  top-n  confluence  proofs  rigor  reference  accretion 
october 2017 by nhaliday
Why are children in the same family so different from one another? - PubMed - NCBI
- Plomin et al

The article has three goals: (1) To describe quantitative genetic methods and research that lead to the conclusion that nonshared environment is responsible for most environmental variation relevant to psychological development, (2) to discuss specific nonshared environmental influences that have been studied to date, and (3) to consider relationships between nonshared environmental influences and behavioral differences between children in the same family. The reason for presenting this article in BBS is to draw attention to the far-reaching implications of finding that psychologically relevant environmental influences make children in a family different from, not similar to, one another.
study  essay  article  survey  spearhead  psychology  social-psych  biodet  behavioral-gen  🌞  methodology  environmental-effects  signal-noise  systematic-ad-hoc  composition-decomposition  pdf  piracy  volo-avolo  developmental  iq  cog-psych  variance-components  GxE  nonlinearity  twin-study  personality  sib-study 
october 2017 by nhaliday
 judgement  julia  justice  jvm  kinship  knowledge  korea  krugman  kumbaya-kult  labor  language  large-factor  larry-summers  latent-variables  latin-america  lattice  law  leadership  learning  learning-theory  lecture-notes  left-wing  legacy  legibility  len:long  len:short  lens  lesswrong  let-me-see  letters  levers  leviathan  lexical  libraries  life-history  lifts-projections  limits  linear-algebra  linear-models  linear-programming  linearity  liner-notes  linguistics  links  linux  list  literature  lived-experience  llvm  local-global  lol  long-short-run  long-term  longevity  longform  longitudinal  low-hanging  lower-bounds  machiavelli  machine-learning  macro  madisonian  magnitude  malaise  male-variability  malthus  management  managerial-state  manifolds  map-territory  maps  marginal  marginal-rev  market-failure  market-power  markets  martial  martingale  matching  math  math.CA  math.CO  math.DS  math.FA  math.NT  mathtariat  matrix-factorization  meaningness  measure  measurement  mechanics  media  medicine  medieval  mediterranean  MENA  mena4  mendel-randomization  mental-math  meta-analysis  meta:medicine  meta:prediction  meta:reading  meta:research  meta:rhetoric  meta:science  meta:war  metabolic  metabuch  metal-to-virtual  metameta  methodology  metric-space  metrics  michael-nielsen  micro  microfoundations  microsoft  migrant-crisis  migration  military  minimalism  minimum-viable  missing-heritability  mit  ML-MAP-E  mobile  mobility  model-class  model-organism  model-selection  models  modernity  mokyr-allen-mccloskey  moloch  moments  monetary-fiscal  money  monte-carlo  mooc  morality  mostly-modern  motivation  move-fast-(and-break-things)  mrtz  multi  multiplicative  music  mutation  mystic  myth  n-factor  nascent-state  nationalism-globalism  natural-experiment  nature  near-far  network-structure  neuro  neuro-nitgrit  neurons  new-religion  news  nibble  nihil  nitty-gritty  nl-and-so-can-you  nlp  no-go  noahpinion  noble-lie  noblesse-oblige  noise-structure  nonlinearity  nonparametric  nordic  norms  north-weingast-like  northeast  nostalgia  novelty  null-result  numerics  nutrition  nyc  obesity  objective-measure  objektbuch  observer-report  ocaml-sml  occam  occident  oceans  ocr  old-anglo  oly  oly-programming  online-learning  oop  open-closed  operational  opioids  opsec  optimate  optimism  optimization  order-disorder  orders  ORFE  org:anglo  org:biz  org:bleg  org:bv  org:com  org:data  org:davos  org:econlib  org:edge  org:edu  org:fin  org:foreign  org:gov  org:health  org:inst  org:junk  org:lite  org:local  org:mag  org:mat  org:med  org:nat  org:ngo  org:popup  org:rec  org:sci  org:theos  organizing  orient  orwellian  os  oscillation  oss  osx  outcome-risk  outliers  overflow  oxbridge  p:someday  PAC  papers  parable  paradox  parallax  parametric  parasites-microbiome  parenting  pareto  parsimony  paternal-age  path-dependence  patho-altruism  patience  paul-romer  paying-rent  pdf  peace-violence  pennsylvania  people  percolation  performance  personal-finance  personality  perturbation  pessimism  phalanges  phase-transition  philosophy  phys-energy  physics  pic  pigeonhole-markov  piketty  pinker  piracy  planning  plots  pls  plt  poast  podcast  poetry  polanyi-marx  polarization  policy  polis  polisci  political-econ  politics  poll  pop-diff  pop-structure  popsci  population  population-genetics  positivity  postmortem  power  power-law  pragmatic  pre-2013  pre-ww2  prediction  prediction-markets  
preference-falsification  prejudice  prepping  preprint  presentation  prioritizing  priors-posteriors  privacy  pro-rata  probability  problem-solving  prof  programming  project  proofs  propaganda  properties  property-rights  proposal  protestant-catholic  protocol-metadata  prudence  pseudoE  psych-architecture  psychiatry  psycho-atoms  psychology  psychometrics  public-goodish  public-health  publishing  putnam-like  python  q-n-a  qra  QTL  quality  quantitative-qualitative  quantum  questions  quixotic  quiz  quora  quotes  r-lang  race  rand-approx  random  random-matrices  random-networks  randy-ayndy  ranking  rant  rat-pack  rationality  ratty  reading  realness  realpolitik  reason  recent-selection  recommendations  recruiting  red-queen  reddit  redistribution  reduction  reference  reflection  regression  regression-to-mean  regularization  regularizer  regulation  reinforcement  relativity  religion  rent-seeking  replication  repo  research  research-program  retention  retrofit  review  revolution  rhetoric  right-wing  rigor  rindermann-thompson  risk  robust  rock  roots  rot  russia  rust  s-factor  s:*  s:**  s:***  s:null  saas  safety  sampling  sampling-bias  sanctity-degradation  sapiens  scala  scale  scaling-tech  scaling-up  schelling  scholar  sci-comp  science  science-anxiety  scifi-fantasy  scitariat  search  securities  security  selection  self-interest  self-report  selfish-gene  sensitivity  sentiment  sequential  series  sex  sexuality  shakespeare  shalizi  shift  shipping  short-circuit  sib-study  signal-noise  signaling  signum  similarity  simler  simplex  simplification-normalization  simulation  singularity  sinosphere  skeleton  skunkworks  sky  sleuthin  slides  slippery-slope  smoothness  social  social-capital  social-choice  social-norms  social-psych  social-science  social-structure  sociality  society  sociology  software  solid-study  solzhenitsyn  space  span-cover  sparsity  spatial  speaking  spearhead  speculation  speed  speedometer  spock  sports  spreading  ssc  stackex  stagnation  stanford  stat-mech  stat-power  state  state-of-art  statesmen  static-dynamic  stats  status  steel-man  stereotypes  stochastic-processes  stock-flow  stories  strategy  straussian  stream  street-fighting  stress  structure  study  studying  stylized-facts  sublinear  success  sulla  summary  summer-2014  supply-demand  survey  sv  symmetry  synchrony  synthesis  system-design  systematic-ad-hoc  systems  tactics  tails  tainter  talks  taxes  tcstariat  teaching  tech  tech-infrastructure  technocracy  technology  techtariat  telos-atelos  temperance  temperature  tetlock  the-basilisk  the-bones  the-classics  the-great-west-whale  the-south  the-trenches  the-watchers  the-world-is-just-atoms  theory-of-mind  theory-practice  theos  thermo  thick-thin  thiel  things  thinking  threat-modeling  tidbits  time  time-complexity  time-preference  time-series  tip-of-tongue  todo  toolkit  tools  top-n  topology  toxo-gondii  toxoplasmosis  traces  track-record  trade  tradecraft  tradeoffs  tradition  transportation  trees  trends  tribalism  tricks  trivia  troll  trust  truth  tumblr  tutorial  tutoring  tv  twin-study  twitter  types  ubiquity  ui  unaffiliated  uncertainty  unintended-consequences  unit  universalism-particularism  unix  unsupervised  urban  urban-rural  us-them  usa  vaclav-smil  values  vampire-squid  variance-components  vcs  video  virginia-DC  visual-understanding  visualization  visuo  vitality  volo-avolo  vulgar  
walter-scheidel  war  washington  water  waves  wealth  wealth-of-nations  web  webapp  weird  welfare-state  west-hunter  westminster  whiggish-hegelian  white-paper  whole-partial-many  wiki  wild-ideas  winner-take-all  winter-2016  winter-2017  wire-guided  wisdom  within-group  within-without  wonkish  workflow  working-stiff  world  world-war  wormholes  worrydream  worse-is-better/the-right-thing  writing  wut  X-not-about-Y  xenobio  yak-shaving  yoga  yvain  zeitgeist  zero-positive-sum  zooming  🌞  🎓  🎩  🐸  👳  👽  🔬  🖥  🤖 
