nhaliday + metrics   129

Measures of cultural distance - Marginal REVOLUTION
A new paper with many authors — most prominently Joseph Henrich — tries to measure the cultural gaps between different countries.  I am reproducing a few of their results (see pp.36-37 for more), noting that higher numbers represent higher gaps:

...

Overall the numbers show much greater cultural distance of other nations from China than from the United States, a significant and under-discussed problem for China. For instance, the United States is about as culturally close to Hong Kong as China is.

[ed.: Japan is closer to the US than China. Interesting. I'd like to see some data based on something other than self-reported values though.]

the study:
Beyond WEIRD Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259613
We present a new tool that provides a means to measure the psychological and cultural distance between two societies and create a distance scale with any population as the point of comparison. Since psychological data is dominated by samples drawn from the United States or other WEIRD nations, this tool provides a “WEIRD scale” to assist researchers in systematically extending the existing database of psychological phenomena to more diverse and globally representative samples. As the extreme WEIRDness of the literature begins to dissolve, the tool will become more useful for designing, planning, and justifying a wide range of comparative psychological projects. We have made our code available and developed an online application for creating other scales (including the “Sino scale” also presented in this paper). We discuss regional diversity within nations showing the relative homogeneity of the United States. Finally, we use these scales to predict various psychological outcomes.
econotariat  marginal-rev  henrich  commentary  study  summary  list  data  measure  metrics  similarity  culture  cultural-dynamics  sociology  things  world  usa  anglo  anglosphere  china  asia  japan  sinosphere  russia  developing-world  canada  latin-america  MENA  europe  eastern-europe  germanic  comparison  great-powers  thucydides  foreign-policy  the-great-west-whale  generalization  anthropology  within-group  homo-hetero  moments  exploratory  phalanges  the-bones  🎩  🌞  broad-econ  cocktail  n-factor  measurement  expectancy  distribution  self-report  values  expression-survival  uniqueness 
7 weeks ago by nhaliday
Two Performance Aesthetics: Never Miss a Frame and Do Almost Nothing - Tristan Hume
I’ve noticed when I think about performance nowadays that I think in terms of two different aesthetics. One aesthetic, which I’ll call Never Miss a Frame, comes from the world of game development and is focused on writing code that has good worst case performance by making good use of the hardware. The other aesthetic, which I’ll call Do Almost Nothing, comes from a more academic world and is focused on algorithmically minimizing the work that needs to be done to the extent that there’s barely any work left, paying attention to the performance at all scales.

[ed.: Neither of these exactly matches TCS performance PoV but latter is closer (the focus on diffs is kinda weird).]

...

Never Miss a Frame

In game development the most important performance criterion is that your game doesn’t miss frame deadlines. You have a target frame rate, and if you miss the deadline for the screen to draw a new frame your users will notice the jank. This leads to focusing on the worst case scenario and often having fixed maximum limits for various quantities. This property can also be important in areas other than game development, like other graphical applications, real-time audio, safety-critical systems, and many embedded systems. A similar dynamic occurs in distributed systems where one server needs to query 100 others and combine the results: you’ll wait for the slowest of the 100 every time, so speeding up some of them doesn’t make the query faster, and queries occasionally taking longer (e.g. because of garbage collection) will impact almost every request!
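[ed.: a quick sanity check of the fan-out claim; the per-server numbers are assumed for illustration, not from the post:]

```python
# If each of 100 servers independently blows its latency target 1% of the time,
# most fan-out queries still end up waiting on at least one slow server.
p_slow = 0.01      # assumed per-server tail probability
n_servers = 100
p_query_slow = 1 - (1 - p_slow) ** n_servers
print(f"{p_query_slow:.0%}")  # ~63% of queries hit at least one slow response
```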

...

In this kind of domain you’ll often run into situations where in the worst case you can’t avoid processing a huge number of things. This means you need to focus your effort on making the best use of the hardware by writing code at a low level and paying attention to properties like cache size and memory bandwidth.

Projects with inviolable deadlines need to adjust factors other than speed if the code runs too slowly. For example, a game might decrease the size of a level or use a more efficient but less pretty rendering technique.

Aesthetically: Data should be tightly packed, fixed size, and linear. Transcoding data to and from different formats is wasteful. Strings and their variable lengths and inefficient operations must be avoided. Only use tools that allow you to work at a low level, even if they’re annoying, because that’s the only way you can avoid piles of fixed costs making everything slow. Understand the machine and what your code does to it.

Personally I identify this aesthetic most with Jonathan Blow. He has a very strong personality and I’ve watched enough of his videos that I find imagining “What would Jonathan Blow say?” to be a good way to tap into this aesthetic. My favourite articles about designs following this aesthetic are on the Our Machinery Blog.

...

Do Almost Nothing

Sometimes, it’s important to be as fast as you can in all cases and not just orient around one deadline. The most common case is when you simply have to do something that’s going to take an amount of time noticeable to a human, and if you can make that time shorter in some situations that’s great. Alternatively, each operation could be fast but you may run a server that handles tons of them, and you’ll save on server costs if you can decrease the load of some requests. Another important case is when you care about power use, for example your text editor not rapidly draining a laptop’s battery; in this case you want to do the least work you possibly can.

A key technique for this approach is to never recompute something from scratch when it’s possible to re-use or patch an old result. This often involves caching: keeping a store of recent results in case the same computation is requested again.
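[ed.: a minimal sketch of the caching idea in Python; the Fibonacci example is mine, not the author's:]

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # keep a store of results in case the same computation is requested again
def fib(n: int) -> int:
    # without the cache this recursion recomputes subproblems exponentially;
    # with it, each fib(k) is computed exactly once
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075, computed in ~100 cache-miss calls
```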

The ultimate realization of this aesthetic is for the entire system to deal only in differences between the new state and the previous state, updating data structures with only the newly needed data and discarding data that’s no longer needed. This way each part of the system does almost no work because ideally the difference from the previous state is very small.
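[ed.: a toy illustration of the diff-oriented style, assuming a word-count index as the "system state":]

```python
from collections import Counter

index = Counter()  # aggregate state: word -> count across all documents

def apply_diff(removed_text: str, added_text: str) -> None:
    # patch the old result: work is proportional to the size of the edit,
    # not to the size of the whole corpus
    index.subtract(removed_text.split())
    index.update(added_text.split())

apply_diff("", "the quick brown fox")  # initial insertion
apply_diff("quick", "slow")            # a small edit does a small amount of work
print(index["slow"], index["quick"])   # 1 0
```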

Aesthetically: Data must be in whatever structure scales best for the way it is accessed, lots of trees and hash maps. Computations are graphs of inputs and results so we can use all our favourite graph algorithms to optimize them! Designing optimal systems is hard, so you should use whatever tools you can to make it easier; any fixed cost they incur will be made negligible when you optimize away all the work they need to do.

Personally I identify this aesthetic most with my friend Raph Levien and his articles about the design of the Xi text editor, although Raph also appreciates the other aesthetic and taps into it himself sometimes.

...

_I’m conflating the axes of deadline-oriented vs time-oriented and low-level vs algorithmic optimization, but part of my point is that while they are different, I think these axes are highly correlated._

...

Text Editors

Sublime Text is a text editor that mostly follows the Never Miss a Frame approach. ...

The Xi Editor solves this problem by being designed from the ground up to grapple with the fact that some operations, especially those interacting with slow compilers written by other people, can’t be made instantaneous. It does this using a fancy asynchronous plugin model and lots of fancy data structures.
...

...

Compilers

Jonathan Blow’s Jai compiler is clearly designed with the Never Miss a Frame aesthetic. It’s written to be extremely fast at every level, and the language doesn’t have any features that necessarily lead to slow compiles. The LLVM backend wasn’t fast enough to hit his performance goals so he wrote an alternative backend that directly writes x86 code to a buffer without doing any optimizations. Jai compiles something like 100,000 lines of code per second. Designing both the language and compiler to not do anything slow led to clean build performance 10-100x faster than other commonly-used compilers. Jai is so fast that its clean builds are faster than most compilers’ incremental builds on common project sizes, due to limitations in how incremental the other compilers are.

However, Jai’s compiler is still O(n) in the codebase size, whereas incremental compilers can be O(n) in the size of the change. Some compilers, like the work-in-progress rust-analyzer and I think also Roslyn for C#, take a different approach and focus incredibly hard on making everything fully incremental. For small changes (the common case) this can let them beat Jai and respond in milliseconds on arbitrarily large projects, even if they’re slower on clean builds.

Conclusion
I find both of these aesthetics appealing, but I also think there’s real trade-offs that incentivize leaning one way or the other for a given project. I think people having different performance aesthetics, often because one aesthetic really is better suited for their domain, is the source of a lot of online arguments about making fast systems. The different aesthetics also require different bases of knowledge to pursue, like knowledge of data-oriented programming in C++ vs knowledge of abstractions for incrementality like Adapton, so different people may find that one approach seems way easier and better for them than the other.

I try to choose how to dedicate my effort to pursuing each aesthetic on a per-project basis by trying to predict how effort in each direction would help. For some projects I know that if I code them efficiently they will always hit the performance deadline; for others I know a way to drastically cut down on work by investing time in algorithmic design; some projects need a mix of both. Personally I find it helpful to think of different programmers where I have a good sense of their aesthetic and ask myself how they’d solve the problem. One reason I like Rust is that it can do both low-level optimization and also has a good ecosystem and type system for algorithmic optimization, so I can more easily mix approaches in one project. In the end the best approach to follow depends not only on the task, but also on your skills or the skills of the team working on it, as well as how much time you have to work towards an ambitious design that may take longer for a better result.
techtariat  reflection  things  comparison  lens  programming  engineering  cracker-prog  carmack  games  performance  big-picture  system-design  constraint-satisfaction  metrics  telos-atelos  distributed  incentives  concurrency  cost-benefit  tradeoffs  systems  metal-to-virtual  latency-throughput  abstraction  marginal  caching  editors  strings  ideas  ui  common-case  examples  applications  flux-stasis  nitty-gritty  ends-means  thinking  summary  correlation  degrees-of-freedom  c(pp)  rust  interface  integration-extension  aesthetics  interface-compatibility  efficiency  adversarial 
10 weeks ago by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.

...

However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than they were in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formulas. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.

...

A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.

...

Given these numbers, it remains an open question how much taxpayer money is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the benefits of combining a simple, semantic plain-text representation with a flexible and professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed to repeat it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

https://news.ycombinator.com/item?id=20191348
study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
june 2019 by nhaliday
classification - ImageNet: what is top-1 and top-5 error rate? - Cross Validated
Now, in the case of top-1 score, you check if the top class (the one having the highest probability) is the same as the target label.

In the case of top-5 score, you check if the target label is one of your top 5 predictions (the 5 ones with the highest probabilities).
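[ed.: a minimal numpy sketch of the same check, with a toy 3-class example:]

```python
import numpy as np

def top_k_accuracy(probs: np.ndarray, labels: np.ndarray, k: int) -> float:
    # probs: (n_samples, n_classes) predicted probabilities
    # labels: (n_samples,) integer target classes
    topk = np.argsort(probs, axis=1)[:, -k:]       # indices of the k highest probabilities
    hits = (topk == labels[:, None]).any(axis=1)   # is the target among them?
    return hits.mean()

probs = np.array([[0.1, 0.6, 0.3],
                  [0.5, 0.2, 0.3]])
labels = np.array([2, 0])
print(top_k_accuracy(probs, labels, k=1))  # 0.5 -> top-1 error rate 50%
print(top_k_accuracy(probs, labels, k=2))  # 1.0 -> top-2 error rate 0%
```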
nibble  q-n-a  overflow  machine-learning  deep-learning  metrics  comparison  ranking  top-n  classification  computer-vision  benchmarks  dataset  accuracy  error  jargon 
june 2019 by nhaliday
performance - What is the difference between latency, bandwidth and throughput? - Stack Overflow
Latency is the amount of time it takes to travel through the tube.
Bandwidth is how wide the tube is.
The amount of water flow will be your throughput

Vehicle Analogy:

Container travel time from source to destination is latency.
Container size is bandwidth.
Container load is throughput.

--

Note, bandwidth in particular has other common meanings, I've assumed networking because this is stackoverflow but if it was a maths or amateur radio forum I might be talking about something else entirely.
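[ed.: the distinction in numbers; a toy calculation with made-up figures:]

```python
# A link with 100 ms latency and 10 MB/s bandwidth, transferring with a
# 256 KB in-flight window (all numbers assumed for illustration).
latency_s = 0.100        # time for one "container" to travel the tube
bandwidth_Bps = 10e6     # how wide the tube is
window_B = 256e3         # data allowed in flight before waiting for an ack

# Achieved throughput is capped by both the pipe width and window/latency.
throughput_Bps = min(bandwidth_Bps, window_B / latency_s)
print(f"{throughput_Bps / 1e6:.2f} MB/s")  # 2.56 MB/s: latency-bound, not bandwidth-bound
```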
q-n-a  stackex  programming  IEEE  nitty-gritty  definition  jargon  network-structure  metrics  speedometer  time  stock-flow  performance  latency-throughput  amortization-potential 
may 2019 by nhaliday
Frama-C
Frama-C is organized with a plug-in architecture (comparable to that of the Gimp or Eclipse). A common kernel centralizes information and conducts the analysis. Plug-ins interact with each other through interfaces defined by the kernel. This makes for robustness in the development of Frama-C while allowing a wide functionality spectrum.

...

Three heavyweight plug-ins that are used by the other plug-ins:

- Eva (Evolved Value analysis)
This plug-in computes variation domains for variables. It is quite automatic, although the user may guide the analysis in places. It handles a wide spectrum of C constructs. This plug-in uses abstract interpretation techniques.
- Jessie and Wp, two deductive verification plug-ins
These plug-ins are based on weakest precondition computation techniques. They make it possible to prove that C functions satisfy their specification as expressed in ACSL. These proofs are modular: the specifications of the called functions are used to establish the proof without looking at their code.

For browsing unfamiliar code:
- Impact analysis
This plug-in highlights the locations in the source code that are impacted by a modification.
- Scope & Data-flow browsing
This plug-in allows the user to navigate the dataflow of the program, from definition to use or from use to definition.
- Variable occurrence browsing
Also provided as a simple example for new plug-in development, this plug-in allows the user to reach the statements where a given variable is used.
- Metrics calculation
This plug-in allows the user to compute various metrics from the source code.

For code transformation:
- Semantic constant folding
This plug-in makes use of the results of the evolved value analysis plug-in to replace, in the source code, the constant expressions by their values. Because it relies on EVA, it is able to do more of these simplifications than a syntactic analysis would.
- Slicing
This plug-in slices the code according to a user-provided criterion: it creates a copy of the program, but keeps only those parts which are necessary with respect to the given criterion.
- Spare code: remove "spare code", code that does not contribute to the final results of the program.
- E-ACSL: translate annotations into C code for runtime assertion checking.
For verifying functional specifications:

- Aoraï: verify specifications expressed as LTL (Linear Temporal Logic) formulas
Other functionalities documented together with the EVA plug-in can be considered as verifying low-level functional specifications (inputs, outputs, dependencies,…)
For test-case generation:

- PathCrawler automatically finds test-case inputs to ensure coverage of a C function. It can be used for structural unit testing, as a complement to static analysis or to study the feasible execution paths of the function.
For concurrent programs:

- Mthread
This plug-in automatically analyzes concurrent C programs, using the EVA plug-in, taking into account all possible thread interactions. At the end of its execution, the concurrent behavior of each thread is over-approximated, resulting in precise information about shared variables, which mutex protects which part of the code, etc.
Front-end for other languages

- Frama-Clang
This plug-in provides a C++ front-end to Frama-C, based on the clang compiler. It transforms C++ code into a Frama-C AST, which can then be analyzed by the plug-ins above. Note however that it is very experimental and only supports a subset of C++11.
tools  devtools  formal-methods  programming  software  c(pp)  systems  memory-management  ocaml-sml  debugging  checking  rigor  oss  code-dive  graphs  state  metrics  llvm  gallic  cool  worrydream  impact  flux-stasis  correctness  computer-memory  structure  static-dynamic 
may 2019 by nhaliday
Basic Error Rates
This page describes human error rates in a variety of contexts.

Most of the error rates are for mechanical errors. A good general figure for mechanical error rates appears to be about 0.5%.

Of course the denominator differs across studies. However only fairly simple actions are used in the denominator.

The Klemmer and Snyder study shows that much lower error rates are possible--in this case for people whose job consisted almost entirely of data entry.

The error rate for more complex logic errors is about 5%, based primarily on data on other pages, especially the program development page.
org:junk  list  links  objektbuch  data  database  error  accuracy  human-ml  machine-learning  ai  pro-rata  metrics  automation  benchmarks  marginal  nlp  language  density  writing  dataviz  meta:reading  speedometer 
may 2019 by nhaliday
Continuous Code Quality | SonarSource
they have cyclomatic complexity rule
$150/year for dev edition (needed for C++ but not Java/Python)
devtools  software  ruby  saas  programming  python  checking  c(pp)  jvm  structure  intricacy  graphs  golang  scala  metrics  javascript  dotnet  quality  static-dynamic 
may 2019 by nhaliday
its-not-software - steveyegge2
You don't work in the software industry.

...

So what's the software industry, and how do we differ from it?

Well, the software industry is what you learn about in school, and it's what you probably did at your previous company. The software industry produces software that runs on customers' machines — that is, software intended to run on a machine over which you have no control.

So it includes pretty much everything that Microsoft does: Windows and every application you download for it, including your browser.

It also includes everything that runs in the browser, including Flash applications, Java applets, and plug-ins like Adobe's Acrobat Reader. Their deployment model is a little different from the "classic" deployment models, but it's still software that you package up and release to some unknown client box.

...

Servware

Our industry is so different from the software industry, and it's so important to draw a clear distinction, that it needs a new name. I'll call it Servware for now, lacking anything better. Hardware, firmware, software, servware. It fits well enough.

Servware is stuff that lives on your own servers. I call it "stuff" advisedly, since it's more than just software; it includes configuration, monitoring systems, data, documentation, and everything else you've got there, all acting in concert to produce some observable user experience on the other side of a network connection.
techtariat  sv  tech  rhetoric  essay  software  saas  devops  engineering  programming  contrarianism  list  top-n  best-practices  applicability-prereqs  desktop  flux-stasis  homo-hetero  trends  games  thinking  checklists  dbs  models  communication  tutorial  wiki  integration-extension  frameworks  api  whole-partial-many  metrics  retrofit  c(pp)  pls  code-dive  planning  working-stiff  composition-decomposition  libraries  conceptual-vocab  amazon  system-design  cracker-prog  tech-infrastructure  blowhards 
may 2019 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it, and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For example, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Overcoming Bias : The Model to Beat: Status Rank
People often presume that policy can mostly ignore income inequality if key individual outcomes like health or happiness depend mainly on individual income. Yes, there’s some room for promoting insurance against income risk, but not much room. However, people often presume that policy should pay a lot more attention to inequality if individual outcomes depend more directly on the income of others, such as via envy or discouragement.

However, there’s a simple and plausible income interdependence scenario where inequality matters little for policy: when outcomes depend on rank. If individual outcomes are a function of each person’s percentile income rank, and if social welfare just adds up those individual outcomes, then income policy becomes irrelevant, because this social welfare sum is guaranteed to always add up to the same constant. Income-related policy may influence outcomes via other channels, but not via this channel. This applies whether the relevant rank is global, comparing each person to the entire world, or local, comparing each person only to a local community.

That 2010 paper, by Christopher Boyce, Gordon Brown, and Simon Moore, makes a strong case that in fact the outcome of life satisfaction depends on the incomes of others only via income rank. (Two followup papers find the same result for outcomes of psychological distress and nine measures of health.) They looked at 87,000 Brits, and found that while income rank strongly predicted outcomes, neither individual (log) income nor an average (log) income of their reference group predicted outcomes, after controlling for rank (and also for age, gender, education, marital status, children, housing ownership, labor-force status, and disabilities). These seem to me remarkably strong and robust results. (Confirmed here.)
ratty  hanson  commentary  study  summary  economics  psychology  social-psych  values  envy  inequality  status  s-factor  absolute-relative  compensation  money  ranking  local-global  emotion  meaningness  planning  long-term  stylized-facts  britain  health  biases  farmers-and-foragers  redistribution  moments  metrics  replication  happy-sad 
march 2018 by nhaliday
How do you measure the mass of a star? (Beginner) - Curious About Astronomy? Ask an Astronomer
Measuring the mass of stars in binary systems is easy. Binary systems are sets of two or more stars in orbit about each other. By measuring the size of the orbit, the stars' orbital speeds, and their orbital periods, we can determine exactly what the masses of the stars are. We can take that knowledge and then apply it to similar stars not in multiple systems.

We also can easily measure the luminosity and temperature of any star. A plot of luminosity versus temperature for a set of stars is called a Hertzsprung-Russell (H-R) diagram, and it turns out that most stars lie along a thin band in this diagram known as the Main Sequence. Stars arrange themselves by mass on the Main Sequence, with massive stars being hotter and brighter than their small-mass brethren. If a star falls on the Main Sequence, we therefore immediately know its mass.

In addition to these methods, we also have an excellent understanding of how stars work. Our models of stellar structure are excellent predictors of the properties and evolution of stars. As it turns out, the mass of a star determines its life history from day 1, for all times thereafter, not only when the star is on the Main Sequence. So actually, the position of a star on the H-R diagram is a good indicator of its mass, regardless of whether it's on the Main Sequence or not.
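[ed.: the binary-system method is just Kepler's third law; a sketch with illustrative numbers:]

```python
# For a binary with semi-major axis a (in AU) and period P (in years),
# Kepler's third law gives the total mass in solar masses as a^3 / P^2.
# (Numbers below are made up, not from the article.)
a_AU = 20.0   # measured size of the orbit
P_yr = 50.0   # measured orbital period
total_mass_Msun = a_AU**3 / P_yr**2
print(f"{total_mass_Msun:.1f} solar masses")  # 3.2
```

The individual masses then follow from the ratio of the two stars' orbital speeds, since the heavier star moves more slowly around the common center of mass.]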
nibble  q-n-a  org:junk  org:edu  popsci  space  physics  electromag  measurement  mechanics  gravity  cycles  oscillation  temperature  visuo  plots  correlation  metrics  explanation  measure  methodology 
december 2017 by nhaliday
Land, history or modernization? Explaining ethnic fractionalization: Ethnic and Racial Studies: Vol 38, No 2
Ethnic fractionalization (EF) is frequently used as an explanatory tool in models of economic development, civil war and public goods provision. However, if EF is endogenous to political and economic change, its utility for further research diminishes. This turns out not to be the case. This paper provides the first comprehensive model of EF as a dependent variable.
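[ed.: for reference, the standard fractionalization index is the probability that two randomly drawn individuals belong to different groups; a sketch of the usual definition below (the paper's exact construction may differ):]

```python
def fractionalization(shares: list[float]) -> float:
    # EF = 1 - sum_i s_i^2, where s_i are group population shares
    assert abs(sum(shares) - 1.0) < 1e-9
    return 1.0 - sum(s * s for s in shares)

print(fractionalization([1.0]))            # 0.0: perfectly homogeneous
print(fractionalization([0.5, 0.3, 0.2]))  # 0.62: fairly diverse
```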
study  polisci  sociology  political-econ  economics  broad-econ  diversity  putnam-like  race  concept  conceptual-vocab  definition  realness  eric-kaufmann  roots  database  dataset  robust  endogenous-exogenous  causation  anthropology  cultural-dynamics  tribalism  methodology  world  developing-world  🎩  things  metrics  intricacy  microfoundations 
december 2017 by nhaliday
Autoignition temperature - Wikipedia
The autoignition temperature or kindling point of a substance is the lowest temperature at which it spontaneously ignites in normal atmosphere without an external source of ignition, such as a flame or spark. This temperature is required to supply the activation energy needed for combustion. The temperature at which a chemical ignites decreases as the pressure or oxygen concentration increases. It is usually applied to a combustible fuel mixture.

The time t_ig it takes for a material to reach its autoignition temperature T_ig when exposed to a heat flux q'' is given by the following equation:
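[ed.: the equation itself got mangled in the copy; reconstructing from memory of the Wikipedia article (treat this as an assumption, not a quote), for a thermally thick solid it is roughly

t_ig = (π/4) k ρ c ((T_ig − T_0) / q'')^2

where k is the thermal conductivity, ρ the density, c the heat capacity, and T_0 the initial temperature.]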
nibble  wiki  reference  concept  metrics  identity  physics  thermo  temperature  time  stock-flow  phys-energy  chemistry  article  street-fighting  fire  magnitude  data  list 
november 2017 by nhaliday
Hyperbolic angle - Wikipedia
A unit circle x^2 + y^2 = 1 has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola x^2 - y^2 = 1 has a hyperbolic sector with an area half of the hyperbolic angle.
nibble  math  trivia  wiki  reference  physics  relativity  concept  atoms  geometry  ground-up  characterization  measure  definition  plots  calculation  nitty-gritty  direction  metrics  manifolds 
november 2017 by nhaliday
Ancient Admixture in Human History
- Patterson, Reich et al., 2012
Population mixture is an important process in biology. We present a suite of methods for learning about population mixtures, implemented in a software package called ADMIXTOOLS, that support formal tests for whether mixture occurred and make it possible to infer proportions and dates of mixture. We also describe the development of a new single nucleotide polymorphism (SNP) array consisting of 629,433 sites with clearly documented ascertainment that was specifically designed for population genetic analyses and that we genotyped in 934 individuals from 53 diverse populations. To illustrate the methods, we give a number of examples that provide new insights about the history of human admixture. The most striking finding is a clear signal of admixture into northern Europe, with one ancestral population related to present-day Basques and Sardinians and the other related to present-day populations of northeast Asia and the Americas. This likely reflects a history of admixture between Neolithic migrants and the indigenous Mesolithic population of Europe, consistent with recent analyses of ancient bones from Sweden and the sequencing of the genome of the Tyrolean “Iceman.”
nibble  pdf  study  article  methodology  bio  sapiens  genetics  genomics  population-genetics  migration  gene-flow  software  trees  concept  history  antiquity  europe  roots  gavisti  🌞  bioinformatics  metrics  hypothesis-testing  levers  ideas  libraries  tools  pop-structure 
november 2017 by nhaliday
Culture, Ethnicity, and Diversity - American Economic Association
We investigate the empirical relationship between ethnicity and culture, defined as a vector of traits reflecting norms, values, and attitudes. Using survey data for 76 countries, we find that ethnic identity is a significant predictor of cultural values, yet that within-group variation in culture trumps between-group variation. Thus, in contrast to a commonly held view, ethnic and cultural diversity are unrelated. Although only a small portion of a country’s overall cultural heterogeneity occurs between groups, we find that various political economy outcomes (such as civil conflict and public goods provision) worsen when there is greater overlap between ethnicity and culture. (JEL D74, H41, J15, O15, O17, Z13)

definition of chi-squared index, etc., under:
II. Measuring Heterogeneity

Table 5—Incidence of Civil Conflict and Diversity
Table 6—Public Goods Provision and Diversity

https://twitter.com/GarettJones/status/924002043576115202
https://archive.is/oqMnC
https://archive.is/sBqqo
https://archive.is/1AcXn
χ2 diversity: raising the risk of civil war. Desmet, Ortuño-Ortín, Wacziarg, in the American Economic Review (1/N)

What predicts higher χ2 diversity? The authors tell us that, too. Here are all of the variables that have a correlation > 0.4: (7/N)

one of them is UK legal origin...

online appendix (with maps, Figures B1-3): http://www.anderson.ucla.edu/faculty_pages/romain.wacziarg/downloads/2017_culture_appendix.pdf
study  economics  growth-econ  broad-econ  world  developing-world  race  diversity  putnam-like  culture  cultural-dynamics  entropy-like  metrics  within-group  anthropology  microfoundations  political-econ  🎩  🌞  pdf  piracy  public-goodish  general-survey  cohesion  ethnocentrism  tribalism  behavioral-econ  sociology  cooperate-defect  homo-hetero  revolution  war  stylized-facts  econometrics  group-level  variance-components  multi  twitter  social  commentary  spearhead  econotariat  garett-jones  backup  summary  maps  data  visualization  correlation  values  poll  composition-decomposition  concept  conceptual-vocab  definition  intricacy  nonlinearity  anglosphere  regression  law  roots  within-without 
september 2017 by nhaliday
Reynolds number - Wikipedia
The Reynolds number is the ratio of inertial forces to viscous forces within a fluid which is subjected to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation.[6]

Re = ρuL/μ

(inertial forces)/(viscous forces)
= (mass)(acceleration) / (dynamic viscosity)(velocity/distance)(area)
= (ρL^3)(v/t) / μ(v/L)L^2
= (ρL^2)(v^2) / (μvL)    [taking the characteristic time t ~ L/v, so v/t ~ v^2/L]
= ρvL/μ = Re

NB: viscous force/area ~ μ du/dy is definition of viscosity
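[ed.: plugging in numbers, e.g. for water in a household pipe (illustrative values):]

```python
rho = 1000.0   # density of water, kg/m^3
u = 1.0        # flow speed, m/s
L = 0.05       # pipe diameter as the characteristic length, m
mu = 1.0e-3    # dynamic viscosity of water, Pa*s

Re = rho * u * L / mu
print(f"Re = {Re:.0f}")  # 50000, far above the ~2000-4000 pipe-flow transition -> turbulent
```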
nibble  concept  metrics  definition  physics  mechanics  fluid  street-fighting  wiki  reference  atoms  history  early-modern  europe  the-great-west-whale  britain  science  the-trenches  experiment 
september 2017 by nhaliday
Power of a point - Wikipedia
The power of a point P (see Figure 1) can be defined equivalently as the product of the distances from P to the two points where any ray emanating from P intersects the circle.
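[ed.: quick numeric check that the product is independent of the ray's direction:]

```python
import math

# Circle x^2 + y^2 = r^2, external point P = (d, 0). A ray from P in direction
# (cos t, sin t) meets the circle where |P + s*u|^2 = r^2, i.e.
# s^2 + 2s(d*cos t) + (d^2 - r^2) = 0, so the product of the two roots
# (= PA * PB) is always d^2 - r^2, whatever the direction.
r, d = 2.0, 5.0
for theta in (0.0, 0.1, 0.3):
    b = 2 * d * math.cos(theta)
    c = d * d - r * r
    disc = b * b - 4 * c
    if disc < 0:
        continue  # this ray misses the circle
    s1 = (-b + math.sqrt(disc)) / 2
    s2 = (-b - math.sqrt(disc)) / 2
    print(theta, round(s1 * s2, 6))  # always 21.0 = d^2 - r^2
```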
nibble  math  geometry  spatial  ground-up  concept  metrics  invariance  identity  atoms  wiki  reference  measure  yoga  calculation 
september 2017 by nhaliday
Divorce demography - Wikipedia
https://en.wikipedia.org/wiki/Divorce_in_the_United_States#Rates_of_divorce
https://psychcentral.com/lib/the-myth-of-the-high-rate-of-divorce/

Marriage update: less divorce, and less sex: https://familyinequality.wordpress.com/2017/04/14/marriage-update-less-divorce-and-less-sex/

Breaking Up Is Hard to Count: The Rise of Divorce in the United States, 1980–2010: https://link.springer.com/article/10.1007%2Fs13524-013-0270-9
Divorce rates have doubled over the past two decades among persons over age 35. Among the youngest couples, however, divorce rates are stable or declining. If current trends continue, overall age-standardized divorce rates could level off or even decline over the next few decades. We argue that the leveling of divorce among persons born since 1980 probably reflects the increasing selectivity of marriage.
sociology  methodology  demographics  social-science  social-structure  life-history  sex  wiki  reference  pro-rata  metrics  longitudinal  intricacy  multi  org:sci  wonkish  sexuality  trends  data  analysis  general-survey  study  history  mostly-modern  usa  selection  age-generation  chart  begin-middle-end 
august 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
The Determinants of Trust
Both individual experiences and community characteristics influence how much people trust each other. Using data drawn from US localities we find that the strongest factors that reduce trust are: i) a recent history of traumatic experiences, even though the passage of time reduces this effect fairly rapidly; ii) belonging to a group that historically felt discriminated against, such as minorities (black in particular) and, to a lesser extent, women; iii) being economically unsuccessful in terms of income and education; iv) living in a racially mixed community and/or in one with a high degree of income disparity. Religious beliefs and ethnic origins do not significantly affect trust. The latter result may be an indication that the American melting pot at least up to a point works, in terms of homogenizing attitudes of different cultures, even though racial cleavages leading to low trust are still quite high.

Understanding Trust: http://www.nber.org/papers/w13387
In this paper we resolve this puzzle by recognizing that trust has two components: a belief-based one and a preference based one. While the sender's behavior reflects both, we show that WVS-like measures capture mostly the belief-based component, while questions on past trusting behavior are better at capturing the preference component of trust.

MEASURING TRUST: http://scholar.harvard.edu/files/laibson/files/measuring_trust.pdf
We combine two experiments and a survey to measure trust and trustworthiness— two key components of social capital. Standard attitudinal survey questions about trust predict trustworthy behavior in our experiments much better than they predict trusting behavior. Trusting behavior in the experiments is predicted by past trusting behavior outside of the experiments. When individuals are closer socially, both trust and trustworthiness rise. Trustworthiness declines when partners are of different races or nationalities. High status individuals are able to elicit more trustworthiness in others.

What is Social Capital? The Determinants of Trust and Trustworthiness: http://www.nber.org/papers/w7216
Using a sample of Harvard undergraduates, we analyze trust and social capital in two experiments. Trusting behavior and trustworthiness rise with social connection; differences in race and nationality reduce the level of trustworthiness. Certain individuals appear to be persistently more trusting, but these people do not say they are more trusting in surveys. Survey questions about trust predict trustworthiness not trust. Only children are less trustworthy. People behave in a more trustworthy manner towards higher status individuals, and therefore status increases earnings in the experiment. As such, high status persons can be said to have more social capital.

Trust and Cheating: http://www.nber.org/papers/w18509
We find that: i) both parties to a trust exchange have implicit notions of what constitutes cheating even in a context without promises or messages; ii) these notions are not unique - the vast majority of senders would feel cheated by a negative return on their trust/investment, whereas a sizable minority defines cheating according to an equal split rule; iii) these implicit notions affect the behavior of both sides to the exchange in terms of whether to trust or cheat and to what extent. Finally, we show that individuals' notions of what constitutes cheating can be traced back to two classes of values instilled by parents: cooperative and competitive. The first class of values tends to soften the notion while the other tightens it.

Nationalism and Ethnic-Based Trust: Evidence from an African Border Region: https://u.osu.edu/robinson.1012/files/2015/12/Robinson_NationalismTrust-1q3q9u1.pdf
These results offer microlevel evidence that a strong and salient national identity can diminish ethnic barriers to trust in diverse societies.

One Team, One Nation: Football, Ethnic Identity, and Conflict in Africa: http://conference.nber.org/confer//2017/SI2017/DEV/Durante_Depetris-Chauvin.pdf
Do collective experiences that prime sentiments of national unity reduce interethnic tensions and conflict? We examine this question by looking at the impact of national football teams’ victories in sub-Saharan Africa. Combining individual survey data with information on over 70 official matches played between 2000 and 2015, we find that individuals interviewed in the days after a victory of their country’s national team are less likely to report a strong sense of ethnic identity and more likely to trust people of other ethnicities than those interviewed just before. The effect is sizable and robust and is not explained by generic euphoria or optimism. Crucially, national victories do not only affect attitudes but also reduce violence. Indeed, using plausibly exogenous variation from close qualifications to the Africa Cup of Nations, we find that countries that (barely) qualified experience significantly less conflict in the following six months than countries that (barely) did not. Our findings indicate that, even where ethnic tensions have deep historical roots, patriotic shocks can reduce inter-ethnic tensions and have a tangible impact on conflict.

Why Does Ethnic Diversity Undermine Public Goods Provision?: http://www.columbia.edu/~mh2245/papers1/HHPW.pdf
We identify three families of mechanisms that link diversity to public goods provision—–what we term “preferences,” “technology,” and “strategy selection” mechanisms—–and run a series of experimental games that permit us to compare the explanatory power of distinct mechanisms within each of these three families. Results from games conducted with a random sample of 300 subjects from a slum neighborhood of Kampala, Uganda, suggest that successful public goods provision in homogenous ethnic communities can be attributed to a strategy selection mechanism: in similar settings, co-ethnics play cooperative equilibria, whereas non-co-ethnics do not. In addition, we find evidence for a technology mechanism: co-ethnics are more closely linked on social networks and thus plausibly better able to support cooperation through the threat of social sanction. We find no evidence for prominent preference mechanisms that emphasize the commonality of tastes within ethnic groups or a greater degree of altruism toward co-ethnics, and only weak evidence for technology mechanisms that focus on the impact of shared ethnicity on the productivity of teams.

does it generalize to first world?

Higher Intelligence Groups Have Higher Cooperation Rates in the Repeated Prisoner's Dilemma: https://ideas.repec.org/p/iza/izadps/dp8499.html
The initial cooperation rates are similar; they increase in the groups with higher intelligence to reach almost full cooperation, while declining in the groups with lower intelligence. The difference is produced by the cumulation of small but persistent differences in the response to past cooperation of the partner. In higher intelligence subjects, cooperation after the initial stages is immediate and becomes the default mode; defection instead requires more time. For lower intelligence groups this difference is absent. Cooperation of higher intelligence subjects is payoff sensitive, thus not automatic: in a treatment with lower continuation probability there is no difference between the intelligence groups.

Why societies cooperate: https://voxeu.org/article/why-societies-cooperate
Three attributes are often suggested to generate cooperative behaviour – a good heart, good norms, and intelligence. This column reports the results of a laboratory experiment in which groups of players benefited from learning to cooperate. It finds overwhelming support for the idea that intelligence is the primary condition for a socially cohesive, cooperative society. Warm feelings towards others and good norms have only a small and transitory effect.

individual payoff, etc.:

Trust, Values and False Consensus: http://www.nber.org/papers/w18460
Trust beliefs are heterogeneous across individuals and, at the same time, persistent across generations. We investigate one mechanism yielding these dual patterns: false consensus. In the context of a trust game experiment, we show that individuals extrapolate from their own type when forming trust beliefs about the same pool of potential partners - i.e., more (less) trustworthy individuals form more optimistic (pessimistic) trust beliefs - and that this tendency continues to color trust beliefs after several rounds of game-play. Moreover, we show that one's own type/trustworthiness can be traced back to the values parents transmit to their children during their upbringing. In a second closely-related experiment, we show the economic impact of mis-calibrated trust beliefs stemming from false consensus. Miscalibrated beliefs lower participants' experimental trust game earnings by about 20 percent on average.

The Right Amount of Trust: http://www.nber.org/papers/w15344
We investigate the relationship between individual trust and individual economic performance. We find that individual income is hump-shaped in a measure of intensity of trust beliefs. Our interpretation is that highly trusting individuals tend to assume too much social risk and to be cheated more often, ultimately performing less well than those with a belief close to the mean trustworthiness of the population. On the other hand, individuals with overly pessimistic beliefs avoid being cheated, but give up profitable opportunities, therefore underperforming. The cost of either too much or too little trust is comparable to the income lost by forgoing college.

...

This framework allows us to show that income-maximizing trust typically exceeds the trust level of the average person as well as to estimate the distribution of income lost to trust mistakes. We find that although a majority of individuals has well calibrated beliefs, a non-trivial proportion of the population (10%) has trust beliefs sufficiently poorly calibrated to lower income by more than 13%.

Do Trust and … [more]
study  economics  alesina  growth-econ  broad-econ  trust  cohesion  social-capital  religion  demographics  race  diversity  putnam-like  compensation  class  education  roots  phalanges  general-survey  multi  usa  GT-101  conceptual-vocab  concept  behavioral-econ  intricacy  composition-decomposition  values  descriptive  correlation  harvard  field-study  migration  poll  status  🎩  🌞  chart  anthropology  cultural-dynamics  psychology  social-psych  sociology  cooperate-defect  justice  egalitarianism-hierarchy  inequality  envy  n-factor  axelrod  pdf  microfoundations  nationalism-globalism  africa  intervention  counter-revolution  tribalism  culture  society  ethnocentrism  coordination  world  developing-world  innovation  econ-productivity  government  stylized-facts  madisonian  wealth-of-nations  identity-politics  public-goodish  s:*  legacy  things  optimization  curvature  s-factor  success  homo-hetero  higher-ed  models  empirical  contracts  human-capital  natural-experiment  endo-exo  data  scale  trade  markets  time  supply-demand  summary 
august 2017 by nhaliday
Harmonic mean - Wikipedia
The harmonic mean is a Schur-concave function, and dominated by the minimum of its arguments, in the sense that for any set of positive arguments, min(x1,...,xn) <= H(x1,...,xn) <= n*min(x1,...,xn). Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged).

more generally, for the weighted mean w/ Pr(x_i)=t_i, H(x1,...,xn) <= x_i/t_i
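quick sanity check of both bounds in code (my sketch, not from the article):

import random

def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

xs = [random.uniform(0.1, 10) for _ in range(5)]
h = harmonic_mean(xs)
# dominated by the minimum: min <= H <= n * min
assert min(xs) <= h <= len(xs) * min(xs)

# weighted version with weights t_i summing to 1: H = 1 / sum(t_i / x_i)
ts = [0.5, 0.2, 0.1, 0.1, 0.1]
hw = 1 / sum(t / x for t, x in zip(ts, xs))
assert all(hw <= x / t for x, t in zip(xs, ts))  # H <= x_i / t_i for every i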
nibble  math  properties  estimate  concept  definition  wiki  reference  extrema  magnitude  expectancy  metrics  ground-up 
july 2017 by nhaliday
Dimensions - Geert Hofstede
http://geerthofstede.com/culture-geert-hofstede-gert-jan-hofstede/6d-model-of-national-culture/

https://www.reddit.com/r/europe/comments/4g88kt/eu28_countries_ranked_by_hofstedes_cultural/
https://archive.is/rXnII

https://hbdchick.wordpress.com/2013/09/07/national-individualism-collectivism-scores/

Individualism and Collectivism in Israeli Society: Comparing Religious and Secular High-School Students: https://sci-hub.tw/https://link.springer.com/article/10.1023/A:1016945121604
A common collective basis of mutual value consensus was found in the two groups; however, as predicted, there were differences between secular and religious students on the three kinds of items, since the religious scored higher than the secular students on items emphasizing collectivist orientation. The differences, however, do not fit the common theoretical framework of collectivism-individualism, but rather tend to reflect the distinction between in-group and universal collectivism.

Individualism and Collectivism in Two Conflicted Societies: Comparing Israeli-Jewish and Palestinian-Arab High School Students: https://sci-hub.tw/http://journals.sagepub.com/doi/10.1177/0044118X01033001001
Both groups were found to be more collectivistic than individualistic oriented. However, as predicted, the Palestinians scored higher than the Israeli students on items emphasizing in-group collectivist orientation (my nationality, my country, etc.). The differences between the two groups tended to reflect some subdistinctions such as different elements of individualism and collectivism. Moreover, they reflected the historical context and contemporary influences, such as the stage where each society is at in the nation-making process.

Religion as culture: religious individualism and collectivism among american catholics, jews, and protestants.: https://www.ncbi.nlm.nih.gov/pubmed/17576356
We propose the theory that religious cultures vary in individualistic and collectivistic aspects of religiousness and spirituality. Study 1 showed that religion for Jews is about community and biological descent but about personal beliefs for Protestants. Intrinsic and extrinsic religiosity were intercorrelated and endorsed differently by Jews, Catholics, and Protestants in a pattern that supports the theory that intrinsic religiosity relates to personal religion, whereas extrinsic religiosity stresses community and ritual (Studies 2 and 3). Important life experiences were likely to be social for Jews but focused on God for Protestants, with Catholics in between (Study 4). We conclude with three perspectives in understanding the complex relationships between religion and culture.

Inglehart–Welzel cultural map of the world: https://en.wikipedia.org/wiki/Inglehart%E2%80%93Welzel_cultural_map_of_the_world
Live cultural map over time 1981 to 2015: https://www.youtube.com/watch?v=ABWYOcru7js

https://en.wikipedia.org/wiki/Post-materialism

https://ourworldindata.org/materialism-and-post-materialism
By Income of the Country

Most of the low post-materialism, high income countries are East Asian :(. Some decent options: Norway, Netherlands, Iceland (surprising!). Other Euro countries fall into that category but interest me less for other reasons.

https://graphpaperdiaries.com/2016/06/10/materialism-and-post-materialism/

Postmaterialism and the Economic Condition: https://www.jstor.org/stable/2111573
prof  psychology  social-psych  values  culture  cultural-dynamics  anthropology  individualism-collectivism  expression-survival  long-short-run  time-preference  uncertainty  outcome-risk  gender  egalitarianism-hierarchy  things  phalanges  group-level  world  tools  comparison  data  database  n-factor  occident  social-norms  project  microfoundations  multi  maps  visualization  org:junk  psych-architecture  personality  hari-seldon  discipline  self-control  geography  shift  developing-world  europe  the-great-west-whale  anglosphere  optimate  china  asia  japan  sinosphere  orient  MENA  reddit  social  discussion  backup  EU  inequality  envy  britain  anglo  nordic  ranking  top-n  list  eastern-europe  germanic  gallic  mediterranean  cog-psych  sociology  guilt-shame  duty  tribalism  us-them  cooperate-defect  competition  gender-diff  metrics  politics  wiki  concept  society  civilization  infographic  ideology  systematic-ad-hoc  let-me-see  general-survey  chart  video  history  metabuch  dynamic  trends  plots  time-series  reference  water  mea 
june 2017 by nhaliday
Comprehensive Military Power: World’s Top 10 Militaries of 2015 - The Unz Review
gnon  military  defense  scale  top-n  list  ranking  usa  china  asia  analysis  data  sinosphere  critique  russia  capital  magnitude  street-fighting  individualism-collectivism  europe  germanic  world  developing-world  latin-america  MENA  india  war  meta:war  history  mostly-modern  world-war  prediction  trends  realpolitik  strategy  thucydides  great-powers  multi  news  org:mag  org:biz  org:foreign  current-events  the-bones  org:rec  org:data  org:popup  skunkworks  database  dataset  power  energy-resources  heavy-industry  economics  growth-econ  foreign-policy  geopolitics  maps  project  expansionism  the-world-is-just-atoms  civilization  let-me-see  wiki  reference  metrics  urban  population  japan  britain  gallic  allodium  definite-planning  kumbaya-kult  peace-violence  urban-rural  wealth  wealth-of-nations  econ-metrics  dynamic  infographic 
june 2017 by nhaliday
Reading | West Hunter
Reading speed and comprehension interest me, but I don’t have as much information as I would like.  I would like to see the distribution of reading speeds ( in the general population, and also in college graduates).  I have looked a bit at discussions of this, and there’s something wrong.  Or maybe a lot wrong.  Researchers apparently say that nobody reads 900 words a minute with full comprehension, but I’ve seen it done.  I would also like to know if anyone has statistically validated methods that  increase reading speed.

On related topics, I wonder how many serious readers  there are, here and also in other countries.  Are they as common in Japan or China, with their very different scripts?   Are reading speeds higher or lower there?

How many people have  their houses really, truly stuffed with books?  Here and elsewhere?  Last time I checked we had about 5000 books around the house: I figure that’s serious, verging on the pathological.

To what extent do people remember what they read?  Judging from the general results of  adult knowledge studies, not very much of what they took in school, but maybe voluntary reading is different.

https://westhunt.wordpress.com/2012/06/05/reading/#comment-3187
The researchers claim that the range of high-comprehension reading speed doesn’t go up anywhere near 900 wpm. But my daughter routinely reads at that speed. In high school, I took a reading speed test and scored a bit over 1000 wpm, with perfect comprehension.

I have suggested that the key to high reading speed is the experience of trying to finish an entire science fiction paperback in a drugstore before the proprietor tells you to buy the damn thing or get out. Helps if you can hide behind the bookrack.

https://westhunt.wordpress.com/2019/03/31/early-reading/
There are a few small children, mostly girls, that learn to read very early. You read stories to them and before you know it they’re reading by themselves. By very early, I mean age 3 or 4.

Does this happen in China?

hmm:
Beijingers' average daily reading time exceeds an hour: report: http://www.chinadaily.com.cn/a/201712/07/WS5a293e1aa310fcb6fafd44c0.html

Free Speed Reading Test by AceReader: http://www.freereadingtest.com/
time+comprehension

http://www.readingsoft.com/
claims: 1000 wpm with 85% comprehension at top 1%, 200 wpm at 60% for average

https://www.wsj.com/articles/speed-reading-returns-1395874723
http://projects.wsj.com/speedread/

https://news.ycombinator.com/item?id=929753
Take a look at "Reading Rate: A Review of Research and Theory" by Ronald P. Carver
http://www.amazon.com/Reading-Rate-Review-Research-Theory/dp...
The conclusion is, basically, that speed reading courses don't work.
You can teach people to skim at a faster rate than they'd read with maximum comprehension and retention. And you can teach people study skills, such as how to summarize salient points, and take notes.
But all these skills are not at all the same as what speed reading usually promises, which is to drastically increase the rate at which you read with full comprehension and retention. According to Carver's book, it can't be done, at least not drastically past about the rate you'd naturally read at the college level.
west-hunter  scitariat  discussion  speculation  ideas  rant  critique  learning  studying  westminster  error  realness  language  japan  china  asia  sinosphere  retention  foreign-lang  info-foraging  scale  speed  innovation  explanans  creative  multi  data  urban-rural  time  time-use  europe  the-great-west-whale  occident  orient  people  track-record  trivia  books  number  knowledge  poll  descriptive  distribution  tools  quiz  neurons  anglo  hn  poast  news  org:rec  metrics  density  writing  meta:reading  thinking 
june 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
I guess it's a metric

https://en.wikipedia.org/wiki/Explained_variation

http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have on average +0.4 SAT.

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)
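quick numerical check of the dY = R dX reading (my sketch with numpy, not from the linked post):

import numpy as np

rng = np.random.default_rng(0)
r = 0.4
x = rng.standard_normal(100_000)
# construct y so that corr(x, y) = r and both are standardized
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(100_000)

# for standardized variables, the regression slope of y on x is just r
slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(slope, np.corrcoef(x, y)[0, 1])  # both ≈ 0.4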

https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure  dependence-independence 
may 2017 by nhaliday
Kin selection - Wikipedia
Formally, genes should increase in frequency when

rB > C

where

r = the genetic relatedness of the recipient to the actor, often defined as the probability that a gene picked randomly from each at the same locus is identical by descent,
B = the additional reproductive benefit gained by the recipient of the altruistic act,
C = the reproductive cost to the individual performing the act.

This inequality is known as Hamilton's rule, after W. D. Hamilton, who in 1964 published the first formal quantitative treatment of kin selection.

The relatedness parameter (r) in Hamilton's rule was introduced in 1922 by Sewall Wright as a coefficient of relationship that gives the probability that at a random locus, the alleles there will be identical by descent.[20] Subsequent authors, including Hamilton, sometimes reformulate this with a regression, which, unlike probabilities, can be negative. A regression analysis producing statistically significant negative relationships indicates that two individuals are less genetically alike than two random ones (Hamilton 1970, Nature; Grafen 1985, Oxford Surveys in Evolutionary Biology). This has been invoked to explain the evolution of spiteful behaviour consisting of acts that result in harm, or loss of fitness, to both the actor and the recipient.

Several scientific studies have found that the kin selection model can be applied to nature. For example, in 2010 researchers used a wild population of red squirrels in Yukon, Canada to study kin selection in nature. The researchers found that surrogate mothers would adopt related orphaned squirrel pups but not unrelated orphans. The researchers calculated the cost of adoption by measuring a decrease in the survival probability of the entire litter after increasing the litter by one pup, while benefit was measured as the increased chance of survival of the orphan. The degree of relatedness of the orphan and surrogate mother for adoption to occur depended on the number of pups the surrogate mother already had in her nest, as this affected the cost of adoption. The study showed that females always adopted orphans when rB > C, but never adopted when rB < C, providing strong support for Hamilton's rule.[21]
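the rule itself is one line of code; the numbers below are made up for illustration, not taken from the squirrel study:

def hamilton_favors(r, benefit, cost):
    """Hamilton's rule: an altruistic act is favored when r*B > C."""
    return r * benefit > cost

print(hamilton_favors(r=0.5, benefit=3.0, cost=1.0))    # True: full sibling, benefit triples cost
print(hamilton_favors(r=0.125, benefit=3.0, cost=1.0))  # False: first cousin, same act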
bio  nature  evolution  selection  group-selection  kinship  altruism  levers  methodology  population-genetics  genetics  wiki  reference  nibble  stylized-facts  biodet  🌞  concept  metrics  EGT  selfish-gene  cooperate-defect  similarity  interests  ecology 
march 2017 by nhaliday
Minor allele frequency - Wikipedia
It is widely used in population genetics studies because it provides information to differentiate between common and rare variants in the population.
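the definition in code (my sketch; genotype counts made up):

def minor_allele_frequency(n_AA, n_Aa, n_aa):
    # frequency of the rarer allele at a biallelic locus, from genotype counts
    n_alleles = 2 * (n_AA + n_Aa + n_aa)
    p = (2 * n_AA + n_Aa) / n_alleles  # frequency of the A allele
    return min(p, 1 - p)

print(minor_allele_frequency(70, 25, 5))  # 0.175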
jargon  genetics  genomics  bioinformatics  population-genetics  QTL  wiki  reference  metrics  distribution 
march 2017 by nhaliday
Whole Health Source: The Glycemic Index: A Critical Evaluation
Overall, these studies do not support the idea that lowering the glycemic index of carbohydrate foods is useful for weight loss, insulin or glucose control, or anything else besides complicating your life. I'll keep my finger on the pulse of this research as it expands, but for the time being I don't see the glycemic index per se as a significant way to combat fat gain or metabolic disease.
critique  concept  diet  nutrition  stamina  embodied-cognition  embodied  health  taubes-guyenet  org:health  contrarianism  fitsci  obesity  metrics 
march 2017 by nhaliday
Surnames: a New Source for the History of Social Mobility
This paper explains how surname distributions can be used as a way to measure rates of social mobility in contemporary and historical societies. This allows for estimates of social mobility rates for any population for which the distribution of surnames overall is known, as well as the distribution of surnames among some elite or underclass. Such information exists, for example, for England back to 1300, and for Sweden back to 1700. However, surname distributions reveal a different, more fundamental type of mobility than that conventionally estimated. Thus surname estimates also allow for measuring a different aspect of social mobility, but the aspect that matters for mobility of social groups, and for families in the long run.

Immobile Australia: Surnames Show Strong Status Persistence, 1870–2017: http://ftp.iza.org/dp11021.pdf

The Big Sort: Selective Migration and the Decline of Northern England, 1800-2017: http://migrationcluster.ucdavis.edu/events/seminars_2015-2016/sem_assets/clark/paper_clark_northern-disadvantage.pdf
The north of England in recent years has been poorer, less healthy, less educated and slower growing than the south. Using two sources - surnames that had a different regional distribution in England in the 1840s, and a detailed genealogy of 78,000 people in England giving birth and death locations - we show that the decline of the north is mainly explained by selective outmigration of the educated and talented.

Genetic Consequences of Social Stratification in Great Britain: https://www.biorxiv.org/content/biorxiv/early/2018/10/30/457515
pdf  study  spearhead  gregory-clark  economics  cliometrics  status  class  mobility  language  methodology  metrics  natural-experiment  🎩  tricks  history  early-modern  britain  china  asia  path-dependence  europe  nordic  pro-rata  higher-ed  elite  success  society  legacy  stylized-facts  age-generation  broad-econ  s-factor  measurement  within-group  pop-structure  flux-stasis  microfoundations  multi  shift  mostly-modern  migration  biodet  endo-exo  behavioral-gen  regression-to-mean  human-capital  education  oxbridge  endogenous-exogenous  ideas  bio  preprint  genetics  genomics  GWAS  labor  anglo  egalitarianism-hierarchy  welfare-state  sociology  org:ngo  white-paper 
march 2017 by nhaliday
Has creative destruction become more destructive? - Marginal REVOLUTION
However, we conjecture that recently the destructive component of innovations has increased relative to the size of the creative component as the new technologies are often creating products which are close substitutes for the ones they replace whose value depreciates substantially in the process of destruction. Consequently, the contribution of recent innovations to GDP is likely upwardly biased.
econotariat  marginal-rev  study  summary  commentary  economics  growth-econ  innovation  unintended-consequences  econ-metrics  speculation  automation  cjones-like  externalities  realness  metrics  measurement  stagnation  hmm 
march 2017 by nhaliday
How Universal Is the Big Five? Testing the Five-Factor Model of Personality Variation Among Forager–Farmers in the Bolivian Amazon
We failed to find robust support for the FFM, based on tests of (a) internal consistency of items expected to segregate into the Big Five factors, (b) response stability of the Big Five, (c) external validity of the Big Five with respect to observed behavior, (d) factor structure according to exploratory and confirmatory factor analysis, and (e) similarity with a U.S. target structure based on Procrustes rotation analysis.

...

We argue that Tsimane personality variation displays 2 principal factors that may reflect socioecological characteristics common to small-scale societies. We offer evolutionary perspectives on why the structure of personality variation may not be invariant across human societies.

Niche diversity can explain cross-cultural differences in personality structure: https://www.nature.com/articles/s41562-019-0730-3.epdf?author_access_token=OePuGOtdzdnQNlUm-C2oidRgN0jAjWel9jnR3ZoTv0PAovoNXZmNaZE03-rNo0RKOI7i7PG10G8tISp-_6W5yDqI3sDx0WdZZuk2ekMJbzGZtJ7_XsMUy0k4UGpsNDt9NHMarkg3dmAWt-Ttawxu1g%3D%3D
Cross-cultural studies have challenged this view, finding that less-complex societies exhibit stronger covariation among behavioural characteristics, resulting in fewer derived personality factors. To explain these results, we propose the niche diversity hypothesis, in which a greater diversity of social and ecological niches elicits a broader range of multivariate behavioural profiles and, hence, lower trait covariance in a population.
...
This work provides a general explanation for population differences in personality structure in both humans and other animals and suggests a substantial reimagining of personality research: instead of reifying statistical descriptions of manifest personality structures, research should focus more on modelling their underlying causes.

sounds obvious but actually kinda interesting
pdf  study  psychology  cog-psych  society  embedded-cognition  personality  metrics  generalization  methodology  farmers-and-foragers  latin-america  context  homo-hetero  info-dynamics  water  psychometrics  exploratory  things  phalanges  dimensionality  anthropology  universalism-particularism  applicability-prereqs  multi  sapiens  cultural-dynamics  social-psych  evopsych  psych-architecture  org:nat  🌞  roots  explanans  causation  pop-diff  cybernetics  ecology  scale  moments  large-factor 
february 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
Welcome to wbdata’s documentation! — wbdata 0.2.7 documentation
Wbdata is a simple python interface to find and request information from the World Bank’s various databases, either as a dictionary containing full metadata or as a pandas DataFrame. Currently, wbdata wraps most of the World Bank API, and also adds some convenience functions for searching and retrieving information.
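minimal usage sketch (from memory of the wbdata docs; the indicator code and exact signatures should be double-checked against the current version):

import wbdata

# look up an indicator, then pull it for a few countries as a pandas DataFrame
wbdata.search_indicators("gdp per capita")
df = wbdata.get_dataframe(
    {"NY.GDP.PCAP.CD": "gdp_per_capita"},  # World Bank indicator code
    country=["USA", "CHN", "DEU"],
)
print(df.head())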
data  metrics  econ-metrics  objektbuch  libraries  python  documentation  yak-shaving  api  hmm  world  developing-world  sleuthin  maps  programming  reference 
february 2017 by nhaliday
Genetics and educational attainment | npj Science of Learning
Figure 1 is quite good
Sibling Correlations for Behavioral Traits. This figure displays sibling correlations for five traits measured in a large sample of Swedish brother pairs born 1951–1970. All outcomes except years of schooling are measured at conscription, around the age of 18.

correlations for IQ/EA for adoptees are actually nontrivial in adulthood, hmm

Figure 2 has GWAS R^2s through 2016 (in-sample, I guess?)
study  org:nat  biodet  education  methodology  essay  survey  genetics  GWAS  variance-components  init  causation  🌞  metrics  population-genetics  explanation  unit  nibble  len:short  big-picture  behavioral-gen  state-of-art  iq  embodied  correlation  twin-study  sib-study  summary  europe  nordic  data  visualization  s:*  tip-of-tongue  spearhead  bioinformatics 
february 2017 by nhaliday
Odds ratio - Wikipedia
- OR = (P(y=1|x=1) / P(y=0|x=1)) / (P(y=1|x=0) / P(y=0|x=0))
- when P(y=1|x=0) and P(y=1|x=1) are both small, OR approximately equals the relative risk RR = P(y=1|x=1) / P(y=1|x=0)

The two other major ways of quantifying association are the risk ratio ("RR") and the absolute risk reduction ("ARR"). In clinical studies and many other settings, the parameter of greatest interest is often actually the RR, which is determined in a way that is similar to the one just described for the OR, except using probabilities instead of odds. Frequently, however, the available data only allows the computation of the OR; notably, this is so in the case of case-control studies, as explained below. On the other hand, if one of the properties (say, A) is sufficiently rare (the "rare disease assumption"), then the OR of having A given that the individual has B is a good approximation to the corresponding RR (the specification "A given B" is needed because, while the OR treats the two properties symmetrically, the RR and other measures do not).
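quick check of the rare-disease approximation (my sketch, numbers made up):

def odds_ratio(p1, p0):
    # OR for P(y=1|x=1) = p1 vs P(y=1|x=0) = p0
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

def risk_ratio(p1, p0):
    return p1 / p0

print(odds_ratio(0.02, 0.01), risk_ratio(0.02, 0.01))  # ≈2.02 vs 2.0: rare outcome, OR ≈ RR
print(odds_ratio(0.6, 0.4), risk_ratio(0.6, 0.4))      # 2.25 vs 1.5: common outcome, OR overstates RR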
concept  metrics  methodology  science  hypothesis-testing  wiki  reference  stats  effect-size 
february 2017 by nhaliday
MinHash - Wikipedia
- goal: compute Jaccard coefficient J(A, B) = |A∩B| / |A∪B| in sublinear space
- idea: pick a random injective hash function h, define h_min(S) = argmin_{x in S} h(x), and note that Pr[h_min(A) = h_min(B)] = J(A, B), since the minima agree exactly when the element of A∪B with the smallest hash lies in A∩B
- reduce variance by repeating with k independent hash functions and taking the fraction of agreements; a Chernoff bound controls the estimation error
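toy implementation along those lines (salted Python hashes stand in for a proper random hash family; illustrative only):

def minhash_signature(s, k):
    # one min-hash per salt; hash((seed, x)) simulates k independent hash functions
    return [min(hash((seed, x)) for x in s) for seed in range(k)]

def estimate_jaccard(a, b, k=500):
    sa, sb = minhash_signature(a, k), minhash_signature(b, k)
    return sum(x == y for x, y in zip(sa, sb)) / k

a, b = set(range(0, 60)), set(range(30, 90))
print(estimate_jaccard(a, b))  # true J(A, B) = 30/90 ≈ 0.33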
algorithms  data-structures  sublinear  hashing  wiki  reference  random  tcs  nibble  measure  metric-space  metrics  similarity  PAC  intersection  intersection-connectedness 
february 2017 by nhaliday