nhaliday + decision-theory   46

The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]

https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are such that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, or if you are silenced the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).
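[ed.: A minimal sketch of the memory-one machinery Press & Dyson work in, assuming numpy and the conventional payoffs (R, S, T, P) = (3, 0, 5, 1): each strategy is a vector of cooperation probabilities conditioned on the last round, the match becomes a 4-state Markov chain, and long-run payoffs come from its stationary distribution. The small tremble is my addition to keep deterministic strategies ergodic, not something from the post.]

```python
import numpy as np

# States from player 1's point of view, ordered CC, CD, DC, DD, where the first
# letter is player 1's last move.  A memory-one strategy is the vector of
# cooperation probabilities (p_CC, p_CD, p_DC, p_DD) in that ordering.

def transition_matrix(p, q):
    """4x4 Markov chain over last-round outcomes (player 1's view)."""
    # Player 2 sees state CD (P1 cooperated, P2 defected) as DC, and vice versa.
    q_view = np.array([q[0], q[2], q[1], q[3]])
    M = np.zeros((4, 4))
    for s in range(4):
        c1, c2 = p[s], q_view[s]             # cooperation probabilities this round
        M[s] = [c1 * c2, c1 * (1 - c2), (1 - c1) * c2, (1 - c1) * (1 - c2)]
    return M

def long_run_payoffs(p, q, R=3, S=0, T=5, P=1, eps=1e-3):
    """Average payoffs under the stationary distribution; eps is a small tremble
    so that deterministic strategies still give a unique stationary distribution."""
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    q = np.clip(np.asarray(q, float), eps, 1 - eps)
    M = transition_matrix(p, q)
    w, v = np.linalg.eig(M.T)
    stat = np.real(v[:, np.argmin(np.abs(w - 1))])
    stat /= stat.sum()
    pay1 = np.array([R, S, T, P])            # player 1's payoff in CC, CD, DC, DD
    pay2 = np.array([R, T, S, P])
    return stat @ pay1, stat @ pay2

tft  = (1, 0, 1, 0)    # tit for tat
alld = (0, 0, 0, 0)    # always defect
wsls = (1, 0, 0, 1)    # win-stay, lose-shift (Pavlov)

print("TFT  vs ALLD:", long_run_payoffs(tft, alld))   # ~ (1, 1): mutual defection
print("WSLS vs WSLS:", long_run_payoffs(wsls, wsls))  # ~ (3, 3): cooperation recovers after errors
print("TFT  vs TFT :", long_run_payoffs(tft, tft))    # ~ (2.25, 2.25): noise erodes two TFTs
```

Any memory-one strategy, including the zero-determinant vectors from the Press–Dyson paper, can be plugged into long_run_payoffs the same way.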

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual selection):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters4,5 remains scarce6–8. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously9,10.

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity11,12. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature4,5, reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.

...

We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l 
march 2018 by nhaliday
Stein's example - Wikipedia
Stein's example (or phenomenon or paradox), in decision theory and estimation theory, is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955.[1]

An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in channel estimation in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.

...

Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the James–Stein estimator, which works by starting at X and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of X from that point.
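[ed.: A quick numeric check of the claim, assuming numpy: simulate X ~ N(θ, I_p) and compare the total squared error of the raw observation against the James–Stein shrinkage described above. The particular θ and p = 10 are arbitrary choices of mine.]

```python
import numpy as np

rng = np.random.default_rng(0)
p, trials = 10, 20000
theta = rng.normal(size=p)                  # arbitrary true parameter vector

mse_mle, mse_js = 0.0, 0.0
for _ in range(trials):
    x = theta + rng.normal(size=p)          # one noisy observation per coordinate
    shrink = 1 - (p - 2) / np.sum(x**2)     # (1 - (p-2)/||X||^2) X, the form described above;
    js = shrink * x                         # the "positive-part" variant clips this at 0 and does even better
    mse_mle += np.sum((x - theta) ** 2)
    mse_js  += np.sum((js - theta) ** 2)

print("total MSE, raw observation :", mse_mle / trials)   # ~ p = 10
print("total MSE, James-Stein     :", mse_js / trials)    # strictly smaller whenever p >= 3
```

The combined estimator wins on total error, as the article says, even though any single coordinate can be estimated worse.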
nibble  concept  levers  wiki  reference  acm  stats  probability  decision-theory  estimate  distribution  atoms 
february 2018 by nhaliday
Anisogamy - Wikipedia
Anisogamy is a fundamental concept of sexual dimorphism that helps explain phenotypic differences between sexes.[3] In most species a male and female sex exist, both of which are optimized for reproductive potential. Due to their differently sized and shaped gametes, both males and females have developed physiological and behavioral differences that optimize the individual’s fecundity.[3] Since most egg-laying females typically must bear the offspring and have a more limited reproductive cycle, this typically makes females a limiting factor in the reproductive success rate of males in a species. This process is also true for females selecting males, and assuming that males and females are selecting for different traits in partners, would result in phenotypic differences between the sexes over many generations. This hypothesis, known as Bateman’s Principle, is used to understand the evolutionary pressures put on males and females due to anisogamy.[4] Although this assumption has criticism, it is a generally accepted model for sexual selection within anisogamous species. The selection for different traits depending on sex within the same species is known as sex-specific selection, and accounts for the differing phenotypes found between the sexes of the same species. This sex-specific selection between sexes over time also leads to the development of secondary sex characteristics, which assist males and females in reproductive success.

...

Since this process is very energy-demanding and time consuming for the female, mate choice is often integrated into the female’s behavior.[3] Females will often be very selective of the males they choose to reproduce with, for the phenotype of the male can be indicative of the male’s physical health and heritable traits. Females employ mate choice to pressure males into displaying their desirable traits to females through courtship, and if successful, the male gets to reproduce. This encourages males and females of specific species to invest in courtship behaviors as well as traits that can display physical health to a potential mate. This process, known as sexual selection,[3] results in the development of traits to ease reproductive success rather than individual survival, such as the inflated size of a termite queen. It is also important for females to select against potential mates that may have a sexually transmitted infection, for the disease could not only hurt the female’s reproductive ability, but also damage the resulting offspring.[7]

Although not uncommon in males, females are more associated with parental care.[8] Since females are on a more limited reproductive schedule than males, a female often invests more in protecting the offspring to sexual maturity than the male. Like mate choice, the level of parental care varies greatly between species, and is often dependent on the number of offspring produced per sexual encounter.[8]

...

Since females are often the limiting factor in a species reproductive success, males are often expected by the females to search and compete for the female, known as intraspecific competition.[4] This can be seen in organisms such as bean beetles, as the male that searches for females more frequently is often more successful at finding mates and reproducing. In species undergoing this form of selection, a fit male would be one that is fast, has more refined sensory organs, and spatial awareness.[4]

Darwinian sex roles confirmed across the animal kingdom: http://advances.sciencemag.org/content/2/2/e1500983.full
Since Darwin’s conception of sexual selection theory, scientists have struggled to identify the evolutionary forces underlying the pervasive differences between male and female behavior, morphology, and physiology. The Darwin-Bateman paradigm predicts that anisogamy imposes stronger sexual selection on males, which, in turn, drives the evolution of conventional sex roles in terms of female-biased parental care and male-biased sexual dimorphism. Although this paradigm forms the cornerstone of modern sexual selection theory, it still remains untested across the animal tree of life. This lack of evidence has promoted the rise of alternative hypotheses arguing that sex differences are entirely driven by environmental factors or chance. We demonstrate that, across the animal kingdom, sexual selection, as captured by standard Bateman metrics, is indeed stronger in males than in females and that it is evolutionarily tied to sex biases in parental care and sexual dimorphism. Our findings provide the first comprehensive evidence that Darwin’s concept of conventional sex roles is accurate and refute recent criticism of sexual selection theory.

Coevolution of parental investment and sexually selected traits drives sex-role divergence: https://www.nature.com/articles/ncomms12517
Sex-role evolution theory attempts to explain the origin and direction of male–female differences. A fundamental question is why anisogamy, the difference in gamete size that defines the sexes, has repeatedly led to large differences in subsequent parental care. Here we construct models to confirm predictions that individuals benefit less from caring when they face stronger sexual selection and/or lower certainty of parentage. However, we overturn the widely cited claim that a negative feedback between the operational sex ratio and the opportunity cost of care selects for egalitarian sex roles. We further argue that our model does not predict any effect of the adult sex ratio (ASR) that is independent of the source of ASR variation. Finally, to increase realism and unify earlier models, we allow for coevolution between parental investment and investment in sexually selected traits. Our model confirms that small initial differences in parental investment tend to increase due to positive evolutionary feedback, formally supporting long-standing, but unsubstantiated, verbal arguments.

Parental investment, sexual selection and sex ratios: http://www.kokkonuts.org/wp-content/uploads/Parental_investment_review.pdf
The second argument takes the reasonable premise that anisogamy produces a male-biased operational sex ratio (OSR) leading to males competing for mates. Male care is then predicted to be less likely to evolve as it consumes resources that could otherwise be used to increase competitiveness. However, given each offspring has precisely two genetic parents (the Fisher condition), a biased OSR generates frequency-dependent selection, analogous to Fisherian sex ratio selection, that favours increased parental investment by whichever sex faces more intense competition. Sex role divergence is therefore still an evolutionary conundrum. Here we review some possible solutions. Factors that promote conventional sex roles are sexual selection on males (but non-random variance in male mating success must be high to override the Fisher condition), loss of paternity because of female multiple mating or group spawning and patterns of mortality that generate female-biased adult sex ratios (ASR). We present an integrative model that shows how these factors interact to generate sex roles. We emphasize the need to distinguish between the ASR and the operational sex ratio (OSR). If mortality is higher when caring than competing this diminishes the likelihood of sex role divergence because this strongly limits the mating success of the earlier deserting sex. We illustrate this in a model where a change in relative mortality rates while caring and competing generates a shift from a mammalian type breeding system (female-only care, male-biased OSR and female-biased ASR) to an avian type system (biparental care and a male-biased OSR and ASR).

LATE FEMINISM: https://jacobitemag.com/2017/08/01/late-feminism/
Woman has had a good run. For 200,000 years humankind’s anisogamous better (and bigger) half has enjoyed a position of desirability and safety befitting a scarce commodity. She has also piloted the evolutionary destiny of our species, both as a sexual selector and an agitator during man’s Promethean journey. In terms of comfort and agency, the human female is uniquely privileged within the annals of terrestrial biology.

But the era of female privilege is ending, in a steady decline that began around 1572. Woman’s biological niche is being crowded out by capital.

...

Strictly speaking, the breadth of the coming changes extends beyond even civilizational dynamics. They will affect things that are prior. One of the oldest and most practical definitions for a biological species defines its boundary as the largest group of organisms where two individuals, via sexual reproduction, can produce fertile offspring together. The imminent arrival of new reproductive technologies will render the sexual reproduction criteria either irrelevant or massively expanded, depending upon one’s perspective. Fertility of the offspring is similarly of limited relevance, since the modification of gametes will be de rigueur in any case. What this looming technology heralds is less a social revolution than it is a full sympatric speciation event.

Accepting the inevitability of the coming bespoke reproductive revolution, consider a few questions & probable answers regarding our external-womb-grown ubermenschen:

Q: What traits will be selected for?

A: Ability to thrive in a global market economy (i.e. ability to generate value for capital.)

Q: What material substrate will generate the new genomes?

A: Capital equipment.

Q: Who will be making the selection?

A: People, at least initially, (and who coincidentally will be making decisions that map 1-to-1 to the interests of capital.)

_Replace any of the above instances of the word capital with women, and you would have accurate answers for most of our species’ history._

...

In terms of pure informational content, the supernova seen from earth can be represented in a singularly compressed way: a flash of light on a black field where there previously was none. A single photon in the cone of the eye, at the limit. Whether … [more]
biodet  deep-materialism  new-religion  evolution  eden  gender  gender-diff  concept  jargon  wiki  reference  bio  roots  explanans  🌞  ideas  EGT  sex  analysis  things  phalanges  matching  parenting  water  competition  egalitarianism-hierarchy  ranking  multi  study  org:nat  nature  meta-analysis  survey  solid-study  male-variability  darwinian  empirical  realness  sapiens  models  evopsych  legacy  investing  uncertainty  outcome-risk  decision-theory  pdf  life-history  chart  accelerationism  horror  capital  capitalism  similarity  analogy  land  gnon  🐸  europe  the-great-west-whale  industrial-revolution  science  kinship  n-factor  speculation  personality  creative  pop-diff  curiosity  altruism  cooperate-defect  anthropology  cultural-dynamics  civil-liberty  recent-selection  technocracy  frontier  futurism  prediction  quotes  aphorism  religion  theos  enhancement  biotech  revolution  insight  history  early-modern  gallic  philosophy  enlightenment-renaissance-restoration-reformation  ci 
january 2018 by nhaliday
Are Sunk Costs Fallacies? - Gwern.net
But to what extent is the sunk cost fallacy a real fallacy?
Below, I argue the following:
1. sunk costs are probably issues in big organizations
- but maybe not ones that can be helped
2. sunk costs are not issues in animals
3. sunk costs appear to exist in children & adults
- but many apparent instances of the fallacy are better explained as part of a learning strategy
- and there’s little evidence sunk cost-like behavior leads to actual problems in individuals
4. much of what we call sunk cost looks like simple carelessness & thoughtlessness
ratty  gwern  analysis  meta-analysis  faq  biases  rationality  decision-making  decision-theory  economics  behavioral-econ  realness  cost-benefit  learning  wire-guided  marginal  age-generation  aging  industrial-org  organizing  coordination  nature  retention  knowledge  iq  education  tainter  management  government  competition  equilibrium  models  roots  chart 
december 2017 by nhaliday
Kelly criterion - Wikipedia
In probability theory and intertemporal portfolio choice, the Kelly criterion, Kelly strategy, Kelly formula, or Kelly bet, is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful). It was described by J. L. Kelly, Jr, a researcher at Bell Labs, in 1956.[1] The practical use of the formula has been demonstrated.[2][3][4]

The Kelly Criterion is to bet a predetermined fraction of assets and can be counterintuitive. In one study,[5][6] each participant was given $25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. Behavior was far from optimal. "Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment." Using the Kelly criterion and based on the odds in the experiment, the right approach would be to bet 20% of the pot on each throw (see first example in Statement below). If losing, the size of the bet gets cut; if winning, the stake increases.
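[ed.: The numbers check out: for an even-money bet that wins with probability p, the Kelly fraction is f* = p − (1 − p)/b = 0.6 − 0.4 = 0.2. A small sketch (assuming numpy; the $250 cap from the study is ignored) comparing log-growth and simulated outcomes at 20% against the kind of over-betting the participants did.]

```python
import numpy as np

p_win, b = 0.60, 1.0                   # 60% heads, even-money payoff as in the study
f_kelly = p_win - (1 - p_win) / b      # Kelly fraction: f* = p - q/b = 0.20
print("Kelly fraction:", f_kelly)

def expected_log_growth(f, p=p_win, b=b):
    # per-bet growth rate of log wealth when betting fraction f
    return p * np.log(1 + f * b) + (1 - p) * np.log(1 - f)

for f in (0.10, 0.20, 0.40, 0.60):
    print(f"f={f:.2f}  log growth per bet = {expected_log_growth(f):+.4f}")

rng = np.random.default_rng(1)
def simulate(f, n_bets=300, start=25.0, runs=10000):
    # roughly the 300 bets possible in the 30-minute session, no prize cap
    wins = rng.random((runs, n_bets)) < p_win
    factors = np.where(wins, 1 + f * b, 1 - f)
    return start * factors.prod(axis=1)

for f in (0.20, 0.60):
    w = simulate(f)
    print(f"f={f:.2f}  median final wealth = {np.median(w):10.2f}   near-ruin (<$1) = {np.mean(w < 1):.1%}")
```

Betting 20% grows wealth at about 2% per flip; betting 60% has negative log growth and almost surely ends near zero, which is roughly what the 28% bust rate in the study reflects.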
nibble  betting  investing  ORFE  acm  checklists  levers  probability  algorithms  wiki  reference  atoms  extrema  parsimony  tidbits  decision-theory  decision-making  street-fighting  mental-math  calculation 
august 2017 by nhaliday
Stat 260/CS 294: Bayesian Modeling and Inference
Topics
- Priors (conjugate, noninformative, reference)
- Hierarchical models, spatial models, longitudinal models, dynamic models, survival models
- Testing
- Model choice
- Inference (importance sampling, MCMC, sequential Monte Carlo)
- Nonparametric models (Dirichlet processes, Gaussian processes, neutral-to-the-right processes, completely random measures)
- Decision theory and frequentist perspectives (complete class theorems, consistency, empirical Bayes)
- Experimental design
unit  course  berkeley  expert  michael-jordan  machine-learning  acm  bayesian  probability  stats  lecture-notes  priors-posteriors  markov  monte-carlo  frequentist  latent-variables  decision-theory  expert-experience  confidence  sampling 
july 2017 by nhaliday
William Stanley Jevons - Wikipedia
William Stanley Jevons FRS (/ˈdʒɛvənz/;[2] 1 September 1835 – 13 August 1882) was an English economist and logician.

Irving Fisher described Jevons' book A General Mathematical Theory of Political Economy (1862) as the start of the mathematical method in economics.[3] It made the case that economics as a science concerned with quantities is necessarily mathematical.[4] In so doing, it expounded upon the "final" (marginal) utility theory of value. Jevons' work, along with similar discoveries made by Carl Menger in Vienna (1871) and by Léon Walras in Switzerland (1874), marked the opening of a new period in the history of economic thought. Jevons' contribution to the marginal revolution in economics in the late 19th century established his reputation as a leading political economist and logician of the time.

Jevons broke off his studies of the natural sciences in London in 1854 to work as an assayer in Sydney, where he acquired an interest in political economy. Returning to the UK in 1859, he published General Mathematical Theory of Political Economy in 1862, outlining the marginal utility theory of value, and A Serious Fall in the Value of Gold in 1863. For Jevons, the utility or value to a consumer of an additional unit of a product is inversely related to the number of units of that product he already owns, at least beyond some critical quantity.

It was for The Coal Question (1865), in which he called attention to the gradual exhaustion of the UK's coal supplies, that he received public recognition; in it he put forth what is now known as the Jevons paradox, i.e. that increases in energy production efficiency lead to more, not less, consumption. The most important of his works on logic and scientific methods is his Principles of Science (1874),[5] as well as The Theory of Political Economy (1871) and The State in Relation to Labour (1882). Among his inventions was the logic piano, a mechanical computer.

https://en.wikipedia.org/wiki/Jevons_paradox
In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes the Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises because of increasing demand.[1] The Jevons paradox is perhaps the most widely known paradox in environmental economics.[2] However, governments and environmentalists generally assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising.[3]
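[ed.: A toy constant-elasticity demand model makes the paradox mechanical; the functional form and numbers are my assumption, not from the article. When efficiency e rises, the price per unit of energy service falls, service demand rises, and total resource use scales as e^(ε−1), so consumption rises exactly when the demand elasticity ε exceeds 1.]

```python
# Toy constant-elasticity model: service demand S = A * price_per_service^(-elasticity),
# price per unit of service = resource_price / efficiency, resource use R = S / efficiency.

def resource_use(efficiency, elasticity, A=1.0, resource_price=1.0):
    price_per_service = resource_price / efficiency
    service_demand = A * price_per_service ** (-elasticity)
    return service_demand / efficiency           # resource units actually consumed

for elasticity in (0.5, 1.0, 1.5):
    before = resource_use(1.0, elasticity)
    after = resource_use(2.0, elasticity)        # efficiency doubles
    print(f"elasticity={elasticity}: resource use {before:.2f} -> {after:.2f}")
# elasticity < 1: consumption falls; elasticity > 1: the Jevons paradox, consumption rises.
```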

The Coal Question: http://www.econlib.org/library/YPDBooks/Jevons/jvnCQ.html
people  big-peeps  history  early-modern  britain  economics  growth-econ  ORFE  industrial-revolution  energy-resources  giants  anglosphere  wiki  nihil  civilization  prepping  old-anglo  biophysical-econ  the-world-is-just-atoms  pre-ww2  multi  stylized-facts  efficiency  technology  org:econlib  books  modernity  volo-avolo  values  formal-values  decision-making  decision-theory 
may 2017 by nhaliday
How Transparency Kills Information Aggregation: Theory and Experiment
We investigate the potential of transparency to influence committee decision-making. We present a model in which career concerned committee members receive private information of different type-dependent accuracy, deliberate and vote. We study three levels of transparency under which career concerns are predicted to affect behavior differently, and test the model’s key predictions in a laboratory experiment. The model’s predictions are largely borne out – transparency negatively affects information aggregation at the deliberation and voting stages, leading to sharply different committee error rates than under secrecy. This occurs despite subjects revealing more information under transparency than theory predicts.
study  economics  micro  decision-making  decision-theory  collaboration  coordination  info-econ  info-dynamics  behavioral-econ  field-study  clarity  ethics  civic  integrity  error  unintended-consequences  🎩  org:ngo  madisonian  regularizer  enlightenment-renaissance-restoration-reformation  white-paper  microfoundations  open-closed  composition-decomposition  organizing 
april 2017 by nhaliday
Predicting with confidence: the best machine learning idea you never heard of | Locklin on science
The advantages of conformal prediction are many fold. These ideas assume very little about the thing you are trying to forecast, the tool you’re using to forecast or how the world works, and they still produce a pretty good confidence interval. Even if you’re an unrepentant Bayesian, using some of the machinery of conformal prediction, you can tell when things have gone wrong with your prior. The learners work online, and with some modifications and considerations, with batch learning. One of the nice things about calculating confidence intervals as a part of your learning process is they can actually lower error rates or be used in semi-supervised learning as well. Honestly, I think this is the best bag of tricks since boosting; everyone should know about and use these ideas.

The essential idea is that a “conformity function” exists. Effectively you are constructing a sort of multivariate cumulative distribution function for your machine learning gizmo using the conformity function. Such CDFs exist for classical stuff like ARIMA and linear regression under the correct circumstances; CP brings the idea to machine learning in general, and to models like ARIMA when the standard parametric confidence intervals won’t work. Within the framework, the conformity function, whatever it may be, when used correctly can be guaranteed to give confidence intervals to within a probabilistic tolerance. The original proofs and treatments of conformal prediction, defined for sequences, are extremely computationally inefficient. The conditions can be relaxed in many cases, and the conformity function is in principle arbitrary, though good ones will produce narrower confidence regions. Somewhat confusingly, these good conformity functions are referred to as “efficient” – though they may not be computationally efficient.
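[ed.: A minimal sketch of one common variant of the framework the post describes, split conformal regression, assuming numpy and a toy linear model: the conformity scores are just held-out absolute residuals, and their finite-sample-corrected quantile turns any point predictor into an interval with guaranteed marginal coverage under exchangeability.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.uniform(-3, 3, n)
y = 2.0 * x + 1.0 + rng.standard_normal(n)           # toy data; any model and data would do

train, calib = np.arange(200), np.arange(200, 400)
coef = np.polyfit(x[train], y[train], deg=1)         # the underlying "gizmo" (here just a line)

def predict(z):
    return np.polyval(coef, z)

alpha = 0.1                                          # target 90% coverage
scores = np.abs(y[calib] - predict(x[calib]))        # conformity scores = held-out absolute residuals
k = int(np.ceil((len(calib) + 1) * (1 - alpha)))     # finite-sample-corrected quantile rank
qhat = np.sort(scores)[k - 1]

x_new = rng.uniform(-3, 3, 5000)
y_new = 2.0 * x_new + 1.0 + rng.standard_normal(5000)
coverage = np.mean(np.abs(y_new - predict(x_new)) <= qhat)
print(f"interval: prediction +/- {qhat:.2f}, empirical coverage {coverage:.3f}")
```

Better conformity scores (the "efficient" ones) shrink qhat; the coverage guarantee holds regardless.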
techtariat  acmtariat  acm  machine-learning  bayesian  stats  exposition  research  online-learning  probability  decision-theory  frontier  unsupervised  confidence 
february 2017 by nhaliday
Heritability of ultimatum game responder behavior
Employing standard structural equation modeling techniques, we estimate that >40% of the variation in subjects' rejection behavior is explained by additive genetic effects. Our estimates also suggest a very modest role for common environment as a source of phenotypic variation.
study  biodet  org:nat  psychology  social-psych  behavioral-econ  variance-components  decision-theory  twin-study  europe  nordic  trust  GT-101  objective-measure  zero-positive-sum  justice  behavioral-gen  cooperate-defect  microfoundations 
february 2017 by nhaliday
Hyperbolic discounting - Wikipedia
Individuals using hyperbolic discounting reveal a strong tendency to make choices that are inconsistent over time – they make choices today that their future self would prefer not to have made, despite using the same reasoning. This dynamic inconsistency happens because the value of future rewards is much lower under hyperbolic discounting than under exponential discounting.
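[ed.: A small sketch of the inconsistency, with illustrative parameter values that are not from the article: under hyperbolic discounting V = A/(1 + kD), a chooser prefers $100 today over $110 in a week, but prefers the $110 once both options are pushed 30 days into the future. An exponential discounter can never reverse, since delaying both options multiplies both values by the same factor.]

```python
# Two discounting rules for a reward of size A available D days from now.
def hyperbolic(A, D, k=0.02):
    return A / (1 + k * D)

def exponential(A, D, delta=0.98):
    return A * delta ** D

# Choice: $100 sooner vs $110 seven days later, with and without a 30-day front-end delay.
for delay in (0, 30):
    h_small, h_big = hyperbolic(100, delay), hyperbolic(110, delay + 7)
    e_small, e_big = exponential(100, delay), exponential(110, delay + 7)
    print(f"front-end delay {delay:2d}d: hyperbolic prefers "
          f"{'$100 sooner' if h_small > h_big else '$110 later'}, "
          f"exponential prefers {'$100 sooner' if e_small > e_big else '$110 later'}")
# Hyperbolic: $100 today but $110 when both are far away -> preference reversal.
# Exponential: multiplying both values by delta**30 cannot change the ranking -> consistent.
```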
psychology  cog-psych  behavioral-econ  values  time-preference  wiki  reference  concept  models  distribution  time  uncertainty  decision-theory  decision-making  sequential  stamina  neurons  akrasia  contradiction  self-control  patience  article  formal-values  microfoundations  constraint-satisfaction  additive  long-short-run 
january 2017 by nhaliday
Bestiary of Behavioral Economics/Trust Game - Wikibooks, open books for an open world
In the trust game, like the ultimatum game and the dictator game, there are two participants that are anonymously paired. Both of these individuals are given some quantity of money. The first individual, or player, is told that he must send some amount of his money to an anonymous second player, though the amount sent may be zero. The first player is also informed that whatever he sends will be tripled by the experimenter. So, when the first player chooses a value, the experimenter will take it, triple it, and give that money to the second player. The second player is then told to make a similar choice – give some amount of the now-tripled money back to the first player, even if that amount is zero.

Even with perfect information about the mechanics of the game, the first player option to send nothing (and thus the second player option to send nothing back) is the Nash equilibrium for the game.

In the original Berg et al. experiment, thirty out of thirty-two game trials resulted in a violation of the results predicted by standard economic theory. In these thirty cases, first players sent money that averaged slightly over fifty percent of their original endowment.
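[ed.: The Nash-equilibrium claim is just backward induction with money-maximizing players; a tiny sketch with amounts discretized to whole dollars and a $10 endowment (my choices, since the entry doesn't fix the stakes):]

```python
# Backward induction for the one-shot trust game with purely self-interested players.
ENDOWMENT, MULTIPLIER = 10, 3

def best_response_return(received):
    # Player 2 keeps whatever is not returned, so returning 0 maximizes player 2's payoff.
    return max(range(received + 1), key=lambda r: received - r)   # = 0

def best_send():
    def payoff_1(send):
        returned = best_response_return(MULTIPLIER * send)
        return ENDOWMENT - send + returned
    return max(range(ENDOWMENT + 1), key=payoff_1)                # anticipating zero back, send 0

send = best_send()
print("subgame-perfect outcome: send", send, ", return", best_response_return(MULTIPLIER * send))
```

The Berg et al. subjects sending half their endowment anyway is exactly the violation of this prediction that the entry describes.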

Heritability of cooperative behavior in the trust game: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2268795/
- trust defined by the standard A->B->A trust game
- smallish h^2, small but nonzero shared environment, primarily non-shared environment (~70%)

The results of our mixed-effects Bayesian ACE analysis suggest that variation in how subjects play the trust game is partially accounted for by genetic differences (Tables 2 and 3 and Fig. 2). In the ACE model of trust, the heritability estimate is 20% (C.I. 3–38%) in the Swedish experiment and 10% (C.I. 4–21%) in the U.S. experiment. The ACE model of trust also demonstrates that environmental variation plays a role. In particular, unshared environmental variation is a much more significant source of phenotypic variation than genetic variation (e2 = 68% vs. c2 = 12% in Sweden and e2 = 82% vs. c2 = 8% in the U.S.; P < 0.0001 in both samples). In the ACE model of trustworthiness, heritability (h2) generates 18% (C.I. 8–30%) of the variance in the Swedish experiment and 17% (C.I. 5–32%) in the U.S. experiment. Once again, environmental differences play a role (e2 = 66% vs. c2 = 17% in Sweden and e2 = 71% vs. c2 = 12% in the U.S.; P < 0.0001 in both samples).

Trust and Gender: An Examination of Behavior and Beliefs in the Investment Game: https://www.researchgate.net/publication/222329553_Trust_and_Gender_An_Examination_of_Behavior_and_Beliefs_in_the_Investment_Game
How does gender influence trust, the likelihood of being trusted and the level of trustworthiness? We compare choices by men and women in the Investment Game and use questionnaire data to try to understand the motivations for the behavioral differences. We find that men trust more than women, and women are more trustworthy than men. The relationship between expected return and trusting behavior is stronger among men than women, suggesting that men view the interaction more strategically than women. Women felt more obligated both to trust and reciprocate, but the impact of obligation on behavior varies.

Genetic Influences Are Virtually Absent for Trust: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0093880
trust defined by poll

Over the past decades, numerous twin studies have revealed moderate to high heritability estimates for individual differences in a wide range of human traits, including cognitive ability, psychiatric disorders, and personality traits. Even factors that are generally believed to be environmental in nature have been shown to be under genetic control, albeit modest. Is such heritability also present in _social traits that are conceptualized as causes and consequences of social interactions_ or in other ways strongly shaped by behavior of other people? Here we examine a population-based sample of 1,012 twins and relatives. We show that the genetic influence on generalized trust in other people (trust-in-others: h2 = 5%, ns), and beliefs regarding other people’s trust in the self (trust-in-self: h2 = 13%, ns), is virtually absent. As test-retest reliability for both scales were found to be moderate or high (r = .76 and r = .53, respectively) in an independent sample, we conclude that all variance in trust is likely to be accounted for by non-shared environmental influences.

Dutch sample

Generalized Trust: Four Lessons From Genetics and Culture: http://journals.sagepub.com/doi/abs/10.1177/0963721414552473
We share four basic lessons on trust: (a) Generalized trust is more a matter of culture than genetics; (b) trust is deeply rooted in social interaction experiences (that go beyond childhood), networks, and media; (c) people have too little trust in other people in general; and (d) it is adaptive to regulate a “healthy dose” of generalized trust.

Trust is heritable, whereas distrust is not: http://www.pnas.org/content/early/2017/06/13/1617132114
Notably, although both trust and distrust are strongly influenced by the individual’s unique environment, interestingly, trust shows significant genetic influences, whereas distrust does not. Rather, distrust appears to be primarily socialized, including influences within the family.

[ed.: All this is consistent with my intuition that moral behavior is more subject to cultural/"free will"-type influences.]
models  economics  behavioral-econ  decision-theory  wiki  reference  classic  minimum-viable  game-theory  decision-making  trust  GT-101  putnam-like  justice  social-capital  cooperate-defect  microfoundations  multi  study  psychology  social-psych  regularizer  environmental-effects  coordination  variance-components  europe  nordic  usa  🌞  🎩  anglo  biodet  objective-measure  sociology  behavioral-gen  poll  self-report  null-result  comparison  org:nat  chart  iteration-recursion  homo-hetero  intricacy 
december 2016 by nhaliday
Mandelbrot (and Hudson’s) The (mis)Behaviour of Markets: A Fractal View of Risk, Ruin, and Reward | EVOLVING ECONOMICS
If you have read Nassim Taleb’s The Black Swan you will have come across some of Benoit Mandelbrot’s ideas. However, Mandelbrot and Hudson’s The (mis)Behaviour of Markets: A Fractal View of Risk, Ruin, and Reward offers a much clearer critique of the underpinnings of modern financial theory (there are many parts of The Black Swan where I’m still not sure I understand what Taleb is saying). Mandelbrot describes and pulls apart the contributions of Markowitz, Sharpe, Black, Scholes and friends in a way likely understandable to the intelligent lay reader. I expect that might flow from science journalist Richard Hudson’s involvement in writing the book.

- interesting parable about lakes and markets (but power laws aren't memoryless...?)
- yeah I think that's completely wrong actually. the important property of power laws is the lack of finite higher-order moments.

based off http://www.iima.ac.in/~jrvarma/blog/index.cgi/2008/12/21/ I think he really did mean a power law (x = 100/sqrt(r) => pdf is p(x) ~ |dr/dx| = 2e4/x^3)

edit: ah I get it now, for X ~ p(x) = 2/x^3 on [1,inf), we have E[X|X > k] = 2k, so not memoryless, but rather subject to a "slippery slope"
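[ed.: A quick Monte Carlo sanity check of that conditional expectation, assuming numpy: since the CDF is F(x) = 1 − 1/x², we can sample X = 1/√U for uniform U, and the conditional mean of the tail indeed tracks 2k, unlike the memoryless exponential whose expected overshoot beyond k is a constant.]

```python
import numpy as np

# Density p(x) = 2/x^3 on [1, inf) has CDF F(x) = 1 - 1/x^2, so X = 1/sqrt(U), U ~ Uniform(0,1).
rng = np.random.default_rng(0)
x = 1.0 / np.sqrt(rng.random(5_000_000))

for k in (2, 5, 10):
    tail = x[x > k]
    print(f"power law:   E[X | X > {k:2d}] ~ {tail.mean():6.2f}   (prediction 2k = {2 * k})")

# For a memoryless exponential the expected overshoot past k is a constant 1/lambda,
# not something that grows with k -- that is the contrast behind the "slippery slope" remark.
e = rng.exponential(scale=1.0, size=5_000_000)
for k in (2, 5):
    print(f"exponential: E[X | X > {k:2d}] ~ {e[e > k].mean():6.2f}   (prediction k + 1 = {k + 1})")
```

(And E[X²] diverges for this density, which is the "no finite higher-order moments" point above.)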
books  summary  finance  map-territory  tetlock  review  econotariat  distribution  parable  blowhards  multi  risk  decision-theory  tails  meta:prediction  complex-systems  broad-econ  power-law 
november 2016 by nhaliday
Risk Arbitrage | Ordinary Ideas
People have different risk profiles, and different beliefs about the future. But it seems to me like these differences should probably get washed out in markets, so that as a society we pursue investments if and only if they have good returns using some particular beliefs (call them the market’s beliefs) and with respect to some particular risk profile (call it the market’s risk profile).

As it turns out, if we idealize the world hard enough these two notions collapse, yielding a single probability distribution P which has the following property: on the margins, every individual should make an investment if and only if it has a positive expected value with respect to P. This probability distribution tends to be somewhat pessimistic: because people care about wealth more in worlds where wealth is scarce (being risk averse), events like a complete market collapse receive higher probability under P than under the “real” probability distribution over possible futures.
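[ed.: The pessimism claim falls out of the standard state-price construction; a tiny sketch with made-up numbers (log utility, three consumption states — none of the specifics are from the post): the market distribution reweights the real probabilities by marginal utility, so scarce-wealth states get inflated.]

```python
import numpy as np

# Illustrative numbers: three aggregate-consumption states.
p_real = np.array([0.85, 0.12, 0.03])         # "real" probabilities: normal, recession, collapse
consumption = np.array([1.00, 0.70, 0.30])    # aggregate wealth/consumption in each state

marginal_utility = 1.0 / consumption          # u(c) = log(c)  =>  u'(c) = 1/c
state_prices = p_real * marginal_utility      # Arrow-Debreu state prices, up to a constant
p_market = state_prices / state_prices.sum()  # the market distribution P from the post

for name, pr, pm in zip(["normal", "recession", "collapse"], p_real, p_market):
    print(f"{name:10s} real = {pr:.3f}   market = {pm:.3f}")
# The collapse state gets roughly 3x its real probability: the market distribution is
# "pessimistic" because a marginal dollar is worth most exactly when wealth is scarce.
```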
insight  thinking  hanson  rationality  explanation  finance  🤖  alt-inst  spock  confusion  prediction-markets  markets  ratty  decision-theory  clever-rats  pre-2013  acmtariat  outcome-risk  info-econ  info-dynamics 
september 2016 by nhaliday
Shut Up And Guess - Less Wrong
At what confidence level do you guess? At what confidence level do you answer "don't know"?

I took several of these tests last month, and the first thing I did was some quick mental calculations. If I have zero knowledge of a question, my expected gain from answering is 50% probability of earning one point and 50% probability of losing one half point. Therefore, my expected gain from answering a question is .5(1)-.5(.5)= +.25 points. Compare this to an expected gain of zero from not answering the question at all. Therefore, I ought to guess on every question, even if I have zero knowledge. If I have some inkling, well, that's even better.
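[ed.: The same arithmetic, generalized: under the +1 / −0.5 scoring, answering beats abstaining whenever your subjective probability of being right exceeds 1/3, so even the 50/50 blind guess assumed above nets +0.25.]

```python
# Scoring rule from the test: +1 for a right answer, -0.5 for a wrong one, 0 for "don't know".
def expected_gain(p_correct, reward=1.0, penalty=0.5):
    return p_correct * reward - (1 - p_correct) * penalty

print(expected_gain(0.5))                  # blind guess on a 50/50 question: +0.25, as in the post
break_even = 0.5 / (1.0 + 0.5)             # solve p*1 - (1-p)*0.5 = 0  =>  p = 1/3
print("answer whenever your subjective probability of being right exceeds", break_even)
```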

You look disappointed. This isn't a very exciting application of arcane Less Wrong knowledge. Anyone with basic math skills should be able to calculate that out, right?

I attend a pretty good university, and I'm in a postgraduate class where most of us have at least a bachelor's degree in a hard science, and a few have master's degrees. And yet, talking to my classmates in the cafeteria after the first test was finished, I started to realize I was the only person in the class who hadn't answered "don't know" to any questions.

even more interesting stories in the comments
street-fighting  lesswrong  yvain  essay  rationality  regularizer  len:short  ratty  stories  higher-ed  education  decision-theory  frontier  thinking  spock  biases  pre-2013  low-hanging  decision-making  mental-math  bounded-cognition  nitty-gritty  paying-rent  info-dynamics  analytical-holistic  quantitative-qualitative 
september 2016 by nhaliday
Information Processing: Bounded cognition
Many people lack standard cognitive tools useful for understanding the world around them. Perhaps the most egregious case: probability and statistics, which are central to understanding health, economics, risk, crime, society, evolution, global warming, etc. Very few people have any facility for calculating risk, visualizing a distribution, understanding the difference between the average, the median, variance, etc.

Risk, Uncertainty, and Heuristics: http://infoproc.blogspot.com/2018/03/risk-uncertainty-and-heuristics.html
Risk = space of outcomes and probabilities are known. Uncertainty = probabilities not known, and even space of possibilities may not be known. Heuristic rules are contrasted with algorithms like maximization of expected utility.

How do smart people make smart decisions? | Gerd Gigerenzer

Helping Doctors and Patients Make Sense of Health Statistics: http://www.ema.europa.eu/docs/en_GB/document_library/Presentation/2014/12/WC500178514.pdf
street-fighting  thinking  stats  rationality  hsu  metabuch  models  biases  distribution  pre-2013  scitariat  intelligence  neurons  conceptual-vocab  map-territory  clarity  meta:prediction  nibble  mental-math  bounded-cognition  nitty-gritty  s:*  info-dynamics  quantitative-qualitative  chart  tricki  pdf  white-paper  multi  outcome-risk  uncertainty  heuristic  study  medicine  meta:medicine  decision-making  decision-theory 
july 2016 by nhaliday

