nhaliday + additive   22

Altruism in a volatile world | Nature
The evolution of altruism—costly self-sacrifice in the service of others—has puzzled biologists [1] since The Origin of Species. For half a century, attempts to understand altruism have developed around the concept that altruists may help relatives to have extra offspring in order to spread shared genes [2]. This theory—known as inclusive fitness—is founded on a simple inequality termed Hamilton's rule [2]. However, explanations of altruism have typically not considered the stochasticity of natural environments, which will not necessarily favour genotypes that produce the greatest average reproductive success [3,4]. Moreover, empirical data across many taxa reveal associations between altruism and environmental stochasticity [5-8], a pattern not predicted by standard interpretations of Hamilton's rule. Here we derive Hamilton's rule with explicit stochasticity, leading to new predictions about the evolution of altruism. We show that altruists can increase the long-term success of their genotype by reducing the temporal variability in the number of offspring produced by their relatives. Consequently, costly altruism can evolve even if it has a net negative effect on the average reproductive success of related recipients. The selective pressure on volatility-suppressing altruism is proportional to the coefficient of variation in population fitness, and is therefore diminished by its own success. Our results formalize the hitherto elusive link between bet-hedging and altruism [4,9-11], and reveal missing fitness effects in the evolution of animal societies.
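The paper's formal model isn't reproduced here, but the bet-hedging intuition it formalizes can be sketched as a toy simulation (all offspring numbers below are illustrative, not from the paper): a lineage that trades a little arithmetic-mean fitness for lower variance can have the higher long-run (geometric-mean) growth rate, because multiplicative growth compounds log fitness.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000  # generations of multiplicative growth

# Hypothetical per-generation offspring numbers (illustrative values):
# volatile lineage: higher arithmetic mean (2.0), but 1 or 3 offspring
volatile = rng.choice([1.0, 3.0], size=T)
# variance-suppressed lineage: lower arithmetic mean (1.9), steadier output
steady = rng.choice([1.6, 2.2], size=T)

# Long-run growth rate is the mean of log fitness (log of geometric mean)
g_volatile = np.mean(np.log(volatile))  # ≈ log(sqrt(1*3))    ≈ 0.549
g_steady = np.mean(np.log(steady))      # ≈ log(sqrt(1.6*2.2)) ≈ 0.629
print(g_volatile, g_steady)  # the steadier lineage wins in the long run
```

The steadier genotype loses on average reproductive success yet outgrows the volatile one, which is the sense in which mean-reducing, variance-suppressing altruism can still pay.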
study  bio  evolution  altruism  kinship  stylized-facts  models  intricacy  random  signal-noise  time  order-disorder  org:nat  EGT  cooperate-defect  population-genetics  moments  expectancy  multiplicative  additive 
march 2018 by nhaliday
Lecture 14: When's that meteor arriving
- Meteors as a random process
- Limiting approximations
- Derivation of the Exponential distribution
- Derivation of the Poisson distribution
- A "Poisson process"
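The lecture's limiting argument can be sketched numerically (the rate and sample sizes are illustrative): chop an hour into many tiny slots, each with a small independent arrival probability, and the binomial arrival counts converge to a Poisson distribution, while the gaps between arrivals are exponential.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
lam = 3.0  # hypothetical mean arrival rate (meteors per hour)

# Limiting approximation: n tiny slots per hour, each with arrival
# probability lam/n; Binomial(n, lam/n) -> Poisson(lam) as n grows
n = 100_000
counts = rng.binomial(n, lam / n, size=50_000)
empirical_p2 = np.mean(counts == 2)
poisson_p2 = exp(-lam) * lam**2 / factorial(2)  # P(X = 2) ≈ 0.224

# Inter-arrival gaps of the same process are Exponential(lam),
# with mean waiting time 1/lam hours
gaps = rng.exponential(1 / lam, size=50_000)
print(empirical_p2, poisson_p2, gaps.mean())
```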
nibble  org:junk  org:edu  exposition  lecture-notes  physics  mechanics  space  earth  probability  stats  distribution  stochastic-processes  closure  additive  limits  approximation  tidbits  acm  binomial  multiplicative 
september 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
Structure theorem for finitely generated modules over a principal ideal domain - Wikipedia
- finitely generated modules over a PID are isomorphic to a direct sum of quotients by a decreasing sequence of proper ideals
- never really understood the proof of this in Ma5b
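For reference, the invariant-factor form of the statement from the article: a finitely generated module M over a PID R decomposes as

```latex
M \;\cong\; R^{f} \,\oplus\, R/(d_1) \oplus R/(d_2) \oplus \cdots \oplus R/(d_k),
\qquad d_1 \mid d_2 \mid \cdots \mid d_k,
```

with the d_i nonzero nonunits, so the ideals (d_1) ⊇ (d_2) ⊇ ⋯ ⊇ (d_k) form the decreasing sequence of proper ideals; for R = ℤ this specializes to the classification of finitely generated abelian groups.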
math  algebra  characterization  levers  math.AC  wiki  reference  nibble  proofs  additive  arrows 
february 2017 by nhaliday
Why does 'everything look correlated on a log-log scale'? - Quora
A correlation on a log-log scale is meant to suggest that the data follow a power-law relationship of the form y ∝ x^(−n).

A low R² is supposed to suggest that the data either actually follow some other distribution, like y ∝ e^(−x), or are simply random noise. The problem is that log-log correlation is a necessary but not sufficient condition for a power-law relationship. While ruling out random noise is fairly easy, ruling out an alternate functional form is much harder: you can reject a power-law hypothesis with a log-log plot, but you cannot prove one with it. As Aaron Brown's answer points out, a lot of stuff that looks like it has a power-law relationship does not actually follow one in reality. In particular, an exponential or log-normal relationship might give similar results over most of the range but will diverge strongly at the tail end.[1] This difference can be difficult to detect if limited data is collected at the tail ends and deviations look like noise.

An example of a log-normal distribution plotted on linear and log-log scales.[2] Note the appearance of a straight line on the right tail that diverges strongly on the left tail. Using a power-law relationship in this region will cause serious errors.
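The failure mode described above is easy to reproduce (sample size and parameters are illustrative): fit a straight line to the empirical log-log complementary CDF of log-normal data, and the fit looks convincing even though no power law is present.

```python
import numpy as np

rng = np.random.default_rng(2)
# Log-normal sample: log(x) ~ Normal(0, 2), no power law anywhere
x = np.sort(rng.lognormal(mean=0.0, sigma=2.0, size=5_000))

# Empirical complementary CDF, the usual power-law diagnostic plot
ccdf = 1.0 - np.arange(len(x)) / len(x)   # values in (0, 1]
lx, ly = np.log(x), np.log(ccdf)

# Least-squares line in log-log space: the fit looks respectable
# even though the data are log-normal, not a power law
slope, intercept = np.polyfit(lx, ly, 1)
resid = ly - (slope * lx + intercept)
r2 = 1 - resid.var() / ly.var()
print(round(slope, 2), round(r2, 2))  # negative slope, high-ish R^2
```

The curvature that distinguishes the log-normal only becomes obvious deep in the tail, exactly where data are scarce, which is the point made above.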
q-n-a  qra  data-science  correlation  regression  magnitude  dataviz  street-fighting  gotchas  nibble  plots  multiplicative  additive  power-law 
february 2017 by nhaliday
Performance Trends in AI | Otium
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?

In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.

In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.

In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.

In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.

In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.

...

The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

http://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/
ratty  core-rats  summary  prediction  trends  analysis  spock  ai  deep-learning  state-of-art  🤖  deepgoog  games  nlp  computer-vision  nibble  reinforcement  model-class  faq  org:bleg  shift  chart  technology  language  audio  accuracy  speaking  foreign-lang  definite-planning  china  asia  microsoft  google  ideas  article  speedometer  whiggish-hegelian  yvain  ssc  smoothness  data  hsu  scitariat  genetics  iq  enhancement  genetic-load  neuro  neuro-nitgrit  brain-scan  time-series  multiplicative  iteration-recursion  additive  multi  arrows 
january 2017 by nhaliday
Hyperbolic discounting - Wikipedia
Individuals using hyperbolic discounting reveal a strong tendency to make choices that are inconsistent over time – they make choices today that their future self would prefer not to have made, despite using the same reasoning. This dynamic inconsistency happens because the value of future rewards is much lower under hyperbolic discounting than under exponential discounting.
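A minimal numerical sketch of this preference reversal (the discount parameters k and r and the reward amounts are illustrative): a hyperbolic discounter prefers the larger-later reward from far away but flips to the smaller-sooner one as it approaches, while an exponential discounter never flips.

```python
# Hyperbolic value: V = A / (1 + k * delay)
def hyperbolic(amount, delay, k=0.5):
    return amount / (1 + k * delay)

# Exponential value: V = A * (1 - r)^delay (constant per-day discount)
def exponential(amount, delay, r=0.1):
    return amount * (1 - r) ** delay

# Choice: $50 at day 5 vs $100 at day 10.
# Viewed from day 0, the hyperbolic agent prefers the larger-later reward...
far_small = hyperbolic(50, 5)    # ≈ 14.3
far_large = hyperbolic(100, 10)  # ≈ 16.7

# ...but at day 5 (delays now 0 and 5) the preference reverses:
near_small = hyperbolic(50, 0)   # = 50.0
near_large = hyperbolic(100, 5)  # ≈ 28.6

# The exponential agent's ratio of values depends only on the 5-day gap,
# not on when it is evaluated, so no reversal is possible.
print(far_large > far_small, near_small > near_large)  # True True
```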
psychology  cog-psych  behavioral-econ  values  time-preference  wiki  reference  concept  models  distribution  time  uncertainty  decision-theory  decision-making  sequential  stamina  neurons  akrasia  contradiction  self-control  patience  article  formal-values  microfoundations  constraint-satisfaction  additive  long-short-run 
january 2017 by nhaliday
Overcoming Bias : Lognormal Jobs
could be the case that exponential tech improvement -> linear job replacement, as long as the distribution of jobs across automatability is log-normal (I don't entirely follow the argument)

Paul Christiano has an objection (to the premise, not the argument) in the comments
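A sketch of the premise as I read it (not Hanson's model; all parameters invented): if log job difficulty is normally distributed and machine capability grows exponentially, then log capability grows linearly, so the automated fraction traces a normal CDF over time — roughly linear through the bulk of the jobs.

```python
import numpy as np
from math import erf, sqrt

# Illustrative parameters: log-difficulty of jobs ~ Normal(mu, sigma),
# machine capability = e^(growth * t), i.e. exponential improvement.
mu, sigma = 20.0, 4.0

def frac_automated(t, growth=1.0):
    logcap = growth * t                   # log capability: linear in t
    z = (logcap - mu) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

fracs = [frac_automated(t) for t in range(14, 27)]
diffs = np.diff(fracs)  # per-period increments: near-constant mid-range,
                        # i.e. roughly linear job replacement
print([round(f, 3) for f in fracs])
```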
hanson  thinking  street-fighting  futurism  automation  labor  economics  ai  prediction  🎩  gray-econ  regularizer  contrarianism  c:*  models  distribution  marginal  2016  meta:prediction  discussion  clever-rats  ratty  speedometer  ideas  neuro  additive  multiplicative  magnitude  iteration-recursion 
november 2016 by nhaliday
