nhaliday + confusion   54

Solidarity Forever | West Hunter
If you had a gene with a conspicuous effect (like a green beard) that at the same time caused the carrier to favor other individuals with a green beard, you could get a very powerful kind of genetic altruism, one not limited to close relatives. A very strong effect, one that caused you to act as if other carriers were just as valuable as you are (as if other carriers were your identical twin) could exist, but weaker effects (green fuzz) could also be favored by selection – if you were just somewhat more likely to cooperate with others bearing the mark. That could be enough to drive strong selection for the gene, and might not even be terribly noticeable.

This might be especially powerful in humans: we have so very many ways of cooperating or tripping each other up. Now and then you get partial alignment of interests, and remarkable things happen. If we could all just get along, we could conquer the world and make everyone else our slaves and playthings!

...

Shortly after the Green Beards became influential, you’d see a lot of people wearing fake green beards, which would cut down on the advantage and possibly turn green beards into easy marks, chumps doomed to failure. It would work best if the identifying mark was hard to copy – difficult today, but in the past some things, eye color for example, would have been hard to copy.

This all gets complicated, since it’s not always easy to know what someone else’s best interest is – let alone that of the entire Greenbeard race. For that matter, it’s not always that easy to know what your own best interest is.

I’m for it, of course: trying to fight off such a mutant takeover would make life more interesting.

https://westhunt.wordpress.com/2015/03/11/solidarity-forever/#comment-67414
There’s no evidence, that I know of, of anything like a strong green-beard effect in humans. If there were one, it would have dramatic consequences, which we haven’t observed, so I doubt one exists. Although we could always create one, for laughs.

Any gene that selected for extended kin altruism would not flourish – would not increase in frequency – because the expensive altruistic effort would not be focused on people who were more likely than average (in that population!) to carry the relevant allele. Which means that every time that expensive altruism happened, the average allele frequency in that population would go down, not up: this is not the route to success. If you can’t understand, that’s your problem.
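The allele-frequency point can be checked with a toy model (a deterministic one-locus sketch with hypothetical numbers, not from the post): carriers pay a fitness cost to help randomly chosen members of the population, so carriers and non-carriers receive the benefit at the same expected rate and only the cost moves frequencies.

```python
def next_gen_freq(p, b=0.5, c=0.1):
    """One generation of a one-locus toy model. Carriers (frequency p)
    pay fitness cost c to help a random member of the population, who
    gains benefit b. Recipients are random with respect to genotype,
    so everyone receives the benefit at the same expected rate b*p and
    only the cost differentiates the genotypes."""
    carrier = 1.0 - c + b * p
    noncarrier = 1.0 + b * p
    mean = p * carrier + (1.0 - p) * noncarrier
    return p * carrier / mean

p = 0.3
for _ in range(50):
    p = next_gen_freq(p)
print(p)  # below 0.3: undirected altruism pays its cost and gets nothing back
```

Aim the same benefit preferentially at carriers (recipients more related than the population average) and the sign of the selection differential can flip – which is exactly the condition the comment says extended kin altruism fails to meet.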

Frank Salter is entirely wrong. There is no such thing as “genetic interest”, in the sense he’s talking about, not one that makes people feel the way he’d like them to. Sheesh, if there were, he wouldn’t have to argue about it, any more than you have to argue parents into caring about their children. Now if he said that having more Swedes in the world would result in something he liked, that could well be true: but there’s no instinct that says everyone, even most Swedes, has to favor that course.

You have to do the math: when you do, this idea doesn’t work. And that’s the end of this conversation.

https://westhunt.wordpress.com/2015/03/11/solidarity-forever/#comment-67424
That lady’s mind ain’t right.

Speaking of which, one has to wonder which is the greater threat – the increasing dumb fraction of this country, or the increasing crazy fraction.
west-hunter  scitariat  discussion  speculation  ideas  sapiens  genetics  population-genetics  group-selection  cohesion  EGT  CRISPR  altruism  🌞  kinship  coordination  organizing  gedanken  biotech  enhancement  cooperate-defect  axelrod  deep-materialism  new-religion  interests  tribalism  us-them  multi  poast  ethnocentrism  race  europe  nordic  instinct  prudence  iq  volo-avolo  confusion  cybernetics  sociality  alignment 
may 2017 by nhaliday
probability - Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? - Cross Validated
The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in 100p% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability p given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself.
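The frequentist reading – coverage as a property of the repeated procedure, not of any one realized interval – can be illustrated with a quick simulation (hypothetical numbers; normal-approximation intervals):

```python
import random
import statistics

def ci95(sample):
    """Normal-approximation 95% confidence interval for the mean."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

random.seed(0)
true_mu = 5.0
trials = 2000
hits = 0
for _ in range(trials):
    # One "instance of the experiment": draw a fresh sample, build an interval.
    sample = [random.gauss(true_mu, 2.0) for _ in range(50)]
    lo, hi = ci95(sample)
    hits += lo <= true_mu <= hi

print(hits / trials)  # close to 0.95 across repeated experiments
```

Any single interval either contains true_mu or it doesn't; the 95% attaches to the procedure across the 2000 repetitions, which is exactly the distinction the answer draws.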

http://stats.stackexchange.com/questions/139290/a-psychology-journal-banned-p-values-and-confidence-intervals-is-it-indeed-wise

PS. Note that my question is not about the ban itself; it is about the suggested approach. I am not asking about frequentist vs. Bayesian inference either. The Editorial is pretty negative about Bayesian methods too; so it is essentially about using statistics vs. not using statistics at all.

wut

http://stats.stackexchange.com/questions/6966/why-continue-to-teach-and-use-hypothesis-testing-when-confidence-intervals-are
http://stats.stackexchange.com/questions/2356/are-there-any-examples-where-bayesian-credible-intervals-are-obviously-inferior
http://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval
http://stats.stackexchange.com/questions/6652/what-precisely-is-a-confidence-interval
http://stats.stackexchange.com/questions/1164/why-havent-robust-and-resistant-statistics-replaced-classical-techniques/
http://stats.stackexchange.com/questions/16312/what-is-the-difference-between-confidence-intervals-and-hypothesis-testing
http://stats.stackexchange.com/questions/31679/what-is-the-connection-between-credible-regions-and-bayesian-hypothesis-tests
http://stats.stackexchange.com/questions/11609/clarification-on-interpreting-confidence-intervals
http://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals
q-n-a  overflow  nibble  stats  data-science  science  methodology  concept  confidence  conceptual-vocab  confusion  explanation  thinking  hypothesis-testing  jargon  multi  meta:science  best-practices  error  discussion  bayesian  frequentist  hmm  publishing  intricacy  wut  comparison  motivation  clarity  examples  robust  metabuch  🔬  info-dynamics  reference  grokkability-clarity 
february 2017 by nhaliday
Difference between off-policy and on-policy learning - Cross Validated
The reason that Q-learning is off-policy is that it updates its Q-values using the Q-value of the next state s′ and the greedy action a′. In other words, it estimates the return (total discounted future reward) for state-action pairs assuming a greedy policy were followed despite the fact that it's not following a greedy policy.

The reason that SARSA is on-policy is that it updates its Q-values using the Q-value of the next state s′ and the current policy's action a″. It estimates the return for state-action pairs assuming the current policy continues to be followed.

The distinction disappears if the current policy is a greedy policy. However, such an agent would not be good since it never explores.
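The two update rules differ only in the bootstrap target, which a minimal sketch (hypothetical dict-of-dicts Q table, not from the answer) makes concrete:

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy: bootstrap from the greedy action in s', regardless of
    what the behavior policy will actually do there."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy: bootstrap from the action a' the current (e.g.
    epsilon-greedy) policy actually selected in s'."""
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])

# Toy table: two states, two actions.
Q = {0: {'l': 0.0, 'r': 0.0}, 1: {'l': 0.0, 'r': 2.0}}
q_learning_update(Q, s=0, a='l', r=1.0, s_next=1)  # uses max(0.0, 2.0)
```

With a purely greedy behavior policy, a' is the argmax action and the two targets coincide – the vanishing distinction the answer notes, at the cost of never exploring.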
q-n-a  overflow  machine-learning  acm  reinforcement  confusion  jargon  generalization  nibble  definition  greedy  comparison 
february 2017 by nhaliday
What is the relationship between information theory and Coding theory? - Quora
basically:
- finite vs. asymptotic
- combinatorial vs. probabilistic (lotsa overlap there)
- worst-case (Hamming) vs. distributional (Shannon)

Information and coding theory most often appear together in the subject of error correction over noisy channels. Historically, they were born at almost exactly the same time – both Richard Hamming and Claude Shannon were working at Bell Labs when this happened. Information theory tends to lean heavily on tools from probability theory (together with an "asymptotic" way of thinking about the world), while traditional "algebraic" coding theory tends to employ mathematics that are much more finite-length/combinatorial in nature, including linear algebra over Galois fields. The emergence of codes over graphs in the late 1990s and 2000s blurred this distinction, though, as code classes such as low-density parity-check codes employ both asymptotic analysis and random code selection techniques which have counterparts in information theory.

They do not subsume each other. Information theory touches on many other aspects that coding theory does not, and vice-versa. Information theory also touches on compression (lossy & lossless), statistics (e.g. large deviations), modeling (e.g. Minimum Description Length). Coding theory pays a lot of attention to sphere packing and coverings for finite length sequences - information theory addresses these problems (channel & lossy source coding) only in an asymptotic/approximate sense.
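The "worst-case (Hamming)" side can be made concrete with the classic Hamming(7,4) code, which corrects any single bit flip in a 7-bit block – a purely finite, combinatorial guarantee, in contrast to Shannon-style asymptotic statements. A minimal sketch:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword
    (positions 1..7, parity bits at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error; the syndrome is the error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Every one of the 16 codewords survives every one of the 7 possible single-bit flips – a statement about all finite-length error patterns, with no probability model over the channel at all.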
q-n-a  qra  math  acm  tcs  information-theory  coding-theory  big-picture  comparison  confusion  explanation  linear-algebra  polynomials  limits  finiteness  math.CO  hi-order-bits  synthesis  probability  bits  hamming  shannon  intricacy  nibble  s:null  signal-noise 
february 2017 by nhaliday
What is the difference between inference and learning? - Quora
- basically boils down to latent variables vs. (hyper-)parameters
- so computing p(x_h|x_v,θ) vs. computing p(θ|X_v)
- from a completely Bayesian perspective, no real difference
- described in more detail in [Kevin Murphy, 10.4]
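A conjugate coin-flip sketch (hypothetical Beta-Bernoulli example, not from the answer) makes the bullets concrete: "learning" computes the posterior over the parameter, "inference" computes probabilities of unobserved variables given parameters, and integrating the parameter out merges the two.

```python
def posterior_params(heads, tails, a=1, b=1):
    """Learning: p(theta | X_v) for a Bernoulli coin with a Beta(a, b)
    prior is Beta(a + heads, b + tails) by conjugacy."""
    return a + heads, b + tails

def p_next_head(theta):
    """Inference: p(x_h = 1 | theta) for a fixed parameter theta."""
    return theta

def posterior_predictive_head(heads, tails, a=1, b=1):
    """Fully Bayesian view: integrate theta out. The posterior predictive
    is the Beta posterior mean, and the learning/inference split dissolves."""
    a2, b2 = posterior_params(heads, tails, a, b)
    return a2 / (a2 + b2)

print(posterior_params(7, 3))          # (8, 4)
print(posterior_predictive_head(7, 3)) # 8/12
```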
q-n-a  qra  jargon  machine-learning  stats  acm  bayesian  graphical-models  latent-variables  confusion  comparison  nibble 
january 2017 by nhaliday
I don't understand Python's Asyncio | Armin Ronacher's Thoughts and Writings
Man, that thing is complex and it keeps getting more complex. I do not have the mental capacity to casually work with asyncio. It requires constantly keeping one's knowledge up to date with all the language changes, and it has tremendously complicated the language. It's impressive that an ecosystem is evolving around it, but I can't help but get the impression that it will take quite a few more years before it becomes a particularly enjoyable and stable development experience.

What landed in 3.5 (the actual new coroutine objects) is great. In particular, with the changes coming up there is a sensible base that I wish had been in earlier versions. The entire mess of overloading generators to be coroutines was a mistake in my mind. With regards to what's in asyncio, I'm not sure of anything. It's an incredibly complex thing and super messy internally. It's hard to comprehend how it works in all its details: when you can pass a generator, when it has to be a real coroutine, what futures are, what tasks are, how the loop works – and that doesn't even get to the actual IO part.

The worst part is that asyncio is not even particularly fast. David Beazley's live-demo hacked-up asyncio replacement is twice as fast. There is an enormous amount of complexity that's hard to understand and reason about, and then it fails on its main promise. I'm not sure what to think about it, but I know at least that I don't understand asyncio well enough to feel confident about giving people advice on how to structure code for it.
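For reference, the post-3.5 pieces under discussion – native coroutines, tasks, and the loop – in their minimal form (using the asyncio.run entry point added later, in 3.7, which hides the loop management the post complains about):

```python
import asyncio

async def fetch(name, delay):
    # A native (3.5+) coroutine: await yields control back to the event loop.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Tasks wrap coroutines and schedule them concurrently on the running loop.
    tasks = [asyncio.create_task(fetch(n, d))
             for n, d in [("a", 0.01), ("b", 0.02)]]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())  # creates, runs, and closes the loop
print(results)
```

Even this toy involves three distinct concepts (coroutine, task, loop) before any real IO appears, which is roughly the complexity complaint being made.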
python  libraries  review  concurrency  programming  pls  rant  🖥  techtariat  intricacy  design  confusion  performance  critique 
october 2016 by nhaliday
Risk Arbitrage | Ordinary Ideas
People have different risk profiles, and different beliefs about the future. But it seems to me like these differences should probably get washed out in markets, so that as a society we pursue investments if and only if they have good returns using some particular beliefs (call them the market’s beliefs) and with respect to some particular risk profile (call it the market’s risk profile).

As it turns out, if we idealize the world hard enough these two notions collapse, yielding a single probability distribution P which has the following property: on the margins, every individual should make an investment if and only if it has a positive expected value with respect to P. This probability distribution tends to be somewhat pessimistic: because people care about wealth more in worlds where wealth is scarce (being risk averse), events like a complete market collapse receive higher probability under P than under the “real” probability distribution over possible futures.
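The pessimism of P can be seen in a two-state toy example (hypothetical numbers, not from the post): reweighting the "real" probabilities by marginal utility – here log utility, so u'(w) = 1/w – inflates the probability of the scarce-wealth state.

```python
def market_probs(p, w):
    """Risk-neutral-style reweighting: state i gets weight p_i * u'(w_i),
    with log utility so u'(w) = 1/w, then normalize to a distribution."""
    weights = [pi / wi for pi, wi in zip(p, w)]
    z = sum(weights)
    return [wt / z for wt in weights]

real = [0.9, 0.1]        # boom is 9x more likely than collapse
wealth = [100.0, 10.0]   # wealth is scarce in the collapse state
print(market_probs(real, wealth))  # collapse state jumps above 0.5 under P
```

A collapse with real probability 0.1 carries more than half the mass under P, because a marginal dollar is worth ten times as much there – the sense in which P is "somewhat pessimistic".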
insight  thinking  hanson  rationality  explanation  finance  🤖  alt-inst  spock  confusion  prediction-markets  markets  ratty  decision-theory  clever-rats  pre-2013  acmtariat  outcome-risk  info-econ  info-dynamics 
september 2016 by nhaliday
