nhaliday + accuracy   45

Lateralization of brain function - Wikipedia
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, associated primarily with the production and comprehension of speech respectively, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]


Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Distinctly RH paintings, by contrast, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, the LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.


Schizophrenia is a disease of extreme LH emphasis. Since empathy and the ability to notice emotional nuance expressed facially, vocally and bodily are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.


RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.


Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.


We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories, but they continue to exist even as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.


Gödel’s incompleteness theorem shows that, in any consistent formal system rich enough to express arithmetic, there are true statements that cannot be proven within the system. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no single consistent set of axioms from which all other truths can be derived.

Alan Turing’s proof that the halting problem is undecidable shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure (LH), when it comes to … [more]
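Turing's diagonal argument can be sketched as a toy program. Everything here is hypothetical, for illustration only: `halts` stands in for a claimed total halting decider, which the argument shows cannot exist.

```python
def make_contrarian(halts):
    """Given a claimed total halting decider halts(prog) -> bool, build a
    program that does the opposite of whatever the decider predicts about it."""
    def contrarian():
        if halts(contrarian):
            while True:  # decider said "halts" -> run forever
                pass
        # decider said "loops" -> halt immediately
    return contrarian

# Whatever verdict a purported decider gives about its own contrarian
# program, that verdict is wrong. E.g. a decider that always answers
# "loops" yields a contrarian that promptly halts:
c = make_contrarian(lambda prog: False)
c()  # returns immediately, contradicting the decider's "loops" verdict
```

Since the decider is wrong on at least this one input either way, no such total decider exists.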
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Commentary: Predictions and the brain: how musical sounds become rewarding
did i just learn something big?

Prerecorded music has ABSOLUTELY NO SURVIVAL reward. Zero. It does not help with procreation (well, unless you're the one making the music, then you get endless sex) and it does not help with individual survival.

As such, one must seriously self-test (n=1) whether prerecorded music actually holds you back.

If you're reading this and you try no music for 2 weeks and fail, hit me up. I have some mind blowing stuff to show you in how you can control others with
study  psychology  cog-psych  yvain  ssc  models  speculation  music  art  aesthetics  evolution  evopsych  accuracy  meta:prediction  neuro  neuro-nitgrit  neurons  error  roots  intricacy  hmm  wire-guided  machiavelli  dark-arts  predictive-processing  reinforcement 
june 2018 by nhaliday
Gauging the Uncertainty of the Economic Outlook Using Historical Forecasting Errors: The Federal Reserve’s Approach
First, if past performance is a reasonable guide to future accuracy, considerable uncertainty surrounds all macroeconomic projections, including those of FOMC participants. Second, different forecasters have similar accuracy. Third, estimates of uncertainty about future real activity and interest rates are now considerably greater than prior to the financial crisis; in contrast, estimates of inflation accuracy have changed little.
pdf  study  economics  macro  meta:prediction  tetlock  accuracy  org:gov  government  wonkish  moments  🎩  volo-avolo 
september 2017 by nhaliday
All models are wrong - Wikipedia
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
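Box's PV = RT example is easy to check numerically. A minimal sketch (the molar volume 0.0224 m³ at 273.15 K is an illustrative textbook value, and the model is deliberately "wrong but useful"):

```python
# Ideal gas law PV = nRT: not exactly true for any real gas,
# but a remarkably useful parsimonious approximation.
R = 8.314  # J/(mol*K), universal gas constant

def ideal_gas_pressure(n_mol, volume_m3, temp_k):
    """Pressure predicted by PV = nRT, in pascals."""
    return n_mol * R * temp_k / volume_m3

# One mole at 273.15 K in ~0.0224 m^3 should come out near 1 atm:
p = ideal_gas_pressure(1.0, 0.0224, 273.15)
print(p)  # ~1.01e5 Pa, close to 1 atm (101,325 Pa)
```

The prediction is off for any real gas (no intermolecular forces, point molecules), yet it lands within a fraction of a percent of atmospheric pressure here, which is Box's point.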
thinking  metabuch  metameta  map-territory  models  accuracy  wire-guided  truth  philosophy  stats  data-science  methodology  lens  wiki  reference  complex-systems  occam  parsimony  science  nibble  hi-order-bits  info-dynamics  the-trenches  meta:science  physics  fluid  thermo  stat-mech  applicability-prereqs  theory-practice  elegance 
august 2017 by nhaliday
Diophantine approximation - Wikipedia
- rationals perfectly approximated by themselves, badly approximated (eps~1/q) by other rationals
- irrationals well-approximated (eps~1/q^2) by rationals: https://en.wikipedia.org/wiki/Dirichlet%27s_approximation_theorem
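The eps~1/q^2 bound for irrationals can be demonstrated with continued-fraction convergents, which achieve it. A small sketch (the recurrence is the standard convergent computation; six convergents of pi are used as the example):

```python
from fractions import Fraction
import math

def convergents(x, n):
    """First n continued-fraction convergents p/q of a real number x."""
    cs = []
    a = x
    h0, h1, k0, k1 = 0, 1, 1, 0  # standard convergent recurrence seeds
    for _ in range(n):
        ai = math.floor(a)
        h0, h1 = h1, ai * h1 + h0
        k0, k1 = k1, ai * k1 + k0
        cs.append(Fraction(h1, k1))
        frac = a - ai
        if frac == 0:
            break
        a = 1 / frac
    return cs

# Dirichlet: an irrational x has infinitely many p/q with |x - p/q| < 1/q^2.
# Every convergent satisfies this bound:
for c in convergents(math.pi, 6):
    err = abs(math.pi - c.numerator / c.denominator)
    assert err < 1 / c.denominator**2
    print(c, err)
```

The second convergent is the familiar 22/7; the fourth, 355/113, is accurate to about 3e-7 with a denominator of only 113, exactly the kind of "well-approximated" behavior the bound promises.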
nibble  wiki  reference  math  math.NT  approximation  accuracy  levers  pigeonhole-markov  multi  tidbits  discrete  rounding 
august 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study  org:nat  papers  machine-learning  chemistry  measurement  volo-avolo  lower-bounds  analysis  realness  speedometer  nibble  🔬  applications  frontier  state-of-art  no-go  accuracy  interdisciplinary 
july 2017 by nhaliday
How accurate are population forecasts?
2 The Accuracy of Past Projections: https://www.nap.edu/read/9828/chapter/4
good ebook:
Beyond Six Billion: Forecasting the World's Population (2000)
Appendix A: Computer Software Packages for Projecting Population
PDE Population Projections looks most relevant for my interests but it's also *ancient*
This Applied Demography Toolbox is a collection of applied demography computer programs, scripts, spreadsheets, databases and texts.

How Accurate Are the United Nations World Population Projections?: http://pages.stern.nyu.edu/~dbackus/BCH/demography/Keilman_JDR_98.pdf
news  org:lite  prediction  meta:prediction  tetlock  demographics  population  demographic-transition  fertility  islam  world  developing-world  africa  europe  multi  track-record  accuracy  org:ngo  pdf  study  sociology  measurement  volo-avolo  methodology  estimate  data-science  error  wire-guided  priors-posteriors  books  guide  howto  software  tools  recommendations  libraries 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has just to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

This post should have been entitled “Zombies who only think of their next cool IV fix”
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. This I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
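The paper's placebo-law exercise is easy to reproduce in miniature. A hedged sketch, not their exact setup: state-level outcomes follow an AR(1) within state (rho, panel dimensions, and simulation count below are illustrative choices), a placebo "law" is assigned at random, and the DD effect is tested with conventional i.i.d. standard errors:

```python
import numpy as np

rng = np.random.default_rng(0)

def placebo_dd_rejection_rate(n_states=50, n_years=20, rho=0.8, n_sims=500):
    """Share of placebo laws declared significant at the 5% level when the
    DD regression ignores serial correlation within state."""
    rejections = 0
    years = np.arange(n_years)
    for _ in range(n_sims):
        # AR(1) errors within each state -> serially correlated outcome
        y = np.zeros((n_states, n_years))
        eps = rng.standard_normal((n_states, n_years))
        for t in range(1, n_years):
            y[:, t] = rho * y[:, t - 1] + eps[:, t]
        # placebo law: random half of states "treated" from a random mid-year
        treated = rng.permutation(n_states) < n_states // 2
        start = rng.integers(5, n_years - 5)
        d = np.outer(treated, years >= start).astype(float)
        # two-way fixed effects via double demeaning (exact: balanced panel)
        yd = y - y.mean(1, keepdims=True) - y.mean(0) + y.mean()
        dd = d - d.mean(1, keepdims=True) - d.mean(0) + d.mean()
        beta = (dd * yd).sum() / (dd * dd).sum()
        # conventional (i.i.d.) standard error, as criticized in the paper
        resid = yd - beta * dd
        n = n_states * n_years
        se = np.sqrt((resid**2).sum() / (n - 2) / (dd**2).sum())
        if abs(beta / se) > 1.96:
            rejections += 1
    return rejections / n_sims

rate = placebo_dd_rejection_rate()
print(f"false positive rate: {rate:.0%}")  # far above the nominal 5%
```

Even this toy version rejects the (true) null far more often than the nominal 5%, which is the paper's core complaint; clustering standard errors by state is the usual remedy.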

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing just one outlying cluster or observation renders almost half of 2SLS results insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse – fewer than 5 percent – if you add the requirement that the 2SLS CI exclude the OLS estimate.
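For reference, the textbook 2SLS mechanics at issue are simple. A self-contained sketch with a simulated endogenous regressor (all coefficients and the data-generating process are made up for illustration); when the instrument is valid and strong, 2SLS recovers the structural effect that OLS misses:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Structural model: y = 2*x + u, but x is endogenous (correlated with u).
u = rng.standard_normal(n)
z = rng.standard_normal(n)                    # instrument: relevant, exogenous
x = 0.5 * z + 0.8 * u + rng.standard_normal(n)
y = 2.0 * x + u

# OLS is biased upward by the endogeneity (all variables are mean zero,
# so no-intercept regressions suffice here).
beta_ols = (x @ y) / (x @ x)

# 2SLS: first stage regresses x on z, second stage regresses y on fitted x.
x_hat = z * ((z @ x) / (z @ z))
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)

print(beta_ols, beta_2sls)  # OLS overshoots the true 2.0; 2SLS recovers it
```

Young's critique is about what happens when these ideal conditions fail: with a weak or invalid instrument, the 2SLS estimate inherits the OLS bias while reporting misleadingly confident standard errors.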

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking are a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated 
june 2017 by nhaliday
Historicity of the Bible - Wikipedia
Archaeological discoveries since the 19th century are open to interpretation, but broadly speaking they lend support to few of the Old Testament's historical narratives and offer evidence to challenge others.[a][3][4][b][c][d][8]

Pentateuch: http://www.newadvent.org/cathen/11646c.htm
Biblical Chronology: http://www.newadvent.org/cathen/03731a.htm

cf this guy's blog:

and Greg's twitter comment here (on unrelated subject):
Most wars known to have happened in historical times haven't left much of an archaeological record.
history  antiquity  canon  literature  religion  judaism  christianity  theos  letters  realness  article  wiki  reference  accuracy  archaeology  exegesis-hermeneutics  multi  org:theos  protestant-catholic  truth  law  bible 
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.


compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.


If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.


Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after a yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
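The contrast between mean-reverting and drifting parameters can be made concrete with a small simulation (a minimal sketch, not from the post; the AR(1) coefficient and the drift size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
steps = 1000
shocks = rng.normal(0, 1, steps)

# Mean-reverting parameter: each shock decays away, so the long-run
# spread is bounded and extreme values are genuinely rare events.
mr = np.zeros(steps)
for t in range(1, steps):
    mr[t] = 0.9 * mr[t - 1] + shocks[t]

# Basic trend plus random walk: the same small shocks accumulate without
# bound, so large long-run change is the normal outcome, not an anomaly.
rw = np.cumsum(0.05 + shocks)

print(np.std(mr), np.std(rw))  # the drifting series spreads far wider
```

The mean-reverting series stays within a few standard deviations of its mean forever, while the drifting series wanders arbitrarily far given enough time, which is why extreme values warrant extra caution in the first case but are routine in the second.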

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within their horizon. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and those who most expect lumpy change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes, even when they accept that most change is small and accumulates into big change. This seems incoherent to me, like many other near-versus-far incoherences, such as expecting things to be simpler when you are far away from them and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or become more okay with big long term change, seeing it as the legitimate result of the small short term changes you accept.

The point here is that gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily do this, and to forget that they do this. Ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
I guess it's a metric


A less misleading way to think about the correlation R is as follows: given X, Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT scores have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have, on average, +0.4 SD SAT scores.

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)
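The dY = R dX reading checks out in simulation (a minimal sketch; r = 0.4 stands in for the SAT–GPA correlation above, and the sampling setup is mine, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.4
n = 200_000

# Standardized bivariate normal with correlation r.
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Condition on being about +1 SD in either variable: the expected value
# of the other variable is about r, in both directions.
print(y[(x > 0.9) & (x < 1.1)].mean())  # ~0.4
print(x[(y > 0.9) & (y < 1.1)].mean())  # ~0.4
```

The symmetry of the two conditional means is the point: R is the regression slope in standardized units, whichever variable you condition on.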

stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure  dependence-independence 
may 2017 by nhaliday
Embryo editing for intelligence - Gwern.net
My hunch is CRISPR/Cas9 will not play a big role in intelligence enhancement. You'd have to edit so many loci b/c of small effect sizes, increasing errors. Embryo selection is much more promising. Peoples with high avg genetic values, of course, have an in-built advantage there.
ratty  gwern  enhancement  scaling-up  genetics  genomics  iq  🌞  CRISPR  futurism  biodet  new-religion  nibble  intervention  🔬  behavioral-gen  faq  chart  ideas  article  multi  twitter  social  commentary  gnon  unaffiliated  prediction  accuracy  technology  QTL  biotech  selection  comparison  scale  magnitude  hard-tech  skunkworks  speedometer  abortion-contraception-embryo 
february 2017 by nhaliday
Performance Trends in AI | Otium
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?

In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.

In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.

In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.

In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.

In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.


The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

ratty  core-rats  summary  prediction  trends  analysis  spock  ai  deep-learning  state-of-art  🤖  deepgoog  games  nlp  computer-vision  nibble  reinforcement  model-class  faq  org:bleg  shift  chart  technology  language  audio  accuracy  speaking  foreign-lang  definite-planning  china  asia  microsoft  google  ideas  article  speedometer  whiggish-hegelian  yvain  ssc  smoothness  data  hsu  scitariat  genetics  iq  enhancement  genetic-load  neuro  neuro-nitgrit  brain-scan  time-series  multiplicative  iteration-recursion  additive  multi 
january 2017 by nhaliday
Faster than Fisher | West Hunter
There’s a simple model of the spread of an advantageous allele:  You take σ, the typical  distance people move in one generation, and s,  the selective advantage: the advantageous allele spreads as a nonlinear wave at speed  σ * √(2s).  The problem is, that’s slow.   Suppose that s = 0.10 (a large advantage), σ = 10 kilometers, and a generation time of 30 years: the allele would take almost 7,000 years to expand out 1000 kilometers.
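The arithmetic behind that figure follows directly from the wave-speed formula, using the post's own numbers:

```python
import math

s = 0.10        # selective advantage
sigma = 10.0    # typical distance moved per generation, km
gen_years = 30  # years per generation

# Fisher wave speed: v = sigma * sqrt(2s) per generation.
speed = sigma * math.sqrt(2 * s)
years = 1000 / speed * gen_years  # time to spread 1000 km

print(round(speed, 2), round(years))  # 4.47 6708
```

About 4.5 km per generation, so roughly 6,700 years to cover 1,000 km — the "almost 7,000 years" in the text.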


This big expansion didn’t just happen from peasants marrying the girl next door: it required migrations and conquests. This one looks as if it rode with the Indo-European expansion: I’ll bet it started out in a group that had domesticated only horses.

The same processes, migration and conquest, must explain the wide distribution of many geographically widespread selective sweeps and partial sweeps. They were adaptive, all right, but expanded much faster than possible from purely local diffusion. We already have reason to think that SLC24A5 was carried to Europe by Middle Eastern farmers; the same is probably true for the haplotype that carries the high-activity ergothioniene transporter and the 35delG connexin-26/GJB2 deafness mutation. The Indo-Europeans probably introduced the T-13910 LCT mutation and the delta-F508 cystic fibrosis mutation, so we should see delta-F508 in northwest India and Pakistan – and we do!

To entertain a (possibly mistaken) physical analogy, it sounds like you’re suggesting a sort of genetic convection through space, as opposed to conduction. I.e., entire masses of folks, carrying a new selected variant, are displacing others – as opposed to the slow gene flow process of “girl-next-door.” Is that about right? (Hopefully I haven’t revealed my ignorance of basic thermodynamics here…)

Has there been any attempt to estimate sigma from these time periods?

Genetic Convection: https://westhunt.wordpress.com/2015/02/22/genetic-convection/
People are sometimes interested in estimating the point of origin of a sweeping allele: this is probably effectively impossible even if diffusion were the only spread mechanism, since the selective advantage might well vary in both time and space. But that’s ok, since population movements – genetic convection – are real and very important. This means that the difficulties in estimating the origin of a Fisher wave are totally insignificant, compared to the difficulties of estimating the effects of past colonizations, conquests and Völkerwanderungs. So when Yuval Itan and Mark Thomas estimated that 13,910 T LCT allele originated in central Europe, in the early Neolithic, they didn’t just go wrong because of failing to notice that the same allele is fairly common in northern India: no, their whole notion was unsound in the first place. We’re talking turbulence on steroids. Hari Seldon couldn’t figure this one out from the existing geographic distribution.
west-hunter  genetics  population-genetics  street-fighting  levers  evolution  gavisti  🌞  selection  giants  nibble  fisher  speed  gene-flow  scitariat  stylized-facts  methodology  archaeology  waves  frontier  agri-mindset  analogy  visual-understanding  physics  thermo  interdisciplinary  spreading  spatial  geography  poast  multi  volo-avolo  accuracy  estimate  order-disorder  time  homo-hetero  branches  trees  distribution  data  hari-seldon  aphorism  cliometrics  aDNA  mutation  lexical 
november 2016 by nhaliday
natural language processing blog: Debugging machine learning
I've been thinking, mostly in the context of teaching, about how to specifically teach debugging of machine learning. Personally I find it very helpful to break things down in terms of the usual error terms: Bayes error (how much error is there in the best possible classifier), approximation error (how much do you pay for restricting to some hypothesis class), estimation error (how much do you pay because you only have finite samples), optimization error (how much do you pay because you didn't find a global optimum to your optimization problem). I've generally found that trying to isolate errors to one of these pieces, and then debugging that piece in particular (eg., pick a better optimizer versus pick a better hypothesis class) has been useful.
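A toy version of that decomposition (a hedged sketch of my own, not from the post; the task and model classes are arbitrary): with data y = sin(x) + noise, the noise variance sets the Bayes error, restricting to degree-1 polynomials adds approximation error, and the gap between train and test error reflects estimation error.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = 0.1  # irreducible noise -> Bayes MSE ~ noise**2

x_tr = rng.uniform(-3, 3, 50)
y_tr = np.sin(x_tr) + rng.normal(0, noise, x_tr.size)
x_te = rng.uniform(-3, 3, 5000)
y_te = np.sin(x_te) + rng.normal(0, noise, x_te.size)

def mse(model, x, y):
    return float(np.mean((np.polyval(model, x) - y) ** 2))

results = {}
for degree in (1, 5):
    model = np.polyfit(x_tr, y_tr, degree)
    results[degree] = (mse(model, x_tr, y_tr), mse(model, x_te, y_te))
    print(degree, results[degree])  # (train MSE, test MSE)
```

The degree-1 fit has large test error dominated by approximation error (a line can't follow a sine), while the degree-5 fit gets close to the noise floor; shrinking the training set would instead inflate the train/test gap, pointing at estimation error. Diagnosing which term dominates tells you whether to change the hypothesis class or get more data.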
machine-learning  debugging  checklists  best-practices  pragmatic  expert  init  system-design  data-science  acmtariat  error  engineering  clarity  intricacy  model-selection  org:bleg  nibble  noise-structure  signal-noise  knowledge  accuracy  expert-experience  checking 
september 2016 by nhaliday
The Elephant in the Brain: Hidden Motives in Everyday Life

A Book Response Prediction: https://www.overcomingbias.com/2017/03/a-book-response-prediction.html
I predict that one of the most common responses will be something like “extraordinary claims require extraordinary evidence.” While the evidence we offer is suggestive, for claims as counterintuitive as ours on topics as important as these, evidence should be held to a higher standard than the one our book meets. We should shut up until we can prove our claims.

I predict that another of the most common responses will be something like “this is all well known.” Wise observers have known and mentioned such things for centuries. Perhaps foolish technocrats who only read in their narrow literatures are ignorant of such things, but our book doesn’t add much to what true scholars and thinkers have long known.


Elephant in the Brain on Religious Hypocrisy:
books  postrat  simler  hanson  impro  anthropology  insight  todo  X-not-about-Y  signaling  🦀  new-religion  psychology  contrarianism  👽  ratty  rationality  hidden-motives  2017  s:**  p:null  ideas  impetus  multi  video  presentation  unaffiliated  review  summary  education  higher-ed  human-capital  propaganda  nationalism-globalism  civic  domestication  medicine  meta:medicine  healthcare  economics  behavioral-econ  supply-demand  roots  questions  charity  hypocrisy  peter-singer  big-peeps  philosophy  morality  ethics  formal-values  cog-psych  evopsych  thinking  conceptual-vocab  intricacy  clarity  accuracy  truth  is-ought  realness  religion  theos  christianity  islam  cultural-dynamics  within-without  neurons  EEA  analysis  article  links  meta-analysis  survey  judaism  compensation  labor  correlation  endogenous-exogenous  causation  critique  politics  government  polisci  political-econ  emotion  health  study  list  class  art  status  effective-altruism  evidence-based  epistemic  error  contradiction  prediction  culture  aphorism  quotes  discovery  no 
august 2016 by nhaliday
The Future of Genetic Enhancement is Not in the West | Quillette

If it becomes possible to safely genetically increase babies’ IQ, it will become inevitable: https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/07/14/if-it-becomes-possible-to-safely-genetically-increase-babies-iq-it-will-become-inevitable/

Baby Genome Sequencing for Sale in China: https://www.technologyreview.com/s/608086/baby-genome-sequencing-for-sale-in-china/
Chinese parents can now decode the genomes of their healthy newborns, revealing disease risks as well as the likelihood of physical traits like male-pattern baldness.

China launches massive genome research initiative: https://news.cgtn.com/news/7767544e34637a6333566d54/share_p.html

research ethics:
First results of CRISPR gene editing of normal embryos released: https://www.newscientist.com/article/2123973-first-results-of-crispr-gene-editing-of-normal-embryos-released/
caveats: https://ipscell.com/2017/08/4-reasons-mitalipov-paper-doesnt-herald-safe-crispr-human-genetic-modification/

So this title is a bit misleading; something like, "cells edited with CRISPR injected into a person for the first time" would be better. While CRISPR is promising for topical treatments, that's not what happened here.
China sprints ahead in CRISPR therapy race: http://science.sciencemag.org/content/358/6359/20
China, Unhampered by Rules, Races Ahead in Gene-Editing Trials: https://www.wsj.com/articles/china-unhampered-by-rules-races-ahead-in-gene-editing-trials-1516562360
U.S. scientists helped devise the Crispr biotechnology tool. First to test it in humans are Chinese doctors



lol: http://www.theonion.com/infographic/pros-and-cons-gene-editing-56740

Japan set to allow gene editing in human embryos [ed.: (for research)]: https://www.nature.com/articles/d41586-018-06847-7
Draft guidelines permit gene-editing tools for research into early human development.
futurism  prediction  enhancement  biotech  essay  china  asia  culture  poll  len:short  new-religion  accelerationism  letters  news  org:mag  org:popup  🌞  sinosphere  🔬  sanctity-degradation  morality  values  democracy  authoritarianism  genetics  CRISPR  scaling-up  orient  multi  org:lite  india  competition  speedometer  org:rec  right-wing  rhetoric  slippery-slope  iq  usa  incentives  technology  org:nat  org:sci  org:biz  trends  current-events  genomics  gnxp  scitariat  commentary  hsu  org:foreign  volo-avolo  regulation  coordination  cooperate-defect  moloch  popsci  announcement  politics  government  policy  science  ethics  :/  org:anglo  cancer  medicine  hn  tech  immune  sapiens  study  summary  bio  disease  critique  regularizer  accuracy  lol  comedy  hard-tech  skunkworks  twitter  social  backup  gnon  🐸  randy-ayndy  civil-liberty  FDA  duplication  left-wing  chart  abortion-contraception-embryo 
august 2016 by nhaliday
Don’t invert that matrix (2010) | Hacker News
However, one of the reasons he's given is not correct: Druinsky and Toledo have shown (http://arxiv.org/abs/1201.6035) that – despite the very widespread belief to the contrary – solving a linear system by calculating the inverse can be as accurate (though not nearly as efficient) as solving it directly.
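The point is easy to check numerically (a minimal sketch with a random, typically well-conditioned system; for such matrices both routes recover the solution to high accuracy, which is the Druinsky–Toledo observation, while `solve` remains the cheaper default):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x_solve = np.linalg.solve(A, b)  # LU factorization, no explicit inverse
x_inv = np.linalg.inv(A) @ b     # explicit inverse, then multiply

err_solve = np.linalg.norm(x_solve - x_true)
err_inv = np.linalg.norm(x_inv - x_true)
print(err_solve, err_inv)  # both tiny for a well-conditioned A
```

The efficiency argument against inversion still stands (forming the inverse costs roughly three times the factorization and does more work per right-hand side); it's only the blanket accuracy claim that Druinsky and Toledo qualify.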
tutorial  programming  commentary  hn  numerics  multi  hmm  techtariat  org:mat  preprint  papers  regularizer  best-practices  accuracy 
may 2016 by nhaliday
How Old Are Fairy Tales? - The Atlantic
Many folklorists disagreed. Some have claimed that many classic fairy tales are recent inventions that followed the advent of mass-printed literature. Others noted that human stories, unlike human genes, aren't just passed down vertically through generations, but horizontally within generations. “They’re passed across societies through trade, exchange, migration, and conquest,” says Tehrani. “The consensus was that these processes would have destroyed any deep signatures of descent from ancient ancestral populations.”

Not so. Tehrani and da Silva found that although neighboring cultures can easily exchange stories, they also often reject the tales of their neighbors. Several stories were less likely to appear in one population if they were told within an adjacent one.

Meanwhile, a quarter of the Tales of Magic showed clear signatures of shared descent from ancient ancestors. “Most people would assume that folktales are rapidly changing and easily exchanged between social groups,” says Simon Greenhill from the Australian National University. “But this shows that many tales are actually surprisingly stable over time and seem to track population history well.” Similarly, a recent study found that flood “myths” among Aboriginal Australians can be traced back to real sea level rises 7,000 years ago.

Many of the Tales of Magic were similarly ancient, as the Grimms suggested. Beauty and the Beast and Rumpelstiltskin were first written down in the 17th and 18th centuries respectively, but they are actually between 2,500 and 6,000 years old—not quite tales as old as time, but perhaps as old as wheels and writing.

The Smith and the Devil is probably 6,000 years old, too. In this story, a crafty blacksmith sells his soul to an evil supernatural entity in exchange for awesome smithing powers, which he then uses to leash the entity to an immovable object. The basic tale has been adapted in everything from Faust to blues lore, but the most ancient version, involving the blacksmith, comes from the Bronze Age! It predates the last common ancestor of all Indo-European languages. “It's constantly being updated and recycled, but it's older than Christianity,” says Tehrani.

This result might help to settle a debate about the origins of Indo-European languages. It rules out the idea that these tongues originated among Neolithic farmers, who lived 9,000 years ago in what is now modern Turkey. After all, how could these people, who hadn’t invented metallurgy, have concocted a story where the hero is a blacksmith? A rival hypothesis becomes far more likely: Indo-European languages emerged 5,000 to 6,000 years ago among pastoralists from the Russian steppes, who knew how to work metal.

The Smith and the Devil: https://en.wikipedia.org/wiki/The_Smith_and_the_Devil
The Smith and the Devil is a European fairy tale. The story is of a smith who makes a pact with a malevolent being—commonly the Devil (in later times), Death or a genie—selling his soul for some power, then tricks the devil out of his prize. In one version, the smith gains the power to weld any materials, then uses this power to stick the devil to an immovable object, allowing the smith to renege on the bargain.[1]


According to George Monbiot, the blacksmith is a motif of folklore throughout (and beyond) Europe associated with malevolence (the medieval vision of Hell may draw upon the image the smith at his forge), and several variant tales tell of smiths entering into a pact with the devil to obtain fire and the means of smelting metal.[6]

According to research applying phylogenetic techniques to linguistics by folklorist Sara Graça da Silva and anthropologist Jamie Tehrani,[7] "The Smith and the Devil" may be one of the oldest European folk tales, with the basic plot stable throughout the Indo-European speaking world from India to Scandinavia, possibly being first told in Indo-European 6,000 years ago in the Bronze Age.[1][8][9] Folklorist John Lindow, however, notes that a word for "smith" may not have existed in Indo-European, and if so the tale may not be that old.[9]

Revealed: how Indigenous Australian storytelling accurately records sea level rises 7,000 years ago: http://www.theguardian.com/australia-news/2015/sep/16/indigenous-australian-storytelling-records-sea-level-rises-over-millenia


I wonder how long oral history lasts. What’s the oldest legend that has some clear fragment of truth in it?

The Black Sea deluge hypothesis, being the origin of the different deluge myths around the Middle East?
People have lived in river valleys for a long time now, and they flood. I mean, deluge myths could also go back to the end of the Ice Age, when many lands went underwater as sea level rose. But how can you tell? Now if there was a one-time thing that had a special identifying trait, say purple rain, that might be convincing.

RE: untangling actual historical events and personages from myth and legend,

Obviously, it’s pretty damn tough. In most cases (THE ILIAD, the Pentateuch, etc), we simply lack the proper controls (literary sources written down at a time reasonably close to the events in question). Hence, we have to rely on a combination of archaeology plus intuition. Was a city sacked at roughly the proper time? Does a given individual appear to be based on a real person?

I’m partial to the notion that the “forbidden fruit” was wheat, making the Garden of Eden a story about the dawn of agriculture, and the story of Cain and Abel the first conflict between settled farmer and semi-nomadic pastoralist. That would make it perhaps 6 millennia old when first written down.
The story of Cain and Abel is indeed the conflict between the agricultural and pastoral ways of life

same conclusion as me: https://pinboard.in/u:nhaliday/b:9130f5f3c17b

great blog: https://biblicalsausage.wordpress.com/

Euhemerus (also spelled Euemeros or Evemerus; Ancient Greek: Εὐήμερος Euhēmeros, "happy; prosperous"; late fourth century BC), was a Greek mythographer at the court of Cassander, the king of Macedon. Euhemerus' birthplace is disputed, with Messina in Sicily as the most probable location, while others suggest Chios or Tegea.

The philosophy attributed to and named for Euhemerus, euhemerism, holds that many mythological tales can be attributed to historical persons and events, the accounts of which have become altered and exaggerated over time.

Euhemerus's work combined elements of fiction and political utopianism. In the ancient world he was considered an atheist. Early Christian writers, such as Lactantius, used Euhemerus's belief that the ancient gods were originally human to confirm their inferiority regarding the Christian God.

In the ancient skeptic philosophical tradition of Theodorus of Cyrene and the Cyrenaics, Euhemerus forged a new method of interpretation for the contemporary religious beliefs. Though his work is lost, the reputation of Euhemerus was that he believed that much of Greek mythology could be interpreted as natural or historical events subsequently given supernatural characteristics through retelling. Subsequently Euhemerus was considered to be an atheist by his opponents, most notably Callimachus.[7]


Euhemerus' views were rooted in the deification of men, usually kings, into gods through apotheosis. In numerous cultures, kings were exalted or venerated into the status of divine beings and worshipped after their death, or sometimes even while they ruled. Dion, the tyrant ruler of Syracuse, was deified while he was alive and modern scholars consider his apotheosis to have influenced Euhemerus' views on the origin of all gods.[8] Euhemerus was also living during the contemporaneous deification of the Seleucids and "pharaoization" of the Ptolemies in a fusion of Hellenic and Egyptian traditions.


Hostile to paganism, the early Christians, such as the Church Fathers, embraced euhemerism in attempt to undermine the validity of pagan gods.[13] The usefulness of euhemerist views to early Christian apologists may be summed up in Clement of Alexandria's triumphant cry in Cohortatio ad gentes: "Those to whom you bow were once men like yourselves."[14]

culture  history  cocktail  anthropology  news  myth  org:mag  narrative  roots  spreading  theos  archaeology  tradition  multi  climate-change  environment  oceans  h2o  org:lite  anglo  org:anglo  west-hunter  scitariat  accuracy  truth  trees  ed-yong  sapiens  farmers-and-foragers  fluid  trivia  nihil  flux-stasis  time  antiquity  retention  age-generation  estimate  epidemiology  evolution  migration  cultural-dynamics  language  gavisti  foreign-lang  wormholes  religion  christianity  interdisciplinary  fiction  speculation  poast  discussion  writing  speaking  communication  thick-thin  whole-partial-many  literature  analysis  nitty-gritty  blog  stream  deep-materialism  new-religion  apollonian-dionysian  subjective-objective  absolute-relative  hmm  big-peeps  iron-age  the-classics  mediterranean  antidemos  leviathan  sanctity-degradation  signal-noise  stylized-facts  conquest-empire  the-devil  god-man-beast-victim  ideology  illusion  intricacy  tip-of-tongue  exegesis-hermeneutics  interpretation  linguistics  traces  bible  judaism  realness  paganism 
january 2016 by nhaliday

 prediction  predictive-processing  preprint  presentation  priors-posteriors  privacy  probability  problem-solving  programming  progression  project  propaganda  protestant-catholic  prudence  pseudoE  psychiatry  psychology  python  q-n-a  QTL  quality  quantitative-qualitative  questions  quotes  race  random  randy-ayndy  ranking  rant  rationality  ratty  reading  realness  reason  recommendations  recruiting  reddit  reduction  reference  reflection  regression  regression-to-mean  regularization  regularizer  regulation  reinforcement  religion  replication  research  responsibility  retention  review  rhetoric  right-wing  rigor  risk  roots  rot  rounding  s:*  s:**  sampling  sampling-bias  sanctity-degradation  sanjeev-arora  sapiens  scale  scaling-up  scholar  science  scitariat  search  selection  sequential  sex  shift  signal-noise  signaling  similarity  simler  simulation  singularity  sinosphere  skunkworks  sleep  slippery-slope  smoothness  social  social-psych  social-science  social-structure  society  sociology  software  spatial  speaking  speculation  speed  speedometer  spengler  spock  sports  spreading  ssc  stat-mech  stat-power  state-of-art  stats  status  stochastic-processes  stories  strategy  straussian  stream  street-fighting  stress  structure  study  stylized-facts  subculture  subjective-objective  summary  supply-demand  survey  symmetry  system-design  systematic-ad-hoc  systems  tcs  teaching  tech  technology  techtariat  telos-atelos  temperature  tetlock  the-classics  the-devil  the-great-west-whale  the-self  the-trenches  the-world-is-just-atoms  theory-of-mind  theory-practice  theos  thermo  thick-thin  things  thinking  threat-modeling  tidbits  time  time-preference  time-series  tip-of-tongue  todo  tools  top-n  traces  track-record  tradeoffs  tradition  trees  trends  tribalism  trivia  trust  truth  turing  tutorial  twitter  unaffiliated  uncertainty  unintended-consequences  uniqueness  
universalism-particularism  us-them  usa  utopia-dystopia  values  vampire-squid  variance-components  video  virtualization  visual-understanding  visuo  volo-avolo  walter-scheidel  war  waves  west-hunter  whiggish-hegelian  whole-partial-many  wiki  wire-guided  within-without  wonkish  world  wormholes  writing  X-not-about-Y  yoga  yvain  zeitgeist  🌞  🎓  🎩  🐸  👽  🔬  🖥  🤖  🦀 
