nhaliday + examples   74

Lateralization of brain function - Wikipedia
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the processing of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]


Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while also keeping an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. RH paintings, in particular, emphasize vistas with great depth of field, and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.


Schizophrenia is a disease of extreme LH emphasis. Since empathy, and the ability to notice emotional nuance expressed facially, vocally and bodily, are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious to them.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.


RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes an LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.


Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, an RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.


We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.


Gödel’s Theorem proves that, in any consistent formal system rich enough to express arithmetic, not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one consistent set of axioms from which all other truths can be derived.

Alan Turing’s proof of the undecidability of the halting problem shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure (LH), when it comes to … [more]
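Turing's diagonal argument can be sketched in a few lines of Python. This is illustrative only: the `halts` oracle below is hypothetical by construction, since its impossibility is exactly what the theorem establishes.

```python
# Sketch of Turing's diagonal argument. `halts` is a HYPOTHETICAL
# total decider -- the theorem shows no such function can exist.
def halts(program, program_input):
    """Pretend oracle: True iff program(program_input) eventually halts."""
    raise NotImplementedError("provably impossible to implement in general")

def paradox(program):
    # If `halts` were real, this function would defeat it:
    if halts(program, program):
        while True:   # loop forever exactly when `halts` predicts halting
            pass
    return            # halt exactly when `halts` predicts looping
```

Asking whether `paradox(paradox)` halts contradicts the oracle's own answer in either case, so no total, mechanical `halts` can exist.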
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
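The flavor of the result above can be seen in a toy iterated-PD simulation (my own minimal sketch with the standard payoffs T=5, R=3, P=1, S=0 — not the Press–Dyson construction): a retaliatory strategy like tit-for-tat does well against its own kind, which is exactly the property extortionate ZD strategies lack.

```python
# Toy iterated prisoner's dilemma with standard payoffs T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not history else history[-1][1]

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []          # entries: (my_move, their_move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

# Two tit-for-tat players: (300, 300). TFT vs always-defect: (99, 104) --
# the defector wins the pairing, but mutual TFT earns far more per player,
# which is the intuition behind ZD extortion losing out in evolution.
```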

Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are thus that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens; if you are silenced, the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).


For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual selection):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:


analogy for ultimatum game: the state gives the demos a take-it-or-leave-it bargain, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.


Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters [4,5] remains scarce [6–8]. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously [9,10].

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity [11,12]. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature [4,5], reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.
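Why cooperation collapses without strong reciprocity can be illustrated with a small replicator-dynamics sketch (my own toy model, not code from the paper): in a one-shot PD with the usual payoffs, defectors out-reproduce cooperators from any mixed starting point.

```python
# Replicator dynamics for the one-shot PD; payoffs R=3, S=0, T=5, P=1.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def step(x_c, dt=0.01):
    """One Euler step; x_c is the cooperator share of the population."""
    f_c = x_c * R + (1 - x_c) * S    # expected payoff to a cooperator
    f_d = x_c * T + (1 - x_c) * P    # expected payoff to a defector
    f_avg = x_c * f_c + (1 - x_c) * f_d
    return x_c + dt * x_c * (f_c - f_avg)

x = 0.99   # start with 99% cooperators...
for _ in range(5000):
    x = step(x)
# ...and cooperation still collapses: x ends up indistinguishable from 0.
```

Altruistic punishment changes the payoff structure faced by defectors, which is how strong reciprocity escapes this outcome.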


We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l 
march 2018 by nhaliday
The “Hearts and Minds” Fallacy: Violence, Coercion, and Success in Counterinsurgency Warfare | International Security | MIT Press Journals
The U.S. prescription for success has had two main elements: to support liberalizing, democratizing reforms to reduce popular grievances; and to pursue a military strategy that carefully targets insurgents while avoiding harming civilians. An analysis of contemporaneous documents and interviews with participants in three cases held up as models of the governance approach—Malaya, Dhofar, and El Salvador—shows that counterinsurgency success is the result of a violent process of state building in which elites contest for power, popular interests matter little, and the government benefits from uses of force against civilians.

this is why liberal states mostly fail in counterinsurgency wars


contrary study:
Nation Building Through Foreign Intervention: Evidence from Discontinuities in Military Strategies: https://academic.oup.com/qje/advance-article/doi/10.1093/qje/qjx037/4110419
This study uses discontinuities in U.S. strategies employed during the Vietnam War to estimate their causal impacts. It identifies the effects of bombing by exploiting rounding thresholds in an algorithm used to target air strikes. Bombing increased the military and political activities of the communist insurgency, weakened local governance, and reduced noncommunist civic engagement. The study also exploits a spatial discontinuity across neighboring military regions that pursued different counterinsurgency strategies. A strategy emphasizing overwhelming firepower plausibly increased insurgent attacks and worsened attitudes toward the U.S. and South Vietnamese government, relative to a more hearts-and-minds-oriented approach. JEL Codes: F35, F51, F52

Military Adventurer Raymond Westerling On How To Defeat An Insurgency: http://www.socialmatter.net/2018/03/12/military-adventurer-raymond-westerling-on-how-to-defeat-an-insurgency/
study  war  meta:war  military  defense  terrorism  MENA  strategy  tactics  cynicism-idealism  civil-liberty  kumbaya-kult  foreign-policy  realpolitik  usa  the-great-west-whale  occident  democracy  antidemos  institutions  leviathan  government  elite  realness  multi  twitter  social  commentary  stylized-facts  evidence-based  objektbuch  attaq  chart  contrarianism  scitariat  authoritarianism  nl-and-so-can-you  westminster  iraq-syria  polisci  🎩  conquest-empire  news  org:lite  power  backup  martial  nietzschean  pdf  piracy  britain  asia  developing-world  track-record  expansionism  peace-violence  interests  china  race  putnam-like  anglosphere  latin-america  volo-avolo  cold-war  endogenous-exogenous  shift  natural-experiment  rounding  gnon  org:popup  europe  germanic  japan  history  mostly-modern  world-war  examples  death  nihil  dominant-minority  tribalism  ethnocentrism  us-them  letters 
august 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.


compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.


If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world, good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.


Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

The point here is that gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily make such shifts, and to forget that they do. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
List of games in game theory - Wikipedia
The most important patterns:

1. Prisoner's Dilemma
2. Race to the Bottom
3. Free Rider Problem / Tragedy of the Commons / Collective Action
4. Zero Sum vs. Non-Zero Sum
5. Externalities / Principal Agent
6. Diminishing Returns
7. Evolutionarily Stable Strategy / Nash Equilibrium
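The patterns above can be made concrete in a few lines. A minimal sketch, assuming the standard textbook payoff values (3, 0, 5, 1), which are not from the Wikipedia list itself: it brute-forces pattern 7, checking that mutual defection is the unique pure Nash equilibrium of the one-shot Prisoner's Dilemma (pattern 1).

```python
from itertools import product

# Hypothetical textbook payoffs (row player, column player); 0 = cooperate, 1 = defect.
PAYOFF = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # both defect
}

def is_nash(a, b):
    """(a, b) is a pure Nash equilibrium if neither player gains by
    unilaterally deviating."""
    ua, ub = PAYOFF[(a, b)]
    return all(PAYOFF[(a2, b)][0] <= ua for a2 in (0, 1)) and \
           all(PAYOFF[(a, b2)][1] <= ub for b2 in (0, 1))

equilibria = [s for s in product((0, 1), repeat=2) if is_nash(*s)]
print(equilibria)  # [(1, 1)]: mutual defection, even though (0, 0) pays both more
```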
concept  economics  micro  models  examples  list  game-theory  GT-101  wiki  reference  cooperate-defect  multi  twitter  social  discussion  backup  journos-pundits  coordination  competition  free-riding  zero-positive-sum  externalities  rent-seeking  marginal  convexity-curvature  nonlinearity  equilibrium  top-n  metabuch  conceptual-vocab  alignment  contracts 
february 2017 by nhaliday
probability - Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? - Cross Validated
The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in 100p% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability p given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself.
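The frequentist request can be checked directly by simulation. A minimal sketch, with hypothetical parameters (normal data, known σ), showing that roughly 95% of intervals constructed this way bracket the true mean over repeated experiments:

```python
import random

random.seed(0)
MU, SIGMA, N, REPS = 10.0, 2.0, 25, 4000  # hypothetical true mean, known sd
Z = 1.96  # two-sided 95% normal quantile

covered = 0
for _ in range(REPS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    xbar = sum(sample) / N
    half = Z * SIGMA / N ** 0.5  # known-sigma interval half-width
    if xbar - half <= MU <= xbar + half:
        covered += 1

print(covered / REPS)  # ≈ 0.95: the "95%" is a property of repeated experiments
```

Any single realized interval either contains MU or it does not; the 0.95 only shows up across the ensemble, which is exactly the distinction the answer draws.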


PS. Note that my question is not about the ban itself; it is about the suggested approach. I am not asking about frequentist vs. Bayesian inference either. The Editorial is pretty negative about Bayesian methods too; so it is essentially about using statistics vs. not using statistics at all.


q-n-a  overflow  nibble  stats  data-science  science  methodology  concept  confidence  conceptual-vocab  confusion  explanation  thinking  hypothesis-testing  jargon  multi  meta:science  best-practices  error  discussion  bayesian  frequentist  hmm  publishing  intricacy  wut  comparison  motivation  clarity  examples  robust  metabuch  🔬  info-dynamics  reference 
february 2017 by nhaliday
interpretation - How to understand degrees of freedom? - Cross Validated
From Wikipedia, there are three interpretations of the degrees of freedom of a statistic:

In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.

Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself (which, in sample variance, is one, since the sample mean is the only intermediate step).

Mathematically, degrees of freedom is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined.


This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests.

Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are:

- The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances).
- The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance.
- The F-test (of ratios of estimated variances).
- The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates.

In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it.


Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test:


This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter ν often referred to as the "degrees of freedom." The standard reasoning about how to determine ν goes like this:

I have k counts. That's k pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal n. That's one relationship. I estimated two (or p, generally) parameters from the data. That's two (or p) additional relationships, giving p+1 total relationships. Presuming they (the parameters) are all (functionally) independent, that leaves only k−p−1 (functionally) independent "degrees of freedom": that's the value to use for ν.

The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question.


Things went wrong because I violated two requirements of the Chi-squared test:

1. You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.)
2. You must base that estimate on the counts, not on the actual data! (This is crucial.)


The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature.

We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.)
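The simplest case of the heuristic, where it actually is correct, can be simulated. A minimal sketch, with hypothetical cell probabilities: for a fully specified multinomial (no parameters estimated), the Pearson statistic has mean exactly k − 1, matching the claimed df. The answer's warning is about what happens after this point, once parameters are estimated from the wrong statistics.

```python
import random

random.seed(1)

# Hypothetical cell probabilities; fully specified, so nothing is estimated
# and the heuristic df = k - 1 is genuinely correct in this case.
probs = [0.1, 0.2, 0.3, 0.4]
k, n, reps = len(probs), 300, 2000
cum = [sum(probs[:i + 1]) for i in range(k)]

def draw_counts():
    """One multinomial(n, probs) draw via inverse-CDF sampling."""
    counts = [0] * k
    for _ in range(n):
        u = random.random()
        counts[next(i for i, c in enumerate(cum) if u <= c)] += 1
    return counts

def pearson(counts):
    """Pearson chi-squared statistic against the known probabilities."""
    return sum((c - n * p) ** 2 / (n * p) for c, p in zip(counts, probs))

mean_stat = sum(pearson(draw_counts()) for _ in range(reps)) / reps
print(mean_stat)  # ≈ k - 1 = 3, the mean of a chi-squared with 3 df
```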
q-n-a  overflow  stats  data-science  concept  jargon  explanation  methodology  things  nibble  degrees-of-freedom  clarity  curiosity  manifolds  dimensionality  ground-up  intricacy  hypothesis-testing  examples  list  ML-MAP-E  gotchas 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Logicians on safari
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

the sequel: http://www.scottaaronson.com/blog/?p=153
tcstariat  aaronson  tcs  computation  complexity  aphorism  examples  list  reflection  philosophy  multi  summary  synthesis  hi-order-bits  interdisciplinary  lens  big-picture  survey  nibble  org:bleg  applications  big-surf  s:*  p:whenever  ideas 
january 2017 by nhaliday
Dvoretzky's theorem - Wikipedia
In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s, answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids.

math  math.FA  inner-product  levers  characterization  geometry  math.MG  concentration-of-measure  multi  q-n-a  overflow  intuition  examples  proofs  dimensionality  gowers  mathtariat  tcstariat  quantum  quantum-info  norms  nibble  high-dimension  wiki  reference  curvature  convexity-curvature  tcs 
january 2017 by nhaliday
Existence of the moment generating function and variance - Cross Validated
This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).

In the answer below, we do the following:
1. Show that if the mgf is finite for at least one (strictly) positive value and one negative value, then all positive moments of X are finite (including nonintegral moments).
2. Prove that the condition in the first item above is equivalent to the distribution of X having exponentially bounded tails. In other words, the tails of X fall off at least as fast as those of an exponential random variable Z (up to a constant).
3. Provide a quick note on the characterization of the distribution by its mgf provided it satisfies the condition in item 1.
4. Explore some examples and counterexamples to aid our intuition and, particularly, to show that we should not read undue importance into the lack of finiteness of the mgf.
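Item 2's direction from mgf to tails is the standard Chernoff/Markov argument: if M(t) = E[e^{tX}] is finite for some t > 0, then P(X > x) ≤ e^{−tx} M(t). A minimal sketch for the standard normal, where M(t) = e^{t²/2} and optimizing over t gives the bound e^{−x²/2}:

```python
import math

def normal_tail(x):
    """P(X > x) for X ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def chernoff_bound(x):
    """min over t of e^{-tx} M(t), with M(t) = e^{t^2/2}; the minimum
    is at t = x, giving e^{-x^2/2}."""
    return math.exp(-x ** 2 / 2)

# Finite mgf implies exponentially bounded tails (Markov's inequality).
for x in (1.0, 2.0, 3.0):
    assert normal_tail(x) <= chernoff_bound(x)
    print(x, normal_tail(x), chernoff_bound(x))
```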
q-n-a  overflow  math  stats  acm  probability  characterization  concept  moments  distribution  examples  counterexample  tails  rigidity  nibble  existence  s:null  convergence  series 
january 2017 by nhaliday
"Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character ... - Richard P. Feynman - Google Books
Actually, there was a certain amount of genuine quality to my guesses. I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)—disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, “False!"
physics  math  feynman  thinking  empirical  examples  lens  intuition  operational  stories  metabuch  visual-understanding  thurston  hi-order-bits  geometry  topology  cartoons  giants  👳  nibble  the-trenches  metameta  meta:math  s:**  quotes  gbooks 
january 2017 by nhaliday
soft question - Thinking and Explaining - MathOverflow
- good question from Bill Thurston
- great answers by Terry Tao, fedja, Minhyong Kim, gowers, etc.

Terry Tao:
- symmetry as blurring/vibrating/wobbling, scale invariance
- anthropomorphization, adversarial perspective for estimates/inequalities/quantifiers, spending/economy

fedja walks through his thought process from another answer

Minhyong Kim: anthropology of mathematical philosophizing

Per Vognsen: normality as isotropy
comment: conjugate subgroup gHg^-1 ~ "H but somewhere else in G"

gowers: hidden things in basic mathematics/arithmetic
comment by Ryan Budney: x sin(x) via x -> (x, sin(x)), (x, y) -> xy
I kinda get what he's talking about but needed to use Mathematica to get the initial visualization down.
To remind myself later:
- xy can be easily visualized by juxtaposing the two parabolae x^2 and -x^2 diagonally
- x sin(x) can be visualized along that surface by moving your finger along the line (x, 0) but adding some oscillations in y direction according to sin(x)
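The two reminders above reduce to identities that can be checked numerically; a trivial sanity check, not a visualization:

```python
import math

surface = lambda x, y: x * y  # the saddle z = xy

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    y = math.sin(x)
    # x sin(x) is the saddle's height along the curve (x, sin x)
    assert math.isclose(surface(x, y), x * math.sin(x))
    # polarization: xy = ((x+y)^2 - (x-y)^2)/4, the two juxtaposed parabolae
    assert math.isclose(x * y, ((x + y) ** 2 - (x - y) ** 2) / 4, abs_tol=1e-12)
```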
q-n-a  soft-question  big-list  intuition  communication  teaching  math  thinking  writing  thurston  lens  overflow  synthesis  hi-order-bits  👳  insight  meta:math  clarity  nibble  giants  cartoons  gowers  mathtariat  better-explained  stories  the-trenches  problem-solving  homogeneity  symmetry  fedja  examples  philosophy  big-picture  vague  isotropy  reflection  spatial  ground-up  visual-understanding  polynomials  dimensionality  math.GR  worrydream  scholar  🎓  neurons  metabuch  yoga  retrofit  mental-math  metameta  wisdom  wordlessness  oscillation  operational  adversarial  quantifiers-sums  exposition  explanation  tricki  concrete  s:***  manifolds  invariance  dynamical  info-dynamics  cool  direction 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including ΦDM (discrete memoryless), ΦE (empirical), and ΦAR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x=(x1,…,xn)∈Sn, where S is a finite alphabet (the simplest case is S={0,1}). We imagine that the system evolves via an “updating function” f:Sn→Sn. Then the question that interests us is whether the xi‘s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
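The partition question in the last paragraph can be brute-forced for tiny systems. A minimal sketch, assuming S = {0,1} and toy updating functions of my own construction; this checks only the (exact) decomposability of f, not Tononi's actual Φ:

```python
from itertools import combinations, product

def states(n):
    return list(product((0, 1), repeat=n))

def ignores(f, n, i, flip_set):
    """Output coordinate i of f is unaffected by flipping any variable in flip_set."""
    for x in states(n):
        for j in flip_set:
            y = list(x)
            y[j] ^= 1
            if f(x)[i] != f(tuple(y))[i]:
                return False
    return True

def decomposable(f, n):
    """Is there a split into nonempty A, B with no cross-dependence at all?"""
    idx = range(n)
    for r in range(1, n):
        for A in combinations(idx, r):
            B = tuple(j for j in idx if j not in A)
            if all(ignores(f, n, i, B) for i in A) and \
               all(ignores(f, n, i, A) for i in B):
                return True
    return False

# Two independent XOR pairs: decomposes as {0,1} | {2,3}.
split = lambda x: (x[0] ^ x[1], x[0] ^ x[1], x[2] ^ x[3], x[2] ^ x[3])
# Global parity written to every coordinate: no split works.
parity = lambda x: ((x[0] ^ x[1] ^ x[2] ^ x[3]),) * 4

print(decomposable(split, 4), decomposable(parity, 4))  # True False
```

The real definition asks for *approximate* independence of roughly balanced parts, which is what lets Φ be graded rather than all-or-nothing.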
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition 
january 2017 by nhaliday
Convex Optimization Applications
there was a problem in ACM113 related to this (the portfolio optimization SDP stuff)
pdf  slides  exposition  finance  investing  optimization  methodology  examples  IEEE  acm  ORFE  nibble  curvature  talks  convexity-curvature 
december 2016 by nhaliday
Ethnic fractionalization and growth | Dietrich Vollrath
Garett Jones did a podcast with The Economics Detective recently on the costs of ethnic diversity. It is particularly worth listening to given that racial identity has re-emerged as a salient element of politics. A quick summary - and the link above includes a nice write-up of relevant sources - would be that diversity within workplaces does not appear to improve outcomes (however those outcomes are measured).

At the same time, there is a parallel literature, touched on in the podcast, about ethnic diversity (or fractionalization, as it is termed in that literature) and economic growth. But one has to be careful drawing a bright line between the two literatures. It does not follow that the results for workplace diversity imply the results regarding economic growth. And this is because the growth results, to the extent that you believe they are robust, all operate through political systems.

So here let me walk through some of the core empirical relationships that have been found regarding ethnic fractionalization and economic growth, and then talk about why you need to take care with over-interpreting them. This is not a thorough literature review, and I realize there are other papers in the same vein. What I’m after is characterizing the essential results.


- objection about sensitivity of measure to definition of clusters seems dumb to me (point is to fix definitions then compare different polities. as long as direction and strength of correlation is fairly robust to changes in clustering, this is a stupid critique)
- also, could probably define a less arbitrary notion of fractionalization (w/o fixed clustering or # of clusters) if using points in a metric/vector/euclidean space (eg, genomes)
- eg, A Generalized Index of Ethno-Linguistic Fractionalization: http://www-3.unipv.it/webdept/prin/workpv02.pdf
So like -E_{A, B ~ X} d(A, B). Or maybe -E_{A, B ~ X} f(d(A, B)) for f an increasing function (in particular, f(x) = x^2).

Note that E ||A - B|| = Θ(E ||E[A] - A||), and E ||A - B||^2 = 2Var A,
for A, B ~ X, so this is just quantifying deviation from mean for Euclidean spaces.

In the case that you have a bunch of different clusters w/ centers equidistant (so n+1 in R^n), measures p_i, and internal variances σ_i^2, you get E ||A - B||^2 = -2∑_i p_i^2σ_i^2 - ∑_{i≠j} p_ip_j(1 + σ_i^2 + σ_j^2) = -∑_i p_i^2(1 + 2σ_i^2) - ∑_i 2p_i(1-p_i)σ_i^2
(inter-center distance scaled to 1 wlog).
(in general, if you allow _approximate_ equidistance, you can pack in exp(O(n)) clusters via JL lemma)
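The suggested index with f(x) = x² and the identity E ||A - B||^2 = 2 Var A can be checked numerically. A minimal sketch over a hypothetical 1-D sample; for the empirical distribution (sampling with replacement), the identity is exact up to floating-point error:

```python
import random

random.seed(2)

def frac_index(points):
    """E[d(A, B)^2] for A, B drawn independently (with replacement) from
    the empirical distribution of a 1-D sample: the f(x) = x^2 variant."""
    n = len(points)
    return sum((a - b) ** 2 for a in points for b in points) / n ** 2

pts = [random.gauss(0, 1) for _ in range(400)]
mean = sum(pts) / len(pts)
var = sum((p - mean) ** 2 for p in pts) / len(pts)  # population variance

print(frac_index(pts), 2 * var)  # equal: E||A - B||^2 = 2 Var A
```

The same pairwise-distance form works unchanged for genomes embedded in a vector space, which is the point of the generalized index.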
econotariat  economics  growth-econ  diversity  spearhead  study  summary  list  survey  cracker-econ  hive-mind  stylized-facts  🎩  garett-jones  wonkish  populism  easterly  putnam-like  metric-space  similarity  dimensionality  embeddings  examples  metrics  sociology  polarization  big-peeps  econ-metrics  s:*  corruption  cohesion  government  econ-productivity  religion  broad-econ  social-capital  madisonian  chart  article  wealth-of-nations  the-bones  political-econ  public-goodish  microfoundations  alesina  🌞  multi  pdf  concept  conceptual-vocab  definition  hari-seldon 
december 2016 by nhaliday
The Son Also Rises | West Hunter
It turns out that you can predict a kid’s social status better if you take into account the grandparents as well as the parents – and the nieces/nephews, cousins, etc. Which means that you’re estimating the breeding value for moxie – which means that Clark needs to read Falconer right now. I’d guess that taking into account grandparents that the kids never even met, ones that died before their birth, will improve prediction. Let the sociologists chew on that.


If culture was the driver, a group could just adopt a different culture (it happens) and decide to be the new upper class by doing all that shit Amy Chua pushes, or possibly by playing cricket. I don’t believe that this ever actually occurs. Although with genetic engineering on the horizon, it may be possible. Of course that would be cheating.

It is hard to change these patterns very much. Universal public education, fluoridation, democracy, haven’t made much difference. I do think that shooting enough people would. Or a massive application of droit de seigneur, or its opposite.


If moxie is genetic, most economists must be wrong about human capital formation. Having fewer kids and spending more money on their education has only a modest effect: this must be the case, given slow long-run social mobility. It seems that social status is transmitted within families largely independently of the resources available to parents. Which is why Ashkenazi Jews could show up at Ellis Island flat broke, with no English, and have so many kids in the Ivy League by the 1920s that they imposed quotas. I’ve never understood why economists ever believed in this.

Moxie is not the same thing as IQ, although IQ must be a component. It is also worth remembering that this trait helps you acquire status – it is probably not quite the same thing as being saintly, honest, or incredibly competent at doing your damn job.

books  summary  west-hunter  review  mobility  🌞  c:**  🎩  2014  spearhead  gregory-clark  biodet  legacy  assortative-mating  long-short-run  signal-noise  latent-variables  age-generation  scitariat  broad-econ  s-factor  flux-stasis  multi  models  microfoundations  honor  integrity  ability-competence  impact  regression-to-mean  agri-mindset  alt-inst  economics  human-capital  interdisciplinary  social-science  sociology  sports  analogy  examples  class  inequality  britain  europe  nordic  japan  korea  china  asia  latin-america 
november 2016 by nhaliday
Why Information Grows – Paul Romer
thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/
books  summary  review  economics  growth-econ  interdisciplinary  hmm  physics  thinking  feynman  tradeoffs  paul-romer  econotariat  🎩  🎓  scholar  aphorism  lens  signal-noise  cartoons  skeleton  s:**  giants  electromag  mutation  genetics  genomics  bits  nibble  stories  models  metameta  metabuch  problem-solving  composition-decomposition  structure  abstraction  zooming  examples  knowledge  human-capital  behavioral-econ  network-structure  info-econ  communication  learning  information-theory  applications  volo-avolo  map-territory  externalities  duplication  spreading  property-rights  lattice  multi  government  polisci  policy  counterfactual  insight  paradox  parallax  reduction  empirical  detail-architecture  methodology  crux  visual-understanding  theory-practice  matching  analytical-holistic  branches  complement-substitute  local-global  internet  technology  cost-benefit  investing  micro  signaling  limits  public-goodish  interpretation 
september 2016 by nhaliday
Learn Difficult Concepts with the ADEPT Method – BetterExplained
Make explanations ADEPT: Use an Analogy, Diagram, Example, Plain-English description, and then a Technical description.
thinking  education  learning  teaching  tutoring  better-explained  analogy  visual-understanding  examples 
july 2016 by nhaliday
For potential Ph.D. students
Ravi Vakil's advice for PhD students

General advice:
Think actively about the creative process. A subtle leap is required from undergraduate thinking to active research (even if you have done undergraduate research). Think explicitly about the process, and talk about it (with me, and with others). For example, in an undergraduate class any Ph.D. student at Stanford will have tried to learn absolutely all the material flawlessly. But in order to know everything needed to tackle an important problem on the frontier of human knowledge, one would have to spend years reading many books and articles. So you'll have to learn differently. But how?

Don't be narrow and concentrate only on your particular problem. Learn things from all over the field, and beyond. The facts, methods, and insights from elsewhere will be much more useful than you might realize, possibly in your thesis, and most definitely afterwards. Being broad is a good way of learning to develop interesting questions.

When you learn the theory, you should try to calculate some toy cases, and think of some explicit basic examples.

Talk to other graduate students. A lot. Organize reading groups. Also talk to post-docs, faculty, visitors, and people you run into on the street. I learn the most from talking with other people. Maybe that's true for you too.

Specific topics:
- seminars
- giving talks
- writing
- links to other advice
advice  reflection  learning  thinking  math  phd  expert  stanford  grad-school  academia  insight  links  strategy  long-term  growth  🎓  scholar  metabuch  org:edu  success  tactics  math.AG  tricki  meta:research  examples  concrete  s:*  info-dynamics  s-factor  prof  org:junk  expert-experience 
may 2016 by nhaliday
hardness  hardware  hari-seldon  harvard  henrich  heterodox  heuristic  hi-order-bits  hidden-motives  high-dimension  high-variance  higher-ed  history  hive-mind  hmm  homo-hetero  homogeneity  honor  howto  hsu  huge-data-the-biggest  human-capital  human-ml  humanity  humility  hypocrisy  hypothesis-testing  ideas  identity  ideology  IEEE  illusion  impact  impetus  impro  incentives  increase-decrease  individualism-collectivism  industrial-revolution  inequality  inference  info-dynamics  info-econ  information-theory  init  inner-product  innovation  insight  instinct  institutions  integral  integrity  intel  intelligence  interdisciplinary  interests  internet  interpretation  intersection-connectedness  intervention  intricacy  intuition  invariance  investing  iraq-syria  iron-age  isotropy  iteration-recursion  janus  japan  jargon  journos-pundits  justice  kernels  knowledge  korea  kumbaya-kult  labor  language  latent-variables  latin-america  lattice  law  leadership  learning  learning-theory  lecture-notes  legacy  len:long  lens  lesswrong  letters  levers  leviathan  limits  linear-algebra  linearity  liner-notes  links  list  literature  local-global  logic  long-short-run  long-term  longevity  love-hate  lower-bounds  machine-learning  macro  madisonian  magnitude  management  manifolds  map-territory  marginal  market-failure  market-power  markets  markov  martial  martingale  matching  math  math.AC  math.AG  math.AT  math.CA  math.CO  math.CT  math.CV  math.DS  math.FA  math.GN  math.GR  math.MG  math.NT  math.RT  mathtariat  matrix-factorization  meaningness  measure  measurement  mechanics  media  medicine  medieval  mediterranean  MENA  mental-math  meta:math  meta:prediction  meta:research  meta:rhetoric  meta:science  meta:war  metabuch  metameta  methodology  metric-space  metrics  michael-jordan  micro  microfoundations  microsoft  migration  military  miri-cfar  mit  ML-MAP-E  mobile  mobility  model-class  models  moments  
monetary-fiscal  morality  mostly-modern  motivation  multi  multiplicative  musk  mutation  myth  n-factor  narrative  nationalism-globalism  natural-experiment  nature  near-far  network-structure  neuro  neuro-nitgrit  neurons  new-religion  news  nibble  nietzschean  nihil  nitty-gritty  nl-and-so-can-you  no-go  noble-lie  noblesse-oblige  noise-structure  nonlinearity  nordic  norms  northeast  novelty  nuclear  number  numerics  nutrition  nyc  objektbuch  occident  ocw  old-anglo  oly  oly-programming  online-learning  open-closed  operational  opsec  optics  optimate  optimism  optimization  order-disorder  orders  ORFE  org:bleg  org:edge  org:edu  org:junk  org:lite  org:mat  org:nat  org:popup  organizing  orient  oscillation  outcome-risk  outliers  overflow  p:whenever  PAC  paleocon  papers  paradox  parallax  parasites-microbiome  parsimony  patho-altruism  patience  paul-romer  pdf  peace-violence  people  personality  perturbation  pessimism  phalanges  pharma  phase-transition  phd  philosophy  photography  phys-energy  physics  pic  piracy  plots  pls  polanyi-marx  polarization  policy  polisci  political-econ  politics  polynomials  population  populism  positivity  power  power-law  pragmatic  pre-2013  pre-ww2  prediction  predictive-processing  primitivism  princeton  priors-posteriors  privacy  pro-rata  probability  problem-solving  prof  programming  proofs  properties  property-rights  prudence  psych-architecture  psychiatry  psychology  public-goodish  publishing  putnam-like  puzzles  q-n-a  qra  quantifiers-sums  quantitative-qualitative  quantum  quantum-info  questions  quixotic  quotes  race  rand-approx  random  randy-ayndy  ranking  rant  rationality  ratty  realness  realpolitik  reason  rec-math  recruiting  redistribution  reduction  reference  reflection  regression  regression-to-mean  regulation  reinforcement  relativity  religion  rent-seeking  replication  reputation  research  research-program  responsibility  
retention  retrofit  review  revolution  rhetoric  rhythm  right-wing  rigidity  rigor  risk  ritual  robotics  robust  roots  rot  rounding  s-factor  s:*  s:**  s:***  s:null  sample-complexity  sapiens  scale  scholar  science  scifi-fantasy  scitariat  search  securities  security  selection  self-interest  sequential  series  shakespeare  shift  signal-noise  signaling  signum  similarity  simplex  simulation  singularity  sinosphere  skeleton  skunkworks  slides  slippery-slope  smoothness  social  social-capital  social-choice  social-norms  social-psych  social-science  sociality  sociology  socs-and-mops  soft-question  software  space  sparsity  spatial  speaking  spearhead  spectral  speculation  speed  speedometer  spengler  spock  sports  spreading  stagnation  stanford  startups  stat-mech  statesmen  stats  status  stereotypes  stochastic-processes  stock-flow  stories  strategy  street-fighting  stress  strings  structure  study  studying  stylized-facts  subjective-objective  success  sum-of-squares  summary  survey  sv  symmetry  synchrony  synthesis  systematic-ad-hoc  tactics  tails  talks  tcs  tcstariat  teaching  tech  technology  techtariat  telos-atelos  temperance  temperature  terrorism  tetlock  the-basilisk  the-bones  the-classics  the-devil  the-founding  the-great-west-whale  the-self  the-trenches  the-watchers  the-west  the-world-is-just-atoms  theory-of-mind  theory-practice  theos  thermo  thick-thin  thiel  things  thinking  threat-modeling  thurston  tidbits  tightness  time  time-preference  tip-of-tongue  tools  top-n  topology  track-record  trade  tradeoffs  tradition  transportation  tribalism  tricki  tricks  trivia  trust  truth  turing  tutorial  tutoring  twitter  unaffiliated  uncertainty  unintended-consequences  unit  universalism-particularism  urban-rural  us-them  usa  vague  values  vampire-squid  vc-dimension  venture  video  visual-understanding  visualization  visuo  vitality  volo-avolo  walls  
walter-scheidel  war  waves  wealth  wealth-of-nations  web  welfare-state  west-hunter  westminster  whole-partial-many  wiki  winner-take-all  wire-guided  wisdom  within-without  wonkish  wordlessness  world-war  wormholes  worrydream  writing  wut  X-not-about-Y  yak-shaving  yoga  zero-positive-sum  zooming  🌞  🎓  🎩  👳  🔬 

Copy this bookmark: