Information Processing: PanOpticon in my Pocket: 0.35GB/month of surveillance, no charge!

hsu scitariat commentary links data cocktail intel privacy opsec google time density spatial mobile finance tech network-structure anonymity identity advertising huge-data-the-biggest security threat-modeling labor speculation examples inference open-closed

september 2018 by nhaliday

Lateralization of brain function - Wikipedia

september 2018 by nhaliday

Language

Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the processing of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing

The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/

In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes, and multiple perspectives all shoved together, e.g., cubism. Distinctively RH paintings, by contrast, emphasize vistas with great depth of field, and thus space and time,[1] emotion, figurative painting, and scenes related to the life world. In music, the LH likes simple, repetitive rhythms; the RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy, and the ability to notice emotional nuance expressed facially, vocally and bodily, are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious to them.

Both Oswald Spengler, in The Decline of the West, and McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories, but they continue to exist when theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem shows that not everything true can be proven to be true: any consistent formal system rich enough for arithmetic contains true statements it cannot prove. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the undecidability of the halting problem shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure (LH), when it comes to … [more]
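
The diagonal construction behind Turing's result can be made concrete: from any claimed halting oracle, one builds a program that does the opposite of whatever the oracle predicts about it, so the prediction is wrong either way. A minimal sketch (all names are my own; `halts` stands in for the impossible oracle):

```python
def diagonal_against(halts):
    """Build a program g that defeats the claimed oracle halts(f) -> bool."""
    def g():
        if halts(g):
            while True:   # oracle says "g halts", so g loops forever
                pass
        return            # oracle says "g loops", so g halts immediately
    return g

def oracle_fails_on(halts):
    g = diagonal_against(halts)
    prediction = halts(g)
    # By construction, g halts exactly when the oracle predicts it loops,
    # so the prediction is wrong about g no matter what it answers.
    actually_halts = not prediction
    return prediction != actually_halts

# Whatever a candidate oracle answers, it is wrong about its diagonal program:
assert oracle_fails_on(lambda f: True)
assert oracle_fails_on(lambda f: False)
```

Since the argument works for every candidate `halts`, no such oracle can exist, which is the halting problem's undecidability.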

gnon reflection books summary review neuro neuro-nitgrit things thinking metabuch order-disorder apollonian-dionysian bio examples near-far symmetry homo-hetero logic inference intuition problem-solving analytical-holistic n-factor europe the-great-west-whale occident alien-character detail-architecture art theory-practice philosophy being-becoming essence-existence language psychology cog-psych egalitarianism-hierarchy direction reason learning novelty science anglo anglosphere coarse-fine neurons truth contradiction matching empirical volo-avolo curiosity uncertainty theos axioms intricacy computation analogy essay rhetoric deep-materialism new-religion knowledge expert-experience confidence biases optimism pessimism realness whole-partial-many theory-of-mind values competition reduction subjective-objective communication telos-atelos ends-means turing fiction increase-decrease innovation creative thick-thin spengler multi ratty hanson complex-systems structure concrete abstraction network-s

Laryngeal nerve - RationalWiki

august 2018 by nhaliday

Giraffe neck nerve that takes circuitous route around heart ("evolution has no foresight")

wiki examples bio evolution neuro counterexample religion theos volo-avolo degrees-of-freedom selection telos-atelos local-global unintended-consequences optimization manifolds tip-of-tongue embodied eden-heaven

8 PCA – A Powerful Method for Analyze Ecological Niches

august 2018 by nhaliday

Influences of ecology and biogeography on shaping the distributions of cryptic species: three bat tales in Iberia: https://academic.oup.com/biolinnean/article/112/1/150/2415750

Combining Historical Biogeography with Niche Modeling in the Caprifolium Clade of Lonicera (Caprifoliaceae, Dipsacales): https://watermark.silverchair.com/syq011.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAagwggGkBgkqhkiG9w0BBwagggGVMIIBkQIBADCCAYoGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMnQcew1QnnjkjJSlVAgEQgIIBW-Nu-4L3xpOdRIb27NdbMbhPjaeByMM3g6H1bpeMMK4OJ9gBOH7V5WfuKGlHlsgsStQQLC_s2YGVu5KDOtwhudWOPFqrXmYlAXjhFNi5hFNpCxjNT-4tTJlRJHU5plgPE2BWZht5okuM2sngjX3t5dDScmz0oTBvu7xnUXo3sbGkad6gw-za6Rpyl5_3-nnnbOpz6WeqfxcR7NDGwPd741QVJKjjp-FHPf8JdWN3mcsLMVJ6p11FoeMeQdA7gsyXhKDPfE8sJ2Xamjxk5uSaGkfi1bi71OB1Ag0UvV2xlON1UwWD9V8tE7e3JJQanv_aKgKyppuXQikoMhH05x_nCFsiVif-_-26Yyx0CMIHv4so81sOpwN5YM_BISyUp_RoT2yfjiEhZpcJlyWX4z6ZeKAUEICloT8evsOX8Ll4FUocBHARhnqZgRlc8w33b_J3wslXv-PVBvvXNs0h
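
As a reminder of what the method in these papers computes, here is a minimal PCA via eigendecomposition of the sample covariance matrix, run on synthetic stand-in data rather than anything from the linked studies:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "niche" data: 200 sites x 4 correlated environmental variables
# (two latent factors plus a little noise) -- NOT the linked papers' data.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 4)) \
    + rng.normal(scale=0.1, size=(200, 4))

Xc = X - X.mean(axis=0)                  # center each variable
cov = Xc.T @ Xc / (len(Xc) - 1)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]        # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs[:, :2]             # site scores on the first two PCs
explained = eigvals / eigvals.sum()      # fraction of variance per component
```

Because the synthetic data is essentially rank two, the first two components soak up nearly all the variance; in real niche data, the `explained` vector tells you how many environmental axes are worth keeping.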

pdf article study methodology bio ecology data analysis stats exploratory matrix-factorization geography environment time crosstab history letters correlation evolution distribution examples high-dimension multi chart howto objektbuch metabuch nibble data-science things

The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains

nibble pdf study article essay ratty bostrom physics lower-bounds interdisciplinary computation frontier singularity civilization communication time phys-energy thermo entropy-like lens intelligence futurism philosophy software hardware enhancement no-go data scale magnitude network-structure structure complex-systems concurrency density bits retention mechanics electromag quantum quantum-info speed information-theory measure chemistry gravity relativity the-world-is-just-atoms dirty-hands skunkworks gedanken ideas hard-tech nitty-gritty intricacy len:long spatial whole-partial-many frequency neuro internet web trivia cocktail humanity composition-decomposition instinct reason illusion the-self psychology cog-psych dennett within-without signal-noise coding-theory quotes scifi-fantasy fiction giants death long-short-run janus eden-heaven efficiency finiteness iteration-recursion cycles nietzschean big-peeps examples

april 2018 by nhaliday

Argument, intuition, and recursion

ratty lesswrong clever-rats acmtariat nibble reflection thinking metameta metabuch skeleton reason math thick-thin empirical science rationality epistemic intuition logic economics models theory-practice applicability-prereqs heuristic problem-solving analytical-holistic futurism lens speedometer frontier caching universalism-particularism duality fourier examples ai risk speed robust reinforcement machine-learning social-science tricki meta:rhetoric debate crux composition-decomposition structure convergence zooming neurons checklists advice strategy meta:prediction tetlock

april 2018 by nhaliday

Prisoner's dilemma - Wikipedia

march 2018 by nhaliday

caveat to result below:

An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
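
For reference, the basic iterated game these results concern can be simulated in a few lines. A minimal sketch with the conventional payoffs (T, R, P, S) = (5, 3, 1, 0) and two classic strategies (the strategy names are standard; the function names are my own):

```python
# Conventional PD payoffs, as (row player, column player):
# mutual cooperation R=3, mutual defection P=1, temptation T=5, sucker S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Total payoffs for each player over `rounds` iterations."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b
```

Against itself, tit-for-tat locks into mutual cooperation (300 points each over 100 rounds), while an unconditional defector beats tit-for-tat by only the first-round temptation (104 vs 99); this is the sense in which conditional cooperation outperforms blind defection in repeated play.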

https://alfanl.com/2018/04/12/defection/

Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are thus that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, and if you are silenced, the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/

To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html

The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual selection):

... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:

http://www.pnas.org/content/109/26/10409.full

http://www.pnas.org/content/109/26/10409.full.pdf

https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043

- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters4,5 remains scarce6–8. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously9,10.

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity11,12. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature4,5, reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.

...

We will show that the interaction between selfish and strongly reciprocal … [more]

concept
conceptual-vocab
wiki
reference
article
models
GT-101
game-theory
anthropology
cultural-dynamics
trust
cooperate-defect
coordination
iteration-recursion
sequential
axelrod
discrete
smoothness
evolution
evopsych
EGT
economics
behavioral-econ
sociology
new-religion
deep-materialism
volo-avolo
characterization
hsu
scitariat
altruism
justice
group-selection
decision-making
tribalism
organizing
hari-seldon
theory-practice
applicability-prereqs
bio
finiteness
multi
history
science
social-science
decision-theory
commentary
study
summary
giants
the-trenches
zero-positive-sum
🔬
bounded-cognition
info-dynamics
org:edge
explanation
exposition
org:nat
eden
retention
long-short-run
darwinian
markov
equilibrium
linear-algebra
nitty-gritty
competition
war
explanans
n-factor
europe
the-great-west-whale
occident
china
asia
sinosphere
orient
decentralized
markets
market-failure
cohesion
metabuch
stylized-facts
interdisciplinary
physics
pdf
pessimism
time
insight
the-basilisk
noblesse-oblige
the-watchers
ideas
l
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
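The enforcement property behind all of this can be checked directly: a memory-one extortionate ZD strategy, in the Press–Dyson parametrization, pins the long-run payoffs to the linear relation s_X − P = χ(s_Y − P) against *any* opponent. A minimal sketch with the conventional PD payoffs; the extortion factor χ, the weight φ, and the opponent's mixed strategy q are arbitrary illustrative choices:

```python
import numpy as np

# Conventional PD payoffs: R (mutual cooperation), S (sucker), T (temptation), P (mutual defection)
R, S, T, P = 3, 0, 5, 1

def extortion_strategy(chi, phi):
    """Press-Dyson extortionate ZD strategy: probabilities of cooperating
    conditioned on the previous round's outcome (CC, CD, DC, DD)."""
    return np.array([
        1 + phi * (1 - chi) * (R - P),
        1 + phi * ((S - P) - chi * (T - P)),
        phi * ((T - P) - chi * (S - P)),
        0.0,
    ])

def long_run_payoffs(p, q):
    """Exact long-run payoffs for memory-one strategies p (player X) and q
    (player Y), via the stationary distribution of the 4-state Markov chain."""
    qy = np.array([q[0], q[2], q[1], q[3]])  # Y sees CD and DC with roles swapped
    M = np.array([[px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)]
                  for px, py in zip(p, qy)])
    # Stationary vector v: v M = v, with entries summing to 1
    A = np.vstack([M.T - np.eye(4), np.ones(4)])
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    v = np.linalg.lstsq(A, b, rcond=None)[0]
    return v @ np.array([R, S, T, P]), v @ np.array([R, T, S, P])

chi, phi = 3.0, 0.05                  # extortion factor 3; phi small enough to keep probs in [0, 1]
p = extortion_strategy(chi, phi)      # -> [0.8, 0.35, 0.35, 0]
q = np.array([0.7, 0.2, 0.9, 0.1])   # an arbitrary stochastic opponent

sx, sy = long_run_payoffs(p, q)
# The extortioner's surplus over P is chi times the opponent's, whatever q is:
print(sx - P, chi * (sy - P))         # the two printed values coincide
```

The same relation is also the "do poorly against their own kind" intuition above: two extortioners enforce s_X − P = χ(s_Y − P) and s_Y − P = χ(s_X − P) simultaneously, which for χ > 1 is only consistent with both scoring exactly P.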

https://alfanl.com/2018/04/12/defection/

Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing, because incentives are such that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, or if you are silenced the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/

To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html

The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel selection (group vs. individual selection):

... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:

http://www.pnas.org/content/109/26/10409.full

http://www.pnas.org/content/109/26/10409.full.pdf

https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a take-it-or-leave-it bargain, and... if the demos refuses... violence?
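The take-it-or-leave-it logic is easy to see concretely. Against purely self-interested responders the proposer's best offer is the minimum; once responders reject "unfair" offers, as they do in experiments, the proposer's best response rises toward an even split. A minimal sketch; the pie size and the responders' acceptance thresholds are made-up numbers for illustration:

```python
# Ultimatum game over a pie of `pie` units: the proposer offers o, the
# responder accepts (split: pie-o for proposer, o for responder) or
# rejects (both get 0).

def best_offer(pie, thresholds):
    """Proposer's expected-payoff-maximizing offer against a population of
    responders who accept iff the offer meets their minimum threshold."""
    def expected_payoff(o):
        accept_rate = sum(t <= o for t in thresholds) / len(thresholds)
        return (pie - o) * accept_rate
    return max(range(pie + 1), key=expected_payoff)

# Purely self-interested responders accept anything: offer the minimum.
print(best_offer(10, [0, 0, 0, 0, 0]))   # -> 0

# Responders with (hypothetical) fairness thresholds reject low offers,
# so the proposer's best response moves toward an even split.
print(best_offer(10, [2, 3, 4, 4, 5]))   # -> 5
```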

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043

- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters4,5 remains scarce6–8. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously9,10.

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity11,12. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature4,5, reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.
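The incentive structure behind altruistic punishment can be sketched in a one-shot linear public goods game: free-riding is individually dominant (each contributed unit returns only r/n < 1 to its contributor), until strong reciprocators fine defectors. A minimal sketch; the parameters are illustrative, not the experimental values from Fehr and Fischbacher:

```python
# One-shot linear public goods game: n players each hold endowment c and
# either contribute it to a common pool or keep it; the pool is multiplied
# by r (with 1 < r < n) and split equally among all n players.

def deviation_payoffs(n=4, c=10, r=1.6, fine=0):
    """Focal player's payoff from defecting while the other n-1 contribute
    (each of whom then fines the defector), versus contributing with them.
    Punishing is costly for the punishers too -- that is what makes it
    altruistic -- but the deviator's incentive depends only on the fine."""
    defect = c + r * c * (n - 1) / n - fine * (n - 1)
    cooperate = r * c * n / n
    return defect, cooperate

# Without punishment, free-riding dominates: each contributed unit returns
# only r/n = 0.4 to the contributor.
print(deviation_payoffs())        # -> (22.0, 16.0): defecting beats contributing

# With strong reciprocators fining the defector, the ordering flips:
print(deviation_payoffs(fine=5))  # -> (7.0, 16.0): contributing now pays
```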

...

We will show that the interaction between selfish and strongly reciprocal … [more]

march 2018 by nhaliday

Reflections on Random Kitchen Sinks – arg min blog

acmtariat ben-recht org:bleg nibble talks video reflection success ranking machine-learning acm papers liner-notes research stories random kernels approximation frontier rigor michael-jordan estimate summary tightness linear-algebra replication science the-trenches realness deep-learning model-class concept exposition tricks gradient-descent optimization composition-decomposition parsimony examples reduction systematic-ad-hoc numerics intricacy robust perturbation empirical rounding

december 2017 by nhaliday


Physics 152: Gravity, Fluids, Waves, Heat

september 2017 by nhaliday

lots of good lecture notes with pictures, worked examples, and simulations

unit
org:edu
org:junk
course
physics
mechanics
gravity
tidbits
symmetry
calculation
examples
lecture-notes
simulation
dynamic
dynamical
visualization
visual-understanding
ground-up
fluid
waves
oscillation
thermo
stat-mech
p:whenever
accretion
math.CA
hi-order-bits
nitty-gritty
linearity
spatial
space
entropy-like
temperature
proofs
yoga
plots
september 2017 by nhaliday

The “Hearts and Minds” Fallacy: Violence, Coercion, and Success in Counterinsurgency Warfare | International Security | MIT Press Journals

august 2017 by nhaliday

The U.S. prescription for success has had two main elements: to support liberalizing, democratizing reforms to reduce popular grievances; and to pursue a military strategy that carefully targets insurgents while avoiding harming civilians. An analysis of contemporaneous documents and interviews with participants in three cases held up as models of the governance approach—Malaya, Dhofar, and El Salvador—shows that counterinsurgency success is the result of a violent process of state building in which elites contest for power, popular interests matter little, and the government benefits from uses of force against civilians.

https://twitter.com/foxyforecaster/status/893049155337244672

https://archive.is/zhOXD

this is why liberal states mostly fail in counterinsurgency wars

http://www.cbsnews.com/news/commentary-why-are-we-still-in-afghanistan/

contrary study:

Nation Building Through Foreign Intervention: Evidence from Discontinuities in Military Strategies: https://academic.oup.com/qje/advance-article/doi/10.1093/qje/qjx037/4110419

This study uses discontinuities in U.S. strategies employed during the Vietnam War to estimate their causal impacts. It identifies the effects of bombing by exploiting rounding thresholds in an algorithm used to target air strikes. Bombing increased the military and political activities of the communist insurgency, weakened local governance, and reduced noncommunist civic engagement. The study also exploits a spatial discontinuity across neighboring military regions that pursued different counterinsurgency strategies. A strategy emphasizing overwhelming firepower plausibly increased insurgent attacks and worsened attitudes toward the U.S. and South Vietnamese government, relative to a more hearts-and-minds-oriented approach. JEL Codes: F35, F51, F52

anecdote:

Military Adventurer Raymond Westerling On How To Defeat An Insurgency: http://www.socialmatter.net/2018/03/12/military-adventurer-raymond-westerling-on-how-to-defeat-an-insurgency/

study
war
meta:war
military
defense
terrorism
MENA
strategy
tactics
cynicism-idealism
civil-liberty
kumbaya-kult
foreign-policy
realpolitik
usa
the-great-west-whale
occident
democracy
antidemos
institutions
leviathan
government
elite
realness
multi
twitter
social
commentary
stylized-facts
evidence-based
objektbuch
attaq
chart
contrarianism
scitariat
authoritarianism
nl-and-so-can-you
westminster
iraq-syria
polisci
🎩
conquest-empire
news
org:lite
power
backup
martial
nietzschean
pdf
piracy
britain
asia
developing-world
track-record
expansionism
peace-violence
interests
china
race
putnam-like
anglosphere
latin-america
volo-avolo
cold-war
endogenous-exogenous
shift
natural-experiment
rounding
gnon
org:popup
europe
germanic
japan
history
mostly-modern
world-war
examples
death
nihil
dominant-minority
tribalism
ethnocentrism
us-them
letters

august 2017 by nhaliday

Logic | West Hunter

may 2017 by nhaliday

All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html

http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html

And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996

The point here is that the gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.

west-hunter scitariat discussion rant thinking rationality metabuch critique systematic-ad-hoc analytical-holistic metameta ideology philosophy info-dynamics aphorism darwinian prudence pragmatic insight tradition s:* 2016 multi gnon right-wing formal-values values slippery-slope axioms alt-inst heuristic anglosphere optimate flux-stasis flexibility paleocon polisci universalism-particularism ratty hanson list examples migration fertility intervention demographics population biotech enhancement energy-resources biophysical-econ nature military inequality age-generation time ideas debate meta:rhetoric local-global long-short-run gnosis-logos gavisti stochastic-processes eden-heaven politics equilibrium hive-mind genetics defense competition arms peace-violence walter-scheidel speed marginal optimization search time-preference patience futurism meta:prediction accuracy institutions tetlock theory-practice wire-guided priors-posteriors distribution moments biases epistemic nea
No, we don’t.

http://www.amnation.com/vfr/archives/005864.html

http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html

And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996

The point here is that the gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.

may 2017 by nhaliday

Learning by flip-flopping · The File Drawer

techtariat reflection thinking rationality epistemic realness dynamical meta:rhetoric debate economics government policy randy-ayndy markets market-failure labor macro cycles temperance humility meaningness chapman sequential unaffiliated ratty spock nitty-gritty growth-econ evidence-based metameta examples info-dynamics insight oscillation gray-econ

may 2017 by nhaliday

techtariat reflection thinking rationality epistemic realness dynamical meta:rhetoric debate economics government policy randy-ayndy markets market-failure labor macro cycles temperance humility meaningness chapman sequential unaffiliated ratty spock nitty-gritty growth-econ evidence-based metameta examples info-dynamics insight oscillation gray-econ

may 2017 by nhaliday

Fourier transform - Wikipedia

april 2017 by nhaliday

https://en.wikipedia.org/wiki/Fourier_transform#Properties_of_the_Fourier_transform

https://en.wikipedia.org/wiki/Fourier_transform#Tables_of_important_Fourier_transforms
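
The linked property tables can be spot-checked numerically using the DFT as a discrete stand-in for the continuous transform; a minimal sketch (the signal, sizes, and tolerances are arbitrary choices of mine) verifying the time-shift property and Parseval's theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 64, 5
x = rng.standard_normal(N)

X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, m))      # DFT of x circularly shifted right by m
k = np.arange(N)
phase = np.exp(-2j * np.pi * k * m / N)    # phase factor predicted by the shift theorem

assert np.allclose(X_shifted, phase * X)

# Parseval: time-domain energy equals frequency-domain energy / N (unnormalized DFT)
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X) ** 2) / N)
```

The same pattern (compute both sides, compare with `np.allclose`) works for most entries in the properties table.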

nibble math acm math.CA fourier list identity duality math.CV wiki reference multi objektbuch cheatsheet calculation nitty-gritty concept examples integral AMT ground-up IEEE properties

april 2017 by nhaliday

List of games in game theory - Wikipedia

february 2017 by nhaliday

https://twitter.com/BretWeinstein/status/961503023854833665

https://archive.is/qLsD4

The most important patterns:

1. Prisoner's Dilemma

2. Race to the Bottom

3. Free Rider Problem / Tragedy of the Commons / Collective Action

4. Zero Sum vs. Non-Zero Sum

5. Externalities / Principal Agent

6. Diminishing Returns

7. Evolutionarily Stable Strategy / Nash Equilibrium
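
As a concrete instance of pattern 1, the Prisoner's Dilemma is small enough to check by brute force; a sketch (the payoff numbers are the textbook-standard ones, and `is_nash` is a helper name of mine) confirming that mutual defection is the unique pure-strategy Nash equilibrium:

```python
import itertools

# Payoffs as (row player, column player); C = cooperate, D = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(a, b):
    """(a, b) is a pure Nash equilibrium if neither player gains by deviating alone."""
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in "CD")
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in "CD")
    return row_ok and col_ok

equilibria = [s for s in itertools.product("CD", "CD") if is_nash(*s)]
print(equilibria)  # [('D', 'D')] -- defection dominates despite (C, C) paying both more
```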

concept economics micro models examples list game-theory GT-101 wiki reference cooperate-defect multi twitter social discussion backup journos-pundits coordination competition free-riding zero-positive-sum externalities rent-seeking marginal convexity-curvature nonlinearity equilibrium top-n metabuch conceptual-vocab alignment contracts

february 2017 by nhaliday

WARNING: Physics Envy May Be Hazardous To Your Wealth!∗

essay study thinking risk uncertainty epistemic rationality metabuch map-territory complex-systems economics physics interdisciplinary models comparison error 🎩 lens gedanken analogy oscillation examples s:* signal-noise noise-structure finance ORFE info-dynamics theory-practice

february 2017 by nhaliday

essay study thinking risk uncertainty epistemic rationality metabuch map-territory complex-systems economics physics interdisciplinary models comparison error 🎩 lens gedanken analogy oscillation examples s:* signal-noise noise-structure finance ORFE info-dynamics theory-practice

february 2017 by nhaliday

List of Laplace transforms - Wikipedia

february 2017 by nhaliday

= moment-generating function
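
The note means a table of Laplace transforms doubles as a table of moment-generating functions: for a density f, M(t) = E[e^{tX}] is the two-sided Laplace transform of f evaluated at s = -t. A quick Monte Carlo sanity check (the distribution, rate, and sample size are arbitrary choices of mine):

```python
import numpy as np

# X ~ Exp(lam) has MGF M(t) = lam / (lam - t) for t < lam, which is exactly
# the Laplace transform of its density lam * exp(-lam * x) evaluated at s = -t.
rng = np.random.default_rng(0)
lam, t = 2.0, 0.5

samples = rng.exponential(scale=1 / lam, size=1_000_000)
mgf_monte_carlo = np.mean(np.exp(t * samples))   # empirical E[e^{tX}]
mgf_closed_form = lam / (lam - t)                # table entry, read off at s = -t

assert abs(mgf_monte_carlo - mgf_closed_form) < 0.01
```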

concept math acm math.CA probability moments wiki reference calculation objektbuch list examples nibble integral cheatsheet identity AMT properties
february 2017 by nhaliday

probability - Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? - Cross Validated

february 2017 by nhaliday

The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in 100p% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability p given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself.
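
The frequentist reading above is easy to simulate: build the standard 95% interval across many repeated experiments and count how often the procedure brackets the true mean. A sketch (parameters are arbitrary choices of mine; the point is that coverage is a property of the procedure, not of any single realized interval):

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, sigma, n, trials = 10.0, 3.0, 25, 20_000

data = rng.normal(true_mean, sigma, size=(trials, n))
means = data.mean(axis=1)
half_width = 1.96 * sigma / np.sqrt(n)   # known-variance z-interval for the mean

covered = (means - half_width <= true_mean) & (true_mean <= means + half_width)
print(covered.mean())  # close to 0.95
```

Any one realized interval either contains the true mean or it doesn't; the 95% describes the long-run hit rate of the recipe.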

http://stats.stackexchange.com/questions/139290/a-psychology-journal-banned-p-values-and-confidence-intervals-is-it-indeed-wise

PS. Note that my question is not about the ban itself; it is about the suggested approach. I am not asking about frequentist vs. Bayesian inference either. The Editorial is pretty negative about Bayesian methods too; so it is essentially about using statistics vs. not using statistics at all.

wut

http://stats.stackexchange.com/questions/6966/why-continue-to-teach-and-use-hypothesis-testing-when-confidence-intervals-are

http://stats.stackexchange.com/questions/2356/are-there-any-examples-where-bayesian-credible-intervals-are-obviously-inferior

http://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval

http://stats.stackexchange.com/questions/6652/what-precisely-is-a-confidence-interval

http://stats.stackexchange.com/questions/1164/why-havent-robust-and-resistant-statistics-replaced-classical-techniques/

http://stats.stackexchange.com/questions/16312/what-is-the-difference-between-confidence-intervals-and-hypothesis-testing

http://stats.stackexchange.com/questions/31679/what-is-the-connection-between-credible-regions-and-bayesian-hypothesis-tests

http://stats.stackexchange.com/questions/11609/clarification-on-interpreting-confidence-intervals

http://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals

q-n-a overflow nibble stats data-science science methodology concept confidence conceptual-vocab confusion explanation thinking hypothesis-testing jargon multi meta:science best-practices error discussion bayesian frequentist hmm publishing intricacy wut comparison motivation clarity examples robust metabuch 🔬 info-dynamics reference

february 2017 by nhaliday

pr.probability - Identities and inequalities in analysis and probability - MathOverflow

february 2017 by nhaliday

interesting approach to proving Cauchy-Schwarz (symmetry+sum of squares)
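
The symmetry-plus-sum-of-squares approach alluded to is, as I read it, Lagrange's identity: the gap between the two sides of Cauchy-Schwarz is itself a sum of squares, hence nonnegative:

```latex
0 \le \sum_{1 \le i < j \le n} (a_i b_j - a_j b_i)^2
  = \Bigl(\sum_{i=1}^n a_i^2\Bigr)\Bigl(\sum_{j=1}^n b_j^2\Bigr)
    - \Bigl(\sum_{i=1}^n a_i b_i\Bigr)^2
\quad\Longrightarrow\quad
\Bigl(\sum_{i=1}^n a_i b_i\Bigr)^2
  \le \Bigl(\sum_{i=1}^n a_i^2\Bigr)\Bigl(\sum_{i=1}^n b_i^2\Bigr).
```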

q-n-a overflow math math.CA math.FA probability list big-list estimate yoga synthesis structure examples identity nibble sum-of-squares positivity tricki inner-product wisdom integral quantifiers-sums tidbits p:whenever s:null signum
february 2017 by nhaliday

Do grad school students remember everything they were taught in college all the time? - Quora

q-n-a qra grad-school learning synthesis hi-order-bits neurons physics lens analogy cartoons links 🎓 scholar gowers mathtariat feynman giants quotes games nibble thinking zooming retention meta:research big-picture skeleton s:** p:whenever wire-guided narrative intuition lesswrong commentary ground-up limits examples problem-solving info-dynamics knowledge studying ideas the-trenches chart

february 2017 by nhaliday

q-n-a qra grad-school learning synthesis hi-order-bits neurons physics lens analogy cartoons links 🎓 scholar gowers mathtariat feynman giants quotes games nibble thinking zooming retention meta:research big-picture skeleton s:** p:whenever wire-guided narrative intuition lesswrong commentary ground-up limits examples problem-solving info-dynamics knowledge studying ideas the-trenches chart

february 2017 by nhaliday

at.algebraic topology - Teaching homology via everyday examples - MathOverflow

february 2017 by nhaliday

like the handcuff and Russian examples

the solution (I'm slow w/ spatial/topological visualization): http://britton.disted.camosun.bc.ca/jbhandcuff.htm

q-n-a overflow math topology math.AT motivation teaching examples visual-understanding synthesis bio interdisciplinary physics electromag nibble applications concrete multi org:junk

february 2017 by nhaliday

ho.history overview - History of the high-dimensional volume paradox - MathOverflow

q-n-a overflow math math.MG geometry spatial dimensionality limits measure concentration-of-measure history stories giants cartoons soft-question nibble paradox novelty high-dimension examples gotchas

january 2017 by nhaliday

q-n-a overflow math math.MG geometry spatial dimensionality limits measure concentration-of-measure history stories giants cartoons soft-question nibble paradox novelty high-dimension examples gotchas

january 2017 by nhaliday

interpretation - How to understand degrees of freedom? - Cross Validated

january 2017 by nhaliday

From Wikipedia, there are three interpretations of the degrees of freedom of a statistic:

In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.

Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself (which, in sample variance, is one, since the sample mean is the only intermediate step).

Mathematically, degrees of freedom is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined.
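
The parenthetical about sample variance can be made concrete by simulation: estimating the mean from the same data costs one degree of freedom, which is why the unbiased variance estimator divides by n - 1. A sketch (the distribution and sizes are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
true_var, n, trials = 4.0, 5, 200_000

data = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
biased = data.var(axis=1, ddof=0).mean()     # divide by n: systematically too small
unbiased = data.var(axis=1, ddof=1).mean()   # divide by n - 1, i.e. df = n - 1

print(biased, unbiased)  # biased averages near (n-1)/n * 4 = 3.2, unbiased near 4.0
```

`ddof` in numpy is literally "delta degrees of freedom": the number of parameters already estimated from the data.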

...

This is a subtle question. It takes a thoughtful person not to understand those quotations! Although they are suggestive, it turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests.

Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are:

- The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances).

- The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance.

- The F-test (of ratios of estimated variances).

- The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates.

In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it.

...

Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test:

...

This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions. They are differentiated by a parameter ν often referred to as the "degrees of freedom." The standard reasoning about how to determine ν goes like this:

I have k counts. That's k pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the counts must equal n. That's one relationship. I estimated two (or p, generally) parameters from the data. That's two (or p) additional relationships, giving p+1 total relationships. Presuming they (the parameters) are all (functionally) independent, that leaves only k−p−1 (functionally) independent "degrees of freedom": that's the value to use for ν.

The problem with this reasoning (which is the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question.

...

Things went wrong because I violated two requirements of the Chi-squared test:

1. You must use the Maximum Likelihood estimate of the parameters. (This requirement can, in practice, be slightly violated.)

2. You must base that estimate on the counts, not on the actual data! (This is crucial.)

...

The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions, or in terms of functional independence, or as counts of parameters, or anything else of this nature.

We are led to see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.)
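
The simulation trick mentioned at the end can be sketched directly: a chi-squared variable with ν degrees of freedom has mean ν, so averaging the simulated goodness-of-fit statistic reads off its effective DF. Here, with cell probabilities fully specified in advance (no estimated parameters), the statistic should show k - 1 DF; the fair-die setup and sizes are my arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n, trials = 6, 500, 50_000
p = np.full(k, 1 / k)          # fully specified null: a fair die
expected = n * p

counts = rng.multinomial(n, p, size=trials)
stat = ((counts - expected) ** 2 / expected).sum(axis=1)  # Pearson GOF statistic

print(stat.mean())  # close to k - 1 = 5
```

Rerunning this with parameters estimated the "wrong" way (from raw data rather than counts) is exactly how the answer's author caught the Ivy League page's 7-vs-9 DF error.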

q-n-a overflow stats data-science concept jargon explanation methodology things nibble degrees-of-freedom clarity curiosity manifolds dimensionality ground-up intricacy hypothesis-testing examples list ML-MAP-E gotchas

january 2017 by nhaliday

computational complexity - What is the easiest randomized algorithm to motivate to the layperson? - MathOverflow

january 2017 by nhaliday

- volume of shape in R^n

- polynomial identity testing
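For the second item, a minimal illustrative sketch (mine, not from the answers): evaluate both expressions at random points; by the Schwartz–Zippel lemma, a nonzero degree-d polynomial vanishes at a uniformly random point of a size-N range with probability at most d/N, so a handful of agreeing evaluations makes identity overwhelmingly likely.

```python
import random

def probably_equal(p, q, trials=20, bound=10**9):
    """Randomized polynomial identity test: a genuine difference of degree d
    survives one random evaluation with probability <= d/bound (Schwartz-Zippel),
    so `trials` agreements make equality overwhelmingly likely."""
    return all(p(x) == q(x) for x in (random.randrange(bound) for _ in range(trials)))

# (x + 1)^2 == x^2 + 2x + 1 identically: the test always agrees.
assert probably_equal(lambda x: (x + 1) ** 2, lambda x: x * x + 2 * x + 1)
# (x + 1)^2 != x^2 + 1: a single random evaluation almost surely catches it.
assert not probably_equal(lambda x: (x + 1) ** 2, lambda x: x * x + 1)
```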

q-n-a
overflow
tcs
algorithms
rand-approx
random
motivation
list
examples
aaronson
tcstariat
gowers
spatial
geometry
polynomials
teaching
nibble

january 2017 by nhaliday

Shtetl-Optimized » Blog Archive » Logicians on safari

january 2017 by nhaliday

So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

the sequel: http://www.scottaaronson.com/blog/?p=153

tcstariat
aaronson
tcs
computation
complexity
aphorism
examples
list
reflection
philosophy
multi
summary
synthesis
hi-order-bits
interdisciplinary
lens
big-picture
survey
nibble
org:bleg
applications
big-surf
s:*
p:whenever
ideas

january 2017 by nhaliday

Dvoretzky's theorem - Wikipedia

january 2017 by nhaliday

In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s, answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids.

http://mathoverflow.net/questions/143527/intuitive-explanation-of-dvoretzkys-theorem

http://mathoverflow.net/questions/46278/unexpected-applications-of-dvoretzkys-theorem

math
math.FA
inner-product
levers
characterization
geometry
math.MG
concentration-of-measure
multi
q-n-a
overflow
intuition
examples
proofs
dimensionality
gowers
mathtariat
tcstariat
quantum
quantum-info
norms
nibble
high-dimension
wiki
reference
curvature
convexity-curvature
tcs

january 2017 by nhaliday

soft question - Fundamental Examples - MathOverflow

q-n-a overflow math examples list big-list ground-up synthesis big-picture nibble database top-n hi-order-bits logic physics math.CA math.CV differential math.FA algebra math.NT probability math.DS geometry topology graph-theory math.CO tcs cs social-science game-theory GT-101 stats

january 2017 by nhaliday

january 2017 by nhaliday

Existence of the moment generating function and variance - Cross Validated

january 2017 by nhaliday

This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).

In the answer below, we do the following:

1. Show that if the mgf is finite for at least one (strictly) positive value and one negative value, then all positive moments of X are finite (including nonintegral moments).

2. Prove that the condition in the first item above is equivalent to the distribution of X having exponentially bounded tails. In other words, the tails of X fall off at least as fast as those of an exponential random variable Z (up to a constant).

3. Provide a quick note on the characterization of the distribution by its mgf provided it satisfies the condition in item 1.

4. Explore some examples and counterexamples to aid our intuition and, particularly, to show that we should not read undue importance into the lack of finiteness of the mgf.
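A standard counterexample for item 4 (my illustration, not from the answer) is the lognormal: every moment E[X^k] = exp(k²/2) is finite, yet the mgf is infinite for every t > 0 because the right tail is heavier than any exponential, so the condition in item 2 fails.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(size=500_000)  # lognormal(0, 1)

# All moments finite: sample moments match the closed form exp(k^2/2).
assert abs(x.mean() - np.exp(0.5)) < 0.05
assert abs((x**2).mean() - np.exp(2.0)) < 0.5

# Yet item 2 fails: the lognormal tail is heavier than that of an
# exponential with the same mean, so E[exp(tX)] = inf for all t > 0.
expo = rng.exponential(scale=x.mean(), size=500_000)
assert (x > 10).mean() > (expo > 10).mean()
```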

q-n-a
overflow
math
stats
acm
probability
characterization
concept
moments
distribution
examples
counterexample
tails
rigidity
nibble
existence
s:null
convergence
series

january 2017 by nhaliday

"Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character ... - Richard P. Feynman - Google Books

january 2017 by nhaliday

Actually, there was a certain amount of genuine quality to my guesses. I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)—disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, “False!”

physics
math
feynman
thinking
empirical
examples
lens
intuition
operational
stories
metabuch
visual-understanding
thurston
hi-order-bits
geometry
topology
cartoons
giants
👳
nibble
the-trenches
metameta
meta:math
s:**
quotes
gbooks
january 2017 by nhaliday

soft question - Thinking and Explaining - MathOverflow

january 2017 by nhaliday

- good question from Bill Thurston

- great answers by Terry Tao, fedja, Minhyong Kim, gowers, etc.

Terry Tao:

- symmetry as blurring/vibrating/wobbling, scale invariance

- anthropomorphization, adversarial perspective for estimates/inequalities/quantifiers, spending/economy

fedja walks through his thought process from another answer

Minhyong Kim: anthropology of mathematical philosophizing

Per Vognsen: normality as isotropy

comment: conjugate subgroup gHg^-1 ~ "H but somewhere else in G"

gowers: hidden things in basic mathematics/arithmetic

comment by Ryan Budney: x sin(x) via x -> (x, sin(x)), (x, y) -> xy

I kinda get what he's talking about but needed to use Mathematica to get the initial visualization down.

To remind myself later:

- xy can be easily visualized by juxtaposing the two parabolae x^2 and -x^2 diagonally

- x sin(x) can be visualized along that surface by moving your finger along the line (x, 0) but adding some oscillations in y direction according to sin(x)
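The two reminders can be checked mechanically (my sketch): the graph of x·sin(x) is the saddle surface z = xy traced along the path y = sin(x), and the saddle itself really is the two parabolas set diagonally, via the polarization identity.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 1001)

# Budney's composition: x -> (x, sin x) -> x*y. Restricting the saddle
# surface z = x*y to the curve y = sin(x) recovers the graph of x*sin(x).
surface = lambda u, v: u * v
assert np.allclose(surface(x, np.sin(x)), x * np.sin(x))

# The saddle is the parabolas x^2 and -x^2 juxtaposed diagonally:
# x*y = ((x + y)^2 - (x - y)^2) / 4 (polarization identity).
y = np.sin(x)
assert np.allclose(x * y, ((x + y) ** 2 - (x - y) ** 2) / 4)
```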

q-n-a
soft-question
big-list
intuition
communication
teaching
math
thinking
writing
thurston
lens
overflow
synthesis
hi-order-bits
👳
insight
meta:math
clarity
nibble
giants
cartoons
gowers
mathtariat
better-explained
stories
the-trenches
problem-solving
homogeneity
symmetry
fedja
examples
philosophy
big-picture
vague
isotropy
reflection
spatial
ground-up
visual-understanding
polynomials
dimensionality
math.GR
worrydream
scholar
🎓
neurons
metabuch
yoga
retrofit
mental-math
metameta
wisdom
wordlessness
oscillation
operational
adversarial
quantifiers-sums
exposition
explanation
tricki
concrete
s:***
manifolds
invariance
dynamical
info-dynamics
cool
direction

january 2017 by nhaliday

Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)

january 2017 by nhaliday

In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x=(x_1,…,x_n)∈S^n, where S is a finite alphabet (the simplest case is S={0,1}). We imagine that the system evolves via an “updating function” f:S^n→S^n. Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
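As a toy version of that definition (my sketch, not Aaronson's code), one can brute-force whether a given bipartition (A, B) decomposes an updating function f on {0,1}^n, i.e., whether each side's update is a function of that side's bits alone:

```python
from itertools import product

def decomposes(f, n, A):
    """Return True iff f: {0,1}^n -> {0,1}^n splits across the partition
    (A, complement of A): each side's updated bits are determined by that
    side's input bits alone -- no "global integration of information"."""
    B = [i for i in range(n) if i not in A]
    for side in (A, B):
        seen = {}
        for state in product((0, 1), repeat=n):
            key = tuple(state[i] for i in side)
            out = tuple(f(state)[i] for i in side)
            if seen.setdefault(key, out) != out:  # same side-input, different side-output
                return False
    return True

# Flipping every bit updates each half independently: the partition works.
assert decomposes(lambda s: tuple(1 - b for b in s), 4, [0, 1])
# XOR-ing bit 0 with bit 3 couples the halves: no such decomposition.
assert not decomposes(lambda s: (s[0] ^ s[3],) + s[1:], 4, [0, 1])
```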

aaronson
tcstariat
philosophy
dennett
interdisciplinary
critique
nibble
org:bleg
within-without
the-self
neuro
psychology
cog-psych
metrics
nitty-gritty
composition-decomposition
complex-systems
cybernetics
bits
information-theory
entropy-like
forms-instances
empirical
walls
arrows
math.DS
structure
causation
quantitative-qualitative
number
extrema
optimization
abstraction
explanation
summary
degrees-of-freedom
whole-partial-many
network-structure
systematic-ad-hoc
tcs
complexity
hardness
no-go
computation
measurement
intricacy
examples
counterexample
coding-theory
linear-algebra
fields
graphs
graph-theory
expanders
math
math.CO
properties
local-global
intuition
error
definition

january 2017 by nhaliday

Convex Optimization Applications

december 2016 by nhaliday

there was a problem in ACM113 related to this (the portfolio optimization SDP stuff)

pdf
slides
exposition
finance
investing
optimization
methodology
examples
IEEE
acm
ORFE
nibble
curvature
talks
convexity-curvature
december 2016 by nhaliday

Ethnic fractionalization and growth | Dietrich Vollrath

december 2016 by nhaliday

Garett Jones did a podcast with The Economics Detective recently on the costs of ethnic diversity. It is particularly worth listening to given that racial identity has re-emerged as a salient element of politics. A quick summary - and the link above includes a nice write-up of relevant sources - would be that diversity within workplaces does not appear to improve outcomes (however those outcomes are measured).

At the same time, there is a parallel literature, touched on in the podcast, about ethnic diversity (or fractionalization, as it is termed in that literature) and economic growth. But one has to be careful drawing a bright line between the two literatures. It does not follow that the results for workplace diversity imply the results regarding economic growth. And this is because the growth results, to the extent that you believe they are robust, all operate through political systems.

So here let me walk through some of the core empirical relationships that have been found regarding ethnic fractionalization and economic growth, and then talk about why you need to take care with over-interpreting them. This is not a thorough literature review, and I realize there are other papers in the same vein. What I’m after is characterizing the essential results.

--

- objection about sensitivity of measure to definition of clusters seems dumb to me (point is to fix definitions then compare different polities. as long as direction and strength of correlation is fairly robust to changes in clustering, this is a stupid critique)

- also, could probably define a less arbitrary notion of fractionalization (w/o fixed clustering or # of clusters) if using points in a metric/vector/euclidean space (eg, genomes)

- eg, A Generalized Index of Ethno-Linguistic Fractionalization: http://www-3.unipv.it/webdept/prin/workpv02.pdf

So like -E_{A, B ~ X} d(A, B). Or maybe -E_{A, B ~ X} f(d(A, B)) for f an increasing function (in particular, f(x) = x^2).

Note that E ||A - B|| = Θ(E ||E[A] - A||), and E ||A - B||^2 = 2Var A,

for A, B ~ X, so this is just quantifying deviation from mean for Euclidean spaces.

In the case that you have a bunch of different clusters w/ centers equidistant (so n+1 in R^n), measures p_i, and internal variances σ_i^2, you get -E ||A - B||^2 = -2∑_i p_i^2σ_i^2 - ∑_{i≠j} p_ip_j(1 + σ_i^2 + σ_j^2) = ∑_i p_i^2 - 1 - 2∑_i p_iσ_i^2

(inter-center distance scaled to 1 wlog).

(in general, if you allow _approximate_ equidistance, you can pack in exp(O(n)) clusters via JL lemma)
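The distance-based index above is easy to compute directly (a sketch; the function name and Monte-Carlo pair sampling are mine): for point clouds in Euclidean space, −E d(A, B) needs no cluster labels at all, and a split population scores lower than a homogeneous one.

```python
import numpy as np

def fractionalization(points, rng, pairs=50_000):
    """Generalized fractionalization as in the note: the negated expected
    pairwise distance -E d(A, B) over random pairs from the population."""
    i = rng.integers(len(points), size=pairs)
    j = rng.integers(len(points), size=pairs)
    return -np.linalg.norm(points[i] - points[j], axis=1).mean()

rng = np.random.default_rng(2)
one_cluster = rng.normal(0, 0.1, size=(2000, 2))
two_clusters = np.vstack([rng.normal(0, 0.1, size=(1000, 2)),
                          rng.normal(3, 0.1, size=(1000, 2))])
# A homogeneous population scores higher (less negative) than a split one,
# with no arbitrary choice of clustering required.
assert fractionalization(one_cluster, rng) > fractionalization(two_clusters, rng)
```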

econotariat
economics
growth-econ
diversity
spearhead
study
summary
list
survey
cracker-econ
hive-mind
stylized-facts
🎩
garett-jones
wonkish
populism
easterly
putnam-like
metric-space
similarity
dimensionality
embeddings
examples
metrics
sociology
polarization
big-peeps
econ-metrics
s:*
corruption
cohesion
government
econ-productivity
religion
broad-econ
social-capital
madisonian
chart
article
wealth-of-nations
the-bones
political-econ
public-goodish
microfoundations
alesina
🌞
multi
pdf
concept
conceptual-vocab
definition
hari-seldon

december 2016 by nhaliday

The Son Also Rises | West Hunter

november 2016 by nhaliday

It turns out that you can predict a kid’s social status better if you take into account the grandparents as well as the parents – and the nieces/nephews, cousins, etc. Which means that you’re estimating the breeding value for moxie – which means that Clark needs to read Falconer right now. I’d guess that taking into account grandparents that the kids never even met, ones that died before their birth, will improve prediction. Let the sociologists chew on that.

...

If culture was the driver, a group could just adopt a different culture (it happens) and decide to be the new upper class by doing all that shit Amy Chua pushes, or possibly by playing cricket. I don’t believe that this ever actually occurs. Although with genetic engineering on the horizon, it may be possible. Of course that would be cheating.

It is hard to change these patterns very much. Universal public education, fluoridation, democracy, haven’t made much difference. I do think that shooting enough people would. Or a massive application of droit de seigneur, or its opposite.

...

If moxie is genetic, most economists must be wrong about human capital formation. Having fewer kids and spending more money on their education has only a modest effect: this must be the case, given slow long-run social mobility. It seems that social status is transmitted within families largely independently of the resources available to parents. Which is why Ashkenazi Jews could show up at Ellis Island flat broke, with no English, and have so many kids in the Ivy League by the 1920s that they imposed quotas. I’ve never understood why economists ever believed in this.

Moxie is not the same thing as IQ, although IQ must be a component. It is also worth remembering that this trait helps you acquire status – it is probably not quite the same thing as being saintly, honest, or incredibly competent at doing your damn job.

https://westhunt.wordpress.com/2014/03/24/simple-mobility-models/

https://westhunt.wordpress.com/2014/03/29/simple-mobility-models-ii/

books
summary
west-hunter
review
mobility
🌞
c:**
🎩
2014
spearhead
gregory-clark
biodet
legacy
assortative-mating
long-short-run
signal-noise
latent-variables
age-generation
scitariat
broad-econ
s-factor
flux-stasis
multi
models
microfoundations
honor
integrity
ability-competence
impact
regression-to-mean
agri-mindset
alt-inst
economics
human-capital
interdisciplinary
social-science
sociology
sports
analogy
examples
class
inequality
britain
europe
nordic
japan
korea
china
asia
latin-america

november 2016 by nhaliday

Why Information Grows – Paul Romer

september 2016 by nhaliday

thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/

books
summary
review
economics
growth-econ
interdisciplinary
hmm
physics
thinking
feynman
tradeoffs
paul-romer
econotariat
🎩
🎓
scholar
aphorism
lens
signal-noise
cartoons
skeleton
s:**
giants
electromag
mutation
genetics
genomics
bits
nibble
stories
models
metameta
metabuch
problem-solving
composition-decomposition
structure
abstraction
zooming
examples
knowledge
human-capital
behavioral-econ
network-structure
info-econ
communication
learning
information-theory
applications
volo-avolo
map-territory
externalities
duplication
spreading
property-rights
lattice
multi
government
polisci
policy
counterfactual
insight
paradox
parallax
reduction
empirical
detail-architecture
methodology
crux
visual-understanding
theory-practice
matching
analytical-holistic
branches
complement-substitute
local-global
internet
technology
cost-benefit
investing
micro
signaling
limits
public-goodish
interpretation

september 2016 by nhaliday

Learn Difficult Concepts with the ADEPT Method – BetterExplained

july 2016 by nhaliday

Make explanations ADEPT: Use an Analogy, Diagram, Example, Plain-English description, and then a Technical description.

thinking
education
learning
teaching
tutoring
better-explained
analogy
visual-understanding
examples
july 2016 by nhaliday

For potential Ph.D. students

may 2016 by nhaliday

Ravi Vakil's advice for PhD students

General advice:

Think actively about the creative process. A subtle leap is required from undergraduate thinking to active research (even if you have done undergraduate research). Think explicitly about the process, and talk about it (with me, and with others). For example, in an undergraduate class any Ph.D. student at Stanford will have tried to learn absolutely all the material flawlessly. But in order to know everything needed to tackle an important problem on the frontier of human knowledge, one would have to spend years reading many books and articles. So you'll have to learn differently. But how?

Don't be narrow and concentrate only on your particular problem. Learn things from all over the field, and beyond. The facts, methods, and insights from elsewhere will be much more useful than you might realize, possibly in your thesis, and most definitely afterwards. Being broad is a good way of learning to develop interesting questions.

When you learn the theory, you should try to calculate some toy cases, and think of some explicit basic examples.

Talk to other graduate students. A lot. Organize reading groups. Also talk to post-docs, faculty, visitors, and people you run into on the street. I learn the most from talking with other people. Maybe that's true for you too.

Specific topics:

- seminars

- giving talks

- writing

- links to other advice

advice
reflection
learning
thinking
math
phd
expert
stanford
grad-school
academia
insight
links
strategy
long-term
growth
🎓
scholar
metabuch
org:edu
success
tactics
math.AG
tricki
meta:research
examples
concrete
s:*
info-dynamics
s-factor
prof
org:junk
expert-experience

may 2016 by nhaliday

Code Jam Statistics

april 2016 by nhaliday

Haskell people:

https://www.go-hero.net/jam/10/name/Reid

https://www.go-hero.net/jam/17/name/rotsor

https://www.go-hero.net/jam/16/name/watashi

https://www.go-hero.net/jam/17/name/holdenlee

Scala guy: https://www.go-hero.net/jam/13/name/winger

google
oly
oly-programming
tools
yak-shaving
aggregator
links
data
database
pls
programming
examples
best-practices
multi
people

april 2016 by nhaliday

Notes Essays—Peter Thiel’s CS183: Startup—Stanford, Spring 2012

business startups strategy course thiel contrarianism barons definite-planning entrepreneurialism lecture-notes skunkworks innovation competition market-power winner-take-all usa anglosphere duplication education higher-ed law ranking success envy stanford princeton harvard elite zero-positive-sum war truth realness capitalism markets darwinian rent-seeking google facebook apple microsoft amazon capital scale network-structure tech business-models twitter social media games frontier time rhythm space musk mobile ai transportation examples recruiting venture metabuch metameta skeleton crooked wisdom gnosis-logos thinking polarization synchrony allodium antidemos democracy things exploratory dimensionality nationalism-globalism trade technology distribution moments personality phalanges stereotypes tails plots visualization creative nietzschean thick-thin psych-architecture wealth class morality ethics status extra-introversion info-dynamics narrative stories fashun myth the-classics literature big-peeps crime

february 2016 by nhaliday

february 2016 by nhaliday
