nhaliday + heuristic   50

The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one had noticed before, the theorems were still true, and the gaps were due more to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.[5]) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent, and strictly speaking practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted,[6] and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications); doubtless as modern math evolves, other fields have sometimes needed to go back and clean up the foundations, and will in the future.[7]

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota[13]

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas the comment, made while editing Mathematical Reviews, that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor
12 days ago by nhaliday
AFL + QuickCheck = ?
Adventures in fuzzing. Also differences between testing culture in software and hardware.
techtariat  dan-luu  programming  engineering  checking  random  haskell  path-dependence  span-cover  heuristic  libraries  links  tools  devtools  software  hardware  culture  formal-methods  local-global  golang  correctness
11 weeks ago by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. RH paintings, in particular, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, the LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy, and the ability to notice emotional nuance expressed facially, vocally and bodily, are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
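The diagonal argument behind the halting-problem claim quoted above can be sketched in a few lines. This is my illustrative model, not anything from the essay: the "programs" are represented symbolically rather than executed, and `g_behavior`/`consistent` are invented names. The diagonal program g asks a claimed halting decider about itself and then does the opposite, so no answer the decider gives can be correct:

```python
# Sketch of Turing's diagonal argument: no total decider can correctly
# answer "does g halt?" for the diagonal program g, because g consults
# the decider about itself and does the opposite. We model g's behavior
# symbolically (True = halts, False = loops forever).

def g_behavior(claimed_answer):
    """What g actually does, given the decider's claim about g itself."""
    if claimed_answer:      # decider says "g halts" -> g loops forever
        return False        # i.e. g does not halt
    else:                   # decider says "g loops" -> g halts at once
        return True         # i.e. g halts

def consistent(claimed_answer):
    # The decider is correct iff its claim matches g's actual behavior.
    return claimed_answer == g_behavior(claimed_answer)

# Neither possible answer is consistent, so no correct decider exists.
assert not any(consistent(ans) for ans in (True, False))
```

Both candidate answers contradict g's actual behavior, which is the whole content of the proof.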
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s
september 2018 by nhaliday
Christian ethics - Wikipedia
Christian ethics is a branch of Christian theology that defines virtuous behavior and wrong behavior from a Christian perspective. Systematic theological study of Christian ethics is called moral theology, possibly with the name of the respective theological tradition, e.g. Catholic moral theology.

Christian virtues are often divided into four cardinal virtues and three theological virtues. Christian ethics includes questions regarding how the rich should act toward the poor, how women are to be treated, and the morality of war. Christian ethicists, like other ethicists, approach ethics from different frameworks and perspectives. The approach of virtue ethics has also become popular in recent decades, largely due to the work of Alasdair MacIntyre and Stanley Hauerwas.[2]

...

The seven Christian virtues are from two sets of virtues. The four cardinal virtues are Prudence, Justice, Restraint (or Temperance), and Courage (or Fortitude). The cardinal virtues are so called because they are regarded as the basic virtues required for a virtuous life. The three theological virtues are Faith, Hope, and Love (or Charity).

- Prudence: also described as wisdom, the ability to judge between actions with regard to appropriate actions at a given time
- Justice: also considered as fairness, the most extensive and most important virtue[20]
- Temperance: also known as restraint, the practice of self-control, abstention, and moderation tempering the appetition
- Courage: also termed fortitude, forbearance, strength, endurance, and the ability to confront fear, uncertainty, and intimidation
- Faith: belief in God, and in the truth of His revelation, as well as obedience to Him (cf. Rom 1:5; 16:26)[21][22]
- Hope: expectation of and desire for receiving; refraining from despair and the capability of not giving up; the belief that God will be eternally present in every human's life and will never give up on His love.
- Charity: a supernatural virtue that helps us love God and our neighbors as we love ourselves.

The seven deadly sins, also known as the capital vices or cardinal sins, is a grouping and classification of vices of Christian origin.[1] Behaviours or habits are classified under this category if they directly give birth to other immoralities.[2] According to the standard list, they are pride, greed, lust, envy, gluttony, wrath, and sloth,[2] which are also contrary to the seven virtues. These sins are often thought to be abuses or excessive versions of one's natural faculties or passions (for example, gluttony abuses one's desire to eat).

originally:
1 Gula (gluttony)
2 Luxuria/Fornicatio (lust, fornication)
3 Avaritia (avarice/greed)
4 Superbia (pride, hubris)
5 Tristitia (sorrow/despair/despondency)
6 Ira (wrath)
7 Vanagloria (vainglory)
8 Acedia (sloth)

Golden Rule: https://en.wikipedia.org/wiki/Golden_Rule
The Golden Rule (which can be considered a law of reciprocity in some religions) is the principle of treating others as one would wish to be treated. It is a maxim that is found in many religions and cultures.[1][2] The maxim may appear as _either a positive or negative injunction_ governing conduct:

- One should treat others as one would like others to treat oneself (positive or directive form).[1]
- One should not treat others in ways that one would not like to be treated (negative or prohibitive form).[1]
- What you wish upon others, you wish upon yourself (empathic or responsive form).[1]
The Golden Rule _differs from the maxim of reciprocity captured in do ut des—"I give so that you will give in return"—and is rather a unilateral moral commitment to the well-being of the other without the expectation of anything in return_.[3]

The concept occurs in some form in nearly every religion[4][5] and ethical tradition[6] and is often considered _the central tenet of Christian ethics_[7] [8]. It can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a person empathizing with others. Philosophically, it involves a person perceiving their neighbor also as "I" or "self".[9] Sociologically, "love your neighbor as yourself" is applicable between individuals, between groups, and also between individuals and groups. In evolution, "reciprocal altruism" is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family.[10] In economics, Richard Swift, referring to ideas from David Graeber, suggests that "without some kind of reciprocity society would no longer be able to exist."[11]

...

Seneca the Younger (c. 4 BC–65 AD), a practitioner of Stoicism (c. 300 BC–200 AD) expressed the Golden Rule in his essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."[23]

...

The "Golden Rule" was given by Jesus of Nazareth, who used it to summarize the Torah: "Do to others what you want them to do to you." and "This is the meaning of the law of Moses and the teaching of the prophets"[33] (Matthew 7:12 NCV, see also Luke 6:31). The common English phrasing is "Do unto others as you would have them do unto you". A similar form of the phrase appeared in a Catholic catechism around 1567 (certainly in the reprint of 1583).[34] The Golden Rule is _stated positively numerous times in the Hebrew Pentateuch_ as well as the Prophets and Writings. Leviticus 19:18 ("Forget about the wrong things people do to you, and do not try to get even. Love your neighbor as you love yourself."; see also Great Commandment) and Leviticus 19:34 ("But treat them just as you treat your own citizens. Love foreigners as you love yourselves, because you were foreigners one time in Egypt. I am the Lord your God.").

The Old Testament Deuterocanonical books of Tobit and Sirach, accepted as part of the Scriptural canon by the Catholic Church, Eastern Orthodoxy, and the Non-Chalcedonian Churches, express a _negative form_ of the golden rule:

"Do to no one what you yourself dislike."

— Tobit 4:15
"Recognize that your neighbor feels as you do, and keep in mind your own dislikes."

— Sirach 31:15
Two passages in the New Testament quote Jesus of Nazareth espousing the _positive form_ of the Golden rule:

Matthew 7:12
Do to others what you want them to do to you. This is the meaning of the law of Moses and the teaching of the prophets.

Luke 6:31
Do to others what you would want them to do to you.

...

The passage in the book of Luke then continues with Jesus answering the question, "Who is my neighbor?", by telling the parable of the Good Samaritan, indicating that "your neighbor" is anyone in need.[35] This extends to all, including those who are generally considered hostile.

Jesus' teaching goes beyond the negative formulation of not doing what one would not like done to themselves, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the needs for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another. Taken as a rule of judgment, both formulations of the golden rule, the negative and positive, are equally applicable.[36]

The Golden Rule: Not So Golden Anymore: https://philosophynow.org/issues/74/The_Golden_Rule_Not_So_Golden_Anymore
Pluralism is the most serious problem facing liberal democracies today. We can no longer ignore the fact that cultures around the world are not simply different from one another, but profoundly so; and the most urgent area in which this realization faces us is in the realm of morality. Western democratic systems depend on there being at least a minimal consensus concerning national values, especially in regard to such things as justice, equality and human rights. But global communication, economics and the migration of populations have placed new strains on Western democracies. Suddenly we find we must adjust to peoples whose suppositions about the ultimate values and goals of life are very different from ours. A clear lesson from events such as 9/11 is that disregarding these differences is not an option. Collisions between worldviews and value systems can be cataclysmic. Somehow we must learn to manage this new situation.

For a long time, liberal democratic optimism in the West has been shored up by suppositions about other cultures and their differences from us. The cornerpiece of this optimism has been the assumption that whatever differences exist they cannot be too great. A core of ‘basic humanity’ surely must tie all of the world’s moral systems together – and if only we could locate this core we might be able to forge agreements and alliances among groups that otherwise appear profoundly opposed. We could perhaps then shelve our cultural or ideological differences and get on with the more pleasant and productive business of celebrating our core agreement. One cannot fail to see how this hope is repeated in order to buoy optimism about the Middle East peace process, for example.

...

It becomes obvious immediately that no matter how widespread we want the Golden Rule to be, there are some ethical systems that we have to admit do not have it. In fact, there are a few traditions that actually disdain the Rule. In philosophy, the Nietzschean tradition holds that the virtues implicit in the Golden Rule are antithetical to the true virtues of self-assertion and the will-to-power. Among religions, there are a good many that prefer to emphasize the importance of self, cult, clan or tribe rather than of general others; and a good many other religions for whom large populations are simply excluded from goodwill, being labeled as outsiders, heretics or … [more]
article  letters  philosophy  morality  ethics  formal-values  religion  christianity  theos  n-factor  europe  the-great-west-whale  occident  justice  war  peace-violence  janus  virtu  list  sanctity-degradation  class  lens  wealth  gender  sex  sexuality  multi  concept  wiki  reference  theory-of-mind  ideology  cooperate-defect  coordination  psychology  cog-psych  social-psych  emotion  cybernetics  ecology  deep-materialism  new-religion  hsu  scitariat  aphorism  quotes  stories  fiction  gedanken  altruism  parasites-microbiome  food  diet  nutrition  individualism-collectivism  taxes  government  redistribution  analogy  lol  troll  poast  death  long-short-run  axioms  judaism  islam  tribalism  us-them  kinship  interests  self-interest  dignity  civil-liberty  values  homo-hetero  diversity  unintended-consequences  within-without  increase-decrease  signum  ascetic  axelrod  guilt-shame  patho-altruism  history  iron-age  mediterranean  the-classics  robust  egalitarianism-hierarchy  intricacy  hypocrisy  parable  roots  explanans  crux  s
april 2018 by nhaliday
Finders, keepers - Wikipedia
Finders, keepers is an English adage with the premise that when something is unowned or abandoned, whoever finds it first can claim it. This idiom relates to an ancient Roman law of similar meaning and has been expressed in various ways over the centuries.[1] Of particular difficulty is how best to define when exactly something is unowned or abandoned, which can lead to legal or ethical disputes.

...

In the field of social simulation, Rosaria Conte and Cristiano Castelfranchi have used "finders, keepers" as a case study for simulating the evolution of norms in simple societies.[2]
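The norm being simulated can be sketched minimally: an unowned or abandoned item goes to its first finder, and later claims fail. This is purely illustrative and not Conte & Castelfranchi's actual model; the `claim`/`abandon` functions and agent names are invented:

```python
# Toy "finders, keepers" norm: the first agent to find an unowned item
# becomes its owner; later claims are rejected until the item is
# abandoned again. Illustrative sketch only.

def claim(ownership, item, agent):
    """Apply the norm: a claim succeeds only if the item is unowned."""
    if item not in ownership:          # unowned/abandoned -> finder keeps it
        ownership[item] = agent
        return True
    return False                       # already owned -> claim rejected

def abandon(ownership, item):
    ownership.pop(item, None)          # abandonment makes it claimable again

ownership = {}
assert claim(ownership, "coin", "alice")      # first finder keeps it
assert not claim(ownership, "coin", "bob")    # second claim fails
abandon(ownership, "coin")
assert claim(ownership, "coin", "bob")        # abandoned -> claimable again
```

As the Wikipedia excerpt notes, the hard part in practice is exactly the `item not in ownership` test: deciding when something truly counts as unowned or abandoned.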
concept  heuristic  law  leviathan  wiki  reference  aphorism  metabuch  philosophy  canon  history  iron-age  mediterranean  the-classics  anglosphere  conquest-empire  civil-liberty  social-norms  social-structure  universalism-particularism  axioms  ethics  simulation  egalitarianism-hierarchy  inequality  power  models  GT-101  EGT  new-religion  deep-materialism  parallax
april 2018 by nhaliday
Behaving Discretely: Heuristic Thinking in the Emergency Department
I find compelling evidence of heuristic thinking in this setting: patients arriving in the emergency department just after their 40th birthday are roughly 10% more likely to be tested for and 20% more likely to be diagnosed with ischemic heart disease (IHD) than patients arriving just before this date, despite the fact that the incidence of heart disease increases smoothly with age.

Figure 1: Proportion of ED patients tested for heart attack
pdf  study  economics  behavioral-econ  field-study  biases  heuristic  error  healthcare  medicine  meta:medicine  age-generation  aging  cardio  bounded-cognition  shift  trivia  cocktail  pro-rata
december 2017 by nhaliday
Sequence Modeling with CTC
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
acmtariat  techtariat  org:bleg  nibble  better-explained  machine-learning  deep-learning  visual-understanding  visualization  analysis  let-me-see  research  sequential  audio  classification  model-class  exposition  language  acm  approximation  comparison  markov  iteration-recursion  concept  atoms  distribution  orders  DP  heuristic  optimization  trees  greedy  matching  gradient-descent
december 2017 by nhaliday
Of Mice and Men | West Hunter
It’s not always easy figuring out how a pathogen causes disease. There is an example in mice for which the solution was very difficult, so difficult that we would probably have failed to discover the cause of a similarly obscure infectious disease in humans.

Mycoplasma pulmonis causes a chronic obstructive lung disease in mice, but it wasn’t easy to show this. The disease was first described in 1915, and by 1940, people began to suspect Mycoplasma pulmonis might be the cause. But then again, maybe not. It was often found in mice that seemed healthy. Pure cultures of this organism did not consistently produce lung disease – which means that it didn’t satisfy Koch’s postulates, in particular postulate 1 (The microorganism must be found in abundance in all organisms suffering from the disease, but should not be found in healthy organisms.) and postulate 3 (The cultured microorganism should cause disease when introduced into a healthy organism.).

Well, those postulates are not logic itself, but rather a useful heuristic. Koch knew that, even if lots of other people don’t.

This respiratory disease of mice is long-lasting, but slow to begin. It can take half a lifetime – a mouse lifetime, that is – and that made finding the cause harder. It required patience, which means I certainly couldn’t have done it.

Here’s how they solved it. You can raise germ-free mice. In the early 1970s, researchers injected various candidate pathogens into different groups of germ-free mice and waited to see which, if any, developed this chronic lung disease. It was Mycoplasma pulmonis, all right, but it had taken 60 years to find out.

It turned out that susceptibility differed between different mouse strains – genetic susceptibility was important. Co-infection with other pathogens affected the course of the disease. Microenvironmental details mattered – mainly ammonia in cages where the bedding wasn’t changed often enough. But it didn’t happen without that mycoplasma, which was a key causal link, something every engineer understands but many MDs don’t.

If there was a similarly obscure infectious disease of humans, say one that involved a fairly common bug found in both the just and the unjust, one that took decades for symptoms to manifest – would we have solved it? Probably not.

Cooties are everywhere.

gay germ search: https://westhunt.wordpress.com/2013/07/21/of-mice-and-men/#comment-15905
It’s hard to say, depends on how complicated the path of causation is. Assuming that I’m even right, of course. Some good autopsy studies might be fruitful – you’d look for microanatomical brain differences, as with narcolepsy. Differences in gene expression, maybe. You could look for a pathogen – using the digital version of RDA (representational difference analysis), say on discordant twins. Do some old-fashioned epidemiology. Look for marker antibodies, signs of some sort of immunological event.

Do all of the above on gay rams – lots easier to get started, much less whining from those being vivisected.

Patrick Moore found the virus causing Kaposi’s sarcoma without any funding at all. I’m sure Peter Thiel could afford a serious try.
west-hunter  scitariat  discussion  ideas  reflection  analogy  model-organism  bio  disease  parasites-microbiome  medicine  epidemiology  heuristic  thick-thin  stories  experiment  track-record  intricacy  gotchas  low-hanging  🌞  patience  complex-systems  meta:medicine  multi  poast  methodology  red-queen  brain-scan  neuro  twin-study  immune  nature  gender  sex  sexuality  thiel  barons  gwern  stylized-facts  inference  apollonian-dionysian
september 2017 by nhaliday
Atrocity statistics from the Roman Era
Gibbon, Decline & Fall v.2 ch.XVI: < 2,000 k. under Roman persecution.
Ludwig Hertling ("Die Zahl de Märtyrer bis 313", 1944) estimated 100,000 Christians killed between 30 and 313 CE. (cited -- unfavorably -- by David Henige, Numbers From Nowhere, 1998)
Catholic Encyclopedia, "Martyr": number of Christian martyrs under the Romans unknown, unknowable. Origen says not many. Eusebius says thousands.

...

General population decline during The Fall of Rome: 7,000,000 [make link]
- Colin McEvedy, The New Penguin Atlas of Medieval History (1992)
- From 2nd Century CE to 4th Century CE: Empire's population declined from 45M to 36M [i.e. 9M]
- From 400 CE to 600 CE: Empire's population declined by 20% [i.e. 7.2M]
- Paul Bairoch, Cities and economic development: from the dawn of history to the present, p.111
- "The population of Europe except Russia, then, having apparently reached a high point of some 40-55 million people by the start of the third century [ca.200 C.E.], seems to have fallen by the year 500 to about 30-40 million, bottoming out at about 20-35 million around 600." [i.e. ca.20M]
- Francois Crouzet, A History of the European Economy, 1000-2000 (University Press of Virginia: 2001) p.1.
- "The population of Europe (west of the Urals) in c. AD 200 has been estimated at 36 million; by 600, it had fallen to 26 million; another estimate (excluding ‘Russia’) gives a more drastic fall, from 44 to 22 million." [i.e. 10M or 22M]

also:
The geometric mean of these two extremes would come to 4½ per day, which is a credible daily rate for the really bad years.

why geometric mean? can you get it as the MLE given min{X1, ..., Xn} and max{X1, ..., Xn} for {X_i} iid Poissons? some kinda limit? think it might just be a rule of thumb.

yeah, it's a rule of thumb. found it in his book (epub).
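the rule of thumb is easy to check numerically: the geometric mean sqrt(low × high) splits the difference between estimates that span orders of magnitude, where an arithmetic mean would be dominated by the high end. a minimal sketch – the low/high figures here are hypothetical, chosen only so the result lands near the 4½/day in the quote:

```python
import math

def geometric_mean(low, high):
    """Rule-of-thumb point estimate between two extreme estimates."""
    return math.sqrt(low * high)

# Hypothetical extremes for a daily death rate (deaths/day):
low, high = 1.0, 20.0

gm = geometric_mean(low, high)   # ~4.5/day, the rule-of-thumb compromise
am = (low + high) / 2            # 10.5/day, dominated by the high estimate

print(f"geometric mean: {gm:.2f}/day, arithmetic mean: {am:.2f}/day")
```

(since Python 3.8 the stdlib also has `statistics.geometric_mean`, which generalizes this to more than two estimates)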
org:junk  data  let-me-see  scale  history  iron-age  mediterranean  the-classics  death  nihil  conquest-empire  war  peace-violence  gibbon  trivia  multi  todo  AMT  expectancy  heuristic  stats  ML-MAP-E  data-science  estimate  magnitude  population  demographics  database  list  religion  christianity  leviathan
september 2017 by nhaliday
Edward Feser: Conservatism, populism, and snobbery
https://archive.is/nuwnX
feser is good on this: chief task of conservative intellectuals is to defend epistemic credentials of mere prejudice

The Right vindicates common sense distinctions: https://bonald.wordpress.com/2017/02/10/the-right-vindicates-common-sense-distinctions/
In some ways, we’re already there. One of the core intellectual tasks of the Right has been, and will continue to be, the analysis and rehabilitation of categories found useful by pre-modern humanity but rejected by moderns in their fits of ideologically-driven oversimplification.
Consider these three:
1. Friend vs. Enemy. Carl Schmitt famously put this distinction at the core of his political theory in explicit defiance of the liberal humanitarianism of his day that wanted to reduce all questions to abstract morality and economic efficiency. The friend vs. enemy distinction, Schmitt insisted, is independent of these. To identify a threatening nation as the enemy does not necessarily make any statement about its moral, aesthetic, or economic qualities. Schmitt observed that the liberal nations (for him, the victors of WWI) in fact do mobilize against threats and competitors; forbidding themselves the vocabulary of “friend” and “enemy” means they recast their hostilities in terms of moral absolutes. The nation they attack cannot be called their own enemy, so it must be demonized as the enemy of all humanity. This will be a recurring conservative argument. Eliminating a needed category doesn’t eliminate hostility between peoples; it only forces them to be incorrectly conceptualized along moral lines, which actually diminishes our ability to empathize with our opponent.
2. Native vs. Foreigner. Much of what Schmitt said about the distinction between friend and enemy applies to the more basic categorization of people as belonging to “us” or as being alien. I argued recently in the Orthosphere, concerning the topic of Muslim immigration, that we can actually be more sympathetic to Muslims among us if we acknowledge that our concern is not that their ways are objectionable in some absolute (moral/philosophical) sense, but that they are alien to the culture we wish to preserve as dominant in our nation. Reflections about the “universal person” are also quite relevant to this.
3. Masculine vs. feminine. Conservatives have found little to recommend the liberals’ distinction between biological “sex” and socially constructed “gender”. However, pre-modern peoples had intriguing intuitions of masculinity and femininity as essences or principles that can be considered beyond the strict context of sexual reproduction. Largely defined by relation to each other (so that, for example, a woman relates in a feminine way to other people more than to wild animals or inanimate objects), even things other than sexually reproducing animals can participate in these principles to some extent. For example, the sun is masculine while Luna is feminine, at least in how they present themselves to us. Masculinity and femininity seem to represent poles in the structure of relationality itself, and so even the more mythical attributions of these essences were not necessarily intended metaphorically.

The liberal critique of these categories, and others not accommodated by their ideology, comes down to the following
1. Imperialism of the moral. The category in question is recognized as nonmoral, and the critic asserts that it is morally superior to use only moral categories. (“Wouldn’t it be better to judge someone based on whether he’s a good person than on where he was born?”) Alternatively, the critic presumes that other categories actually are reducible to moral categories, and other categories are condemned for being inaccurate in their presumed implicit moral evaluations. (“He’s a good person. How can you call him an ‘alien’ as if he were some kind of monster?!”)
2. Appeal to boundary cases. Sometimes the boundaries of the criticized category are fuzzy. Perhaps a particular person is like “us” in some ways but unlike “us” in others. From this, conclude that the category is arbitrary and meaningless.
3. Emotivism. Claim that the criticized category is actually a sub-rational emotional response. It must be because it has no place in liberal ideology, which the liberal presumes to be coextensive with reason itself. And in fact, when certain ways of thinking are made socially unacceptable, they will likely only pop out in emergencies and moments of distress. It would be no different with moral categories–if the concepts “evil” and “unfair” were socially disfavored, people would only resort to them when intolerably provoked and undoubtedly emotional.
4. Imputation of sinister social motives. The critic points out that the categorization promotes some established social structure; therefore, it must be an illusion.

Why the Republican Party Is Falling Apart: http://nationalinterest.org/feature/why-the-republican-party-falling-apart-22491?page=show
Moore and a great many of his voters subscribe to a simplistic and exaggerated view of the world and the conflicts it contains. Moore has voiced the belief that Christian communities in Illinois or Indiana, or somewhere “up north,” are under Sharia law. That’s absurd. But why does he believe it, and why do voters trust him despite such beliefs? Because on the other side is another falsehood, more sophisticated but patently false: the notion that unlimited Islamic immigration to Europe, for example, is utterly harmless, or the notion that Iran is an implacable fundamentalist threat while good Sunni extremists in Saudi Arabia are our true and faithful friends. Each of the apocalyptic beliefs held by a Roy Moore or his supporters contains a fragment of truth—or at least amounts to a rejection of some falsehood that has become an article of faith among America’s elite. The liberal view of the world to which Democrats and elite Republicans alike subscribe is false, but the resources for showing its falsehood in a nuanced way are lacking. Even the more intellectual sort of right-winger who makes it through the cultural indoctrination of his college and peer class tends to be mutilated by the experience. He—most often a he—comes out of it embittered and reactionary or else addicted to opium dreams of neo-medievalism or platonic republics. Since there are few nonliberal institutions of political thought, the right that recognizes the falsehood of liberalism and rejects it tends to be a force of feeling rather than reflection. Moore, of course, has a legal education, and he assuredly reads the Bible. He’s not unintelligent, but he cannot lean upon a well-balanced and subtle right because such a thing hardly exists in our environment. Yet there is a need for a right nonetheless, and so a Roy Moore or a Donald Trump fills the gap. There is only one thing the Republican establishment can do if it doesn’t like that: reform itself from stem to stern.

Who Are ‘The People’ Anyway?: http://www.theamericanconservative.com/articles/who-are-the-people-anyway/
Beware of those who claim to speak for today's populist audience.
- Paul Gottfried

Gottfried's got a real chip on his shoulder about the Straussians
journos-pundits  essay  right-wing  politics  ideology  government  civil-liberty  culture  egalitarianism-hierarchy  class  hypocrisy  populism  tradition  society  rhetoric  aristos  prudence  meta:rhetoric  debate  multi  gnon  us-them  gender  coalitions  twitter  social  commentary  unaffiliated  self-interest  prejudice  paleocon  current-events  news  org:mag  org:foreign  instinct  counter-revolution  axioms  straussian  subculture  trump  reason  orwellian  universalism-particularism  pragmatic  systematic-ad-hoc  analytical-holistic  philosophy  info-dynamics  insight  slippery-slope  values  heuristic  alt-inst  humility  emotion  metabuch  thinking  list  top-n  persuasion  duty  impetus  left-wing  wisdom  love-hate  judgement
july 2017 by nhaliday
Culture and the Historical Process
This article discusses the importance of accounting for cultural values and beliefs when studying the process of historical economic development. A notion of culture as heuristics or rules-of-thumb that aid in decision making is described. Because cultural traits evolve based upon relative fitness, historical shocks can have persistent impacts if they alter the costs and benefits of different traits. A number of empirical studies confirm that culture is an important mechanism that helps explain why historical shocks can have persistent impacts; these are reviewed here. As an example, I discuss the colonial origins hypothesis (Acemoglu, Johnson and Robinson, 2001), and show that our understanding of the transplantation of European legal and political institutions during the colonial period remains incomplete unless the values and beliefs brought by European settlers are taken into account. It is these cultural beliefs that formed the foundation of the initial institutions that in turn were key for long-term economic development.

...

The notion of culture that I employ is that of decision making heuristics or ‘rules-of-thumb’ that have evolved given our need to make decisions in complex and uncertain environments. Using theoretical models, Boyd and Richerson (1985, 2005) show that if information acquisition is either costly or imperfect, the use of heuristics or rules-of-thumb in decision-making can arise optimally. By relying on general beliefs, values or gut feelings about the “right” thing to do in different situations, individuals may not behave in a manner that is optimal in every instance, but they do save on the costs of obtaining the information necessary to always behave optimally. The benefit of these heuristics is that they are “fast-and-frugal”, a benefit which in many environments outweighs the costs of imprecision (Gigerenzer and Goldstein, 1996). Therefore, culture, as defined in this paper, refers to these decision-making heuristics, which typically manifest themselves as values, beliefs, or social norms.
study  economics  growth-econ  methodology  explanation  conceptual-vocab  concept  culture  cultural-dynamics  anthropology  broad-econ  path-dependence  roots  institutions  decision-making  heuristic  🎩  europe  age-of-discovery  expansionism  world  developing-world  wealth-of-nations  🌞  s:*  pseudoE  political-econ  north-weingast-like  social-norms  microfoundations  hari-seldon
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
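the mean-reversion vs. trend-plus-random-walk distinction above is easy to see in simulation: a mean-reverting parameter stays within a bounded band no matter the horizon, while a drifting walk's typical deviation keeps growing with time, so small steps really do accumulate into big change. a minimal sketch with hypothetical parameters (phi, drift, sigma are illustrative, not from the post):

```python
import random

random.seed(0)

def simulate(steps, mode, phi=0.9, drift=0.02, sigma=1.0):
    """Evolve a parameter for `steps` periods.
    mode='revert': AR(1), each period pulled back toward 0 (mean-reverting).
    mode='walk':   random walk plus a small deterministic trend.
    """
    x = 0.0
    for _ in range(steps):
        shock = random.gauss(0, sigma)
        if mode == "revert":
            x = phi * x + shock
        else:
            x = x + drift + shock
    return x

def rms(mode, steps, runs=500):
    """Typical magnitude of the parameter after `steps` periods."""
    return (sum(simulate(steps, mode) ** 2 for _ in range(runs)) / runs) ** 0.5

for steps in (10, 1000):
    print(steps, round(rms("revert", steps), 1), round(rms("walk", steps), 1))
```

the mean-reverting series settles near sigma/sqrt(1 - phi²) regardless of horizon; the walk's spread grows like sqrt(steps) plus the accumulated drift, which is why extreme values of a drifting parameter can't be dismissed as out-of-range flukes.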

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts in in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea
may 2017 by nhaliday
More on Low-Trust Russia: Do Russian Who Wants To Be A Millionaire contestants avoid asking the audience because they expect audience members to deliberately mislead them?

Xenocrypt on the math of economic geography: “A party’s voters should get more or less seats based on the shape of the monotonic curve with integral one they can be arranged in” might sound like a very silly belief, but it is equivalent to the common mantra that you deserve to lose if your voters are ‘too clustered’

Okay, look, I went way too long between writing up links posts this time, so you’re getting completely dated obsolete stuff like Actually, Neil Gorsuch Is A Champion Of The Little Guy. But aside from the Gorsuch reference this is actually pretty timeless – basically an argument for strict constructionism on the grounds that “a flexible, living, bendable law will always tend to be bent in the direction of the powerful.”

Otium: Are Adult Developmental Stages Real? Looks at Kohlberg, Kegan, etc.

I mentioned the debate over 5-HTTLPR, a gene supposedly linked to various mental health outcomes, in my review of pharmacogenomics. Now a very complete meta-analysis finds that a lot of the hype around it isn’t true. This is pretty impressive since there are dozens of papers claiming otherwise, and maybe the most striking example yet of how apparently well-replicated a finding can be and still fail to pan out.

Rootclaim describes itself as a crowd-sourced argument mapper. See for example its page on who launched the chemical attack in Syria.

Apparently if you just kill off all the cells that are growing too old, you can partly reverse organisms’ aging (paper, popular article)

The Politics Of The Gene: “Contrary to expectations, however, we find little evidence that it is more common for whites, the socioeconomically advantaged, or political conservatives to believe that genetics are important for health and social outcomes.”

Siberian Fox linked me to two studies that somewhat contradicted my minimalist interpretation of childhood trauma here: Alemany on psychosis and Turkheimer on harsh punishment.

Lyrebird is an AI project which, if fed samples of a person’s voice, can read off any text you want in the same voice. See their demo with Obama, Trump, and Hillary (I find them instantly recognizable but not at all Turing-passing). They say making this available is ethical because it raises awareness of the potential risk, which a Facebook friend compared to “selling nukes to ISIS in order to raise awareness of the risk of someone selling nukes to ISIS.”

Freddie deBoer gives lots of evidence that there is no shortage of qualified STEM workers relative to other fields and the industry is actually pretty saturated. But Wall Street Journal seems to think they have evidence for the opposite? Curious what all of the tech workers here think.

Scott Sumner: How Can There Be A Shortage Of Construction Workers? That is, is it at all plausible that (as help wanted ads would suggest) there are areas where construction companies can’t find unskilled laborers willing to work for \$90,000/year? Sumner splits this question in two – first, an economics question of why an efficient market wouldn’t cause salaries to rise to a level that guarantees all jobs get filled. And second, a political question of how this could happen in a country where we’re constantly told that unskilled men are desperate because there are no job opportunities for them anymore. The answers seem to be “there’s a neat but complicated economics reason for the apparent inefficiency” and “the \$90,000 number is really misleading but there may still be okay-paying construction jobs going unfilled and that’s still pretty strange”.

Study which is so delightfully contrarian I choose to reblog it before reading it all the way through: mandatory class attendance policies in college decrease grades by preventing students from making rational decisions about when and how to study.
ratty  yvain  ssc  links  multi  russia  trust  cultural-dynamics  speculation  wonkish  politics  polisci  government  elections  density  urban  economics  trends  regularizer  law  institutions  heuristic  corruption  crooked  chapman  things  psychology  social-psych  anthropology  developmental  study  summary  biodet  behavioral-gen  genetics  candidate-gene  neuro  neuro-nitgrit  psychiatry  stress  core-rats  meta-analysis  replication  null-result  epistemic  ideology  info-dynamics  audio  tools  realness  clinton  trump  obama  tech  science  planning  uncertainty  migration  business  labor  gender  education  higher-ed  unintended-consequences  illusion  unaffiliated  left-wing  supply-demand  urban-rural  judgement
may 2017 by nhaliday
Animal spirits (Keynes) - Wikipedia
Animal spirits is the term John Maynard Keynes used in his 1936 book The General Theory of Employment, Interest and Money to describe the instincts, proclivities and emotions that ostensibly influence and guide human behavior, and which can be measured in terms of, for example, consumer confidence. It has since been argued that trust is also included in or produced by "animal spirits".
economics  macro  meta:prediction  tetlock  psychology  social-psych  instinct  heuristic  bounded-cognition  error  info-dynamics  wiki  reference  jargon  aphorism  big-peeps
april 2017 by nhaliday
Annotating Greg Cochran’s interview with James Miller
https://westhunt.wordpress.com/2017/04/05/interview-2/
opinion of Scott and Hanson: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90238
Greg's methodist: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90256
https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90299
You have to consider the relative strengths of Japan and the USA. USA was ~10x stronger, industrially, which is what mattered. Technically superior (radar, Manhattan project). Almost entirely self-sufficient in natural resources. Japan was sure to lose, and too crazy to quit, which meant that they would lose after being smashed flat.
--
There’s a fairly common way of looking at things in which the bad guys are not at fault because they’re bad guys, born that way, and thus can’t help it. Well, we can’t help it either, so the hell with them. I don’t think we had to respect Japan’s innate need to fuck everybody in China to death.

https://westhunt.wordpress.com/2017/03/25/ramble-on/
https://westhunt.wordpress.com/2017/03/24/topics/
https://soundcloud.com/user-519115521/greg-cochran-part-1
2nd part: https://pinboard.in/u:nhaliday/b:9ab84243b967

- political correctness, the Cathedral and the left (personnel continuity but not ideology/value) at start
- joke: KT impact = asteroid mining, every mass extinction = intelligent life destroying itself
- Alawites: not really Muslim, women liberated because "they don't have souls", ended up running shit in Syria because they were only ones that wanted to help the British during colonial era
- solution to Syria: "put the Alawites in NYC"
- Zimbabwe was OK for a while, if South Africa goes sour, just "put the Boers in NYC" (Miller: left would probably say they are "culturally incompatible", lol)
- story about Lincoln and his great-great-great-grandfather
- skepticism of free speech
- free speech, authoritarianism, and defending against the Mongols
- Scott crazy (not in a terrible way), LW crazy (genetics), ex.: polyamory
- TFP or microbio are better investments than stereotypical EA stuff
- just ban AI worldwide (bully other countries to enforce)
- bit of a back-and-forth about macroeconomics
- not sure climate change will be huge issue. world's been much warmer before and still had a lot of mammals, etc.
- he quite likes Pseudoerasmus
- shits on modern conservatism/Bret Stephens a bit

- mentions Japan having industrial base a tenth the size of the US's and no chance of winning WW2 around 11m mark
- describes himself as "fairly religious" around 20m mark
- 27m30s: Eisenhower was smart, read Carlyle, classical history, etc.

The Scandals of Meritocracy. Virtue vs. competence. Would you rather have a boss who is evil but competent, or good but incompetent? The reality is you have to balance the two. Richard Nixon was probably smarter than Dwight Eisenhower in raw g, but Eisenhower was probably a better person.
org:med  west-hunter  scitariat  summary  links  podcast  audio  big-picture  westminster  politics  culture-war  academia  left-wing  ideology  biodet  error  crooked  bounded-cognition  stories  history  early-modern  africa  developing-world  death  mostly-modern  deterrence  japan  asia  war  meta:war  risk  ai  climate-change  speculation  agriculture  environment  prediction  religion  islam  iraq-syria  gender  dominant-minority  labor  econotariat  cracker-econ  coalitions  infrastructure  parasites-microbiome  medicine  low-hanging  biotech  terrorism  civil-liberty  civic  social-science  randy-ayndy  law  polisci  government  egalitarianism-hierarchy  expression-survival  disease  commentary  authoritarianism  being-right  europe  nordic  cohesion  heuristic  anglosphere  revolution  the-south  usa  thinking  info-dynamics  yvain  ssc  lesswrong  ratty  subculture  values  descriptive  epistemic  cost-disease  effective-altruism  charity  econ-productivity  technology  rhetoric  metameta  ai-control  critique  sociology  arms  paying-rent  parsimony  writing  realness  migration  eco
april 2017 by nhaliday
Hanlon's razor - Wikipedia
Hanlon's razor is an aphorism expressed in various ways including "Never attribute to malice that which is adequately explained by stupidity"[1][2] or "Don't assume bad intentions over neglect and misunderstanding." It recommends a way of eliminating unlikely explanations for a phenomenon (a philosophical razor).
aphorism  history  early-modern  mostly-modern  bounded-cognition  error  crooked  metabuch  heuristic  wiki  reference  info-dynamics  meta:prediction  impetus  judgement
march 2017 by nhaliday
INFECTIOUS CAUSATION OF DISEASE: AN EVOLUTIONARY PERSPECTIVE
A New Germ Theory: https://www.theatlantic.com/magazine/archive/1999/02/a-new-germ-theory/377430/
The dictates of evolution virtually demand that the causes of some of humanity's chronic and most baffling "noninfectious" illnesses will turn out to be pathogens -- that is the radical view of a prominent evolutionary biologist

A LATE-SEPTEMBER heat wave enveloped Amherst College, and young people milled about in shorts or sleeveless summer frocks, or read books on the grass. Inside the red-brick buildings framing the leafy quadrangle students listened to lectures on Ellison and Emerson, on Paul Verlaine and the Holy Roman Empire. Few suspected that strains of the organism that causes cholera were growing nearby, in the Life Sciences Building. If they had known, they would probably not have grasped the implications. But these particular strains of cholera make Paul Ewald smile; they are strong evidence that he is on the right track. Knowing the rules of evolutionary biology, he believes, can change the course of infectious disease.

https://www.theatlantic.com/past/docs/issues/99feb/germ2.htm
"I HAVE a motto," Gregory Cochran told me recently. "'Big old diseases are infectious.' If it's common, higher than one in a thousand, I get suspicious. And if it's old, if it has been around for a while, I get suspicious."

https://www.theatlantic.com/past/docs/issues/99feb/germ3.htm
pdf  study  speculation  bio  evolution  sapiens  parasites-microbiome  red-queen  disease  west-hunter  🌞  unit  nibble  len:long  biodet  EGT  wild-ideas  big-picture  epidemiology  deep-materialism  🔬  spearhead  scitariat  maxim-gun  ideas  lens  heterodox  darwinian  equilibrium  medicine  heuristic  spreading  article  psychiatry  QTL  distribution  behavioral-gen  genetics  population-genetics  missing-heritability  gender  sex  sexuality  cardio  track-record  aging  popsci  natural-experiment  japan  asia  meta:medicine  profile  ability-competence  empirical  theory-practice  data  magnitude  scale  cost-benefit  is-ought  occam  parsimony  stress  GWAS  roots  explanans  embodied  obesity  geography  canada  britain  anglo  trivia  cocktail  shift  aphorism  stylized-facts  evidence  inference
february 2017 by nhaliday
254A, Supplement 4: Probabilistic models and heuristics for the primes (optional) | What's new
among others, the Cramér model for the primes (basically models primality as independent events, w/ Pr[n is prime] = 1/log n)
gowers  mathtariat  lecture-notes  exposition  math  math.NT  probability  heuristic  models  cartoons  nibble  org:bleg  pseudorandomness  borel-cantelli  concentration-of-measure  multiplicative
february 2017 by nhaliday
Expert credibility in climate change
Here, we use an extensive dataset of 1,372 climate researchers and their publication and citation data to show that (i) 97–98% of the climate researchers most actively publishing in the field surveyed here support the tenets of ACC outlined by the Intergovernmental Panel on Climate Change, and (ii) the relative climate expertise and scientific prominence of the researchers unconvinced of ACC are substantially below that of the convinced researchers.
study  data  poll  expert  science  environment  meta:science  culture-war  org:nat  descriptive  epistemic  climate-change  heuristic  info-dynamics  expert-experience  judgement
december 2016 by nhaliday
Information Processing: Bounded cognition
Many people lack standard cognitive tools useful for understanding the world around them. Perhaps the most egregious case: probability and statistics, which are central to understanding health, economics, risk, crime, society, evolution, global warming, etc. Very few people have any facility for calculating risk, visualizing a distribution, understanding the difference between the average, the median, variance, etc.

Risk, Uncertainty, and Heuristics: http://infoproc.blogspot.com/2018/03/risk-uncertainty-and-heuristics.html
Risk = the space of outcomes and their probabilities are known. Uncertainty = probabilities are not known, and even the space of possibilities may not be known. Heuristic rules are contrasted with algorithms like maximization of expected utility.

How do smart people make smart decisions? | Gerd Gigerenzer

Helping Doctors and Patients Make Sense of Health Statistics: http://www.ema.europa.eu/docs/en_GB/document_library/Presentation/2014/12/WC500178514.pdf
street-fighting  thinking  stats  rationality  hsu  metabuch  models  biases  distribution  pre-2013  scitariat  intelligence  neurons  conceptual-vocab  map-territory  clarity  meta:prediction  nibble  mental-math  bounded-cognition  nitty-gritty  s:*  info-dynamics  quantitative-qualitative  chart  tricki  pdf  white-paper  multi  outcome-risk  uncertainty  heuristic  study  medicine  meta:medicine  decision-making  decision-theory  judgement
july 2016 by nhaliday
Answer to What is it like to understand advanced mathematics? - Quora
thinking like a mathematician

some of the points:
- small # of tricks (echoes Rota)
- web of concepts and modularization (zooming out) allow quick reasoning
- comfort w/ ambiguity and lack of understanding, study high-dimensional objects via projections
- above is essential for research (and often what distinguishes research mathematicians from people who were good at math, or majored in math)
math  reflection  thinking  intuition  expert  synthesis  wormholes  insight  q-n-a  🎓  metabuch  tricks  scholar  problem-solving  aphorism  instinct  heuristic  lens  qra  soft-question  curiosity  meta:math  ground-up  cartoons  analytical-holistic  lifts-projections  hi-order-bits  scholar-pack  nibble  giants  the-trenches  innovation  novelty  zooming  tricki  virtu  humility  metameta  wisdom  abstraction  skeleton  s:***  knowledge  expert-experience  elegance  judgement  advanced
may 2016 by nhaliday
Rob Pike: Notes on Programming in C
Issues of typography
...
Sometimes they care too much: pretty printers mechanically produce pretty output that accentuates irrelevant detail in the program, which is as sensible as putting all the prepositions in English text in bold font. Although many people think programs should look like the Algol-68 report (and some systems even require you to edit programs in that style), a clear program is not made any clearer by such presentation, and a bad program is only made laughable.
Typographic conventions consistently held are important to clear presentation, of course - indentation is probably the best known and most useful example - but when the ink obscures the intent, typography has taken over.

...

Naming
...
Finally, I prefer minimum-length but maximum-information names, and then let the context fill in the rest. Globals, for instance, typically have little context when they are used, so their names need to be relatively evocative. Thus I say maxphysaddr (not MaximumPhysicalAddress) for a global variable, but np not NodePointer for a pointer locally defined and used. This is largely a matter of taste, but taste is relevant to clarity.

...

Pointers
C is unusual in that it allows pointers to point to anything. Pointers are sharp tools, and like any such tool, used well they can be delightfully productive, but used badly they can do great damage (I sunk a wood chisel into my thumb a few days before writing this). Pointers have a bad reputation in academia, because they are considered too dangerous, dirty somehow. But I think they are powerful notation, which means they can help us express ourselves clearly.
Consider: When you have a pointer to an object, it is a name for exactly that object and no other.

...

A delicate matter, requiring taste and judgement. I tend to err on the side of eliminating comments, for several reasons. First, if the code is clear, and uses good type names and variable names, it should explain itself. Second, comments aren't checked by the compiler, so there is no guarantee they're right, especially after the code is modified. A misleading comment can be very confusing. Third, the issue of typography: comments clutter code.
But I do comment sometimes. Almost exclusively, I use them as an introduction to what follows.

...

Complexity
Most programs are too complicated - that is, more complex than they need to be to solve their problems efficiently. Why? Mostly it's because of bad design, but I will skip that issue here because it's a big one. But programs are often complicated at the microscopic level, and that is something I can address here.
Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.

Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.

Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.) For example, binary trees are always faster than splay trees for workaday problems.

Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.

The following data structures are a complete list for almost all practical programs:

array
hash table
binary tree
Of course, you must also be prepared to collect these into compound data structures. For instance, a symbol table might be implemented as a hash table containing linked lists of arrays of characters.
Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming. (See The Mythical Man-Month: Essays on Software Engineering by F. P. Brooks, page 102.)

Rule 6. There is no Rule 6.

Programming with data.
...
One of the reasons data-driven programs are not common, at least among beginners, is the tyranny of Pascal. Pascal, like its creator, believes firmly in the separation of code and data. It therefore (at least in its original form) has no ability to create initialized data. This flies in the face of the theories of Turing and von Neumann, which define the basic principles of the stored-program computer. Code and data are the same, or at least they can be. How else can you explain how a compiler works? (Functional languages have a similar problem with I/O.)

Function pointers
Another result of the tyranny of Pascal is that beginners don't use function pointers. (You can't have function-valued variables in Pascal.) Using function pointers to encode complexity has some interesting properties.
Some of the complexity is passed to the routine pointed to. The routine must obey some standard protocol - it's one of a set of routines invoked identically - but beyond that, what it does is its business alone. The complexity is distributed.

There is this idea of a protocol, in that all functions used similarly must behave similarly. This makes for easy documentation, testing, growth and even making the program run distributed over a network - the protocol can be encoded as remote procedure calls.

I argue that clear use of function pointers is the heart of object-oriented programming. Given a set of operations you want to perform on data, and a set of data types you want to respond to those operations, the easiest way to put the program together is with a group of function pointers for each type. This, in a nutshell, defines class and method. The O-O languages give you more of course - prettier syntax, derived types and so on - but conceptually they provide little extra.

...

Include files
Simple rule: include files should never include include files. If instead they state (in comments or implicitly) what files they need to have included first, the problem of deciding which files to include is pushed to the user (programmer) but in a way that's easy to handle and that, by construction, avoids multiple inclusions. Multiple inclusions are a bane of systems programming. It's not rare to have files included five or more times to compile a single C source file. The Unix /usr/include/sys stuff is terrible this way.
There's a little dance involving #ifdef's that can prevent a file being read twice, but it's usually done wrong in practice - the #ifdef's are in the file itself, not the file that includes it. The result is often thousands of needless lines of code passing through the lexical analyzer, which is (in good compilers) the most expensive phase.
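The guard placement Pike prefers puts the #ifdef in the *includer*, so an already-included header never even reaches the lexer a second time. A minimal fragment (file and macro names are illustrative):

```c
/* In every file that needs sym.h, guard the include itself: */
#ifndef SYM_H_INCLUDED
#include "sym.h"        /* sym.h defines SYM_H_INCLUDED at its top */
#endif

/* Contrast with the usual (slower) style, where the guard lives
   inside sym.h and the whole file is still read on each inclusion. */
```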

cf https://stackoverflow.com/questions/1101267/where-does-the-compiler-spend-most-of-its-time-during-parsing
First, I don't think it actually is true: in many compilers, most time is not spent in lexing source code. For example, in C++ compilers (e.g. g++), most time is spent in semantic analysis, in particular in overload resolution (trying to find out what implicit template instantiations to perform). Also, in C and C++, most time is often spent in optimization (creating graph representations of individual functions or the whole translation unit, and then running long algorithms on these graphs).

When comparing lexical and syntactical analysis, it may indeed be the case that lexical analysis is more expensive. This is because both use state machines, i.e. there is a fixed number of actions per element, but the number of elements is much larger in lexical analysis (characters) than in syntactical analysis (tokens).

https://news.ycombinator.com/item?id=7728207
programming  systems  philosophy  c(pp)  summer-2014  intricacy  engineering  rhetoric  contrarianism  diogenes  parsimony  worse-is-better/the-right-thing  data-structures  list  algorithms  stylized-facts  essay  ideas  performance  functional  state  pls  oop  gotchas  blowhards  duplication  compilers  syntax  lexical  checklists  metabuch  lens  notation  thinking  neurons  guide  pareto  heuristic  time  cost-benefit  multi  q-n-a  stackex  plt  hn  commentary  minimalism  techtariat  rsc  writing  technical-writing  cracker-prog
august 2014 by nhaliday
