
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs, but it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one had noticed before, the theorems were still true, and the gaps were due more to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications). Doubtless as modern math evolves, other fields have sometimes needed to go back and clean up the foundations, and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis; you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that, in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code), the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.
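As a very rough illustration of what a constant bug rate per line implies (the 1–25 defects/KLOC range below is a generic industry ballpark, my assumption rather than a figure from the article):

```python
# Back-of-envelope: scale an assumed defect density by codebase size.
defect_range = (1, 25)  # defects per KLOC (assumed industry ballpark)

codebases = {
    "MS-DOS kernel": 4_000,
    "Windows Server 2003": 50_000_000,
    "Debian repository (2007)": 323_551_126,
}

for name, loc in codebases.items():
    lo, hi = (loc / 1_000 * d for d in defect_range)
    print(f"{name}: ~{lo:,.0f} to {hi:,.0f} latent defects")
```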

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define $\operatorname{sinc} x = (\sin x)/x$.

Someone found the following result in an algebra package: $\int_0^\infty dx\, \operatorname{sinc} x = \pi/2$
They then found the following results:

...

So of course when they got:

$\int_0^\infty dx\, \operatorname{sinc}(x)\,\operatorname{sinc}(x/3)\,\operatorname{sinc}(x/5)\cdots\operatorname{sinc}(x/15) = \frac{467807924713440738696537864469}{935615849440640907310521750000}\,\pi$

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
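A quick sanity check of the punchline (my verification sketch, not from the thread): the suspicious coefficient really is just under 1/2, and the π/2 pattern breaks precisely because 1/3 + 1/5 + … + 1/15 exceeds 1, so the CAS was right all along.

```python
from fractions import Fraction

# The coefficient the CAS reported for the 8-factor product of sincs.
c = Fraction(467807924713440738696537864469,
             935615849440640907310521750000)

print(c < Fraction(1, 2))         # True: strictly less than 1/2
print(float(Fraction(1, 2) - c))  # ~7.4e-12; times pi, the integral misses pi/2 by ~2.3e-11

# The pi/2 pattern persists while 1/3 + 1/5 + ... + 1/(2n+1) < 1:
s = sum(Fraction(1, k) for k in range(3, 16, 2))
print(float(s))                   # ~1.0218 > 1, so the x/15 factor breaks it
```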

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1,582 known issues as of Feb 16th, 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article, “Formally verified software in the real world”, with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.
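Combining the two ratios (my arithmetic, not the article's): if baseline development costs 1 unit of effort, the formal route costs about 3.3, while the traditional route plus EAL 7 certification lands around 2.3 × 3.3 ≈ 7.6.

```python
# Reading the two ratios together (sketch; units are relative effort).
traditional = 1.0
formal = 3.3 * traditional   # formally verified development
certified = 2.3 * formal     # traditional engineering + EAL 7 certification
print(formal, certified)     # 3.3 vs ~7.6: formal wins once certification is required
```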

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
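To make the mapping concrete, here is a toy sketch (hypothetical Python, not from the post; its real examples use HOL4, SPARK, Coq, etc.) of one leftpad property stated as a type, a contract, and a test, sliding down the rigour spectrum:

```python
# The same spec at three rigour levels: types, contracts, tests.
def leftpad(c: str, n: int, s: str) -> str:  # type annotations: shape-level spec
    assert len(c) == 1 and n >= 0            # contract: runtime precondition
    out = c * max(n - len(s), 0) + s
    # contract: postconditions checked at runtime rather than proven
    assert len(out) == max(n, len(s)) and out.endswith(s)
    return out

# tests: spot-check individual points of the state space
assert leftpad("0", 5, "42") == "00042"
assert leftpad("!", 1, "abc") == "abc"
```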

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Solution concept - Wikipedia
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games.
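A minimal sketch of what a “formal rule for predicting how a game will be played” means in code (my toy example, not from the article): select the strategy profiles from which no player gains by unilaterally deviating.

```python
# Pure-strategy Nash equilibrium as a selection rule over strategy profiles.
from itertools import product

payoff = {  # (row, col) actions -> (row payoff, col payoff); prisoner's dilemma
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ("C", "D")

for r, c in product(actions, actions):
    row_ok = all(payoff[(r, c)][0] >= payoff[(r2, c)][0] for r2 in actions)
    col_ok = all(payoff[(r, c)][1] >= payoff[(r, c2)][1] for c2 in actions)
    if row_ok and col_ok:
        print("Nash equilibrium:", (r, c))  # prints ('D', 'D') only
```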

nice diagram
concept  conceptual-vocab  list  wiki  reference  acm  game-theory  inference  equilibrium  extrema  reduction  sub-super 
may 2019 by nhaliday
Measuring fitness heritability: Life history traits versus morphological traits in humans - Gavrus‐Ion - 2017 - American Journal of Physical Anthropology - Wiley Online Library
Traditional interpretation of Fisher's Fundamental Theorem of Natural Selection is that life history traits (LHT), which are closely related with fitness, show lower heritabilities, whereas morphological traits (MT) are less related with fitness and they are expected to show higher heritabilities.

...

LHT heritabilities ranged from 2.3 to 34% for the whole sample, with men showing higher heritabilities (4–45%) than women (0–23.7%). Overall, MT presented higher heritability values than most of LHT, ranging from 0 to 40.5% in craniofacial indices, and from 13.8 to 32.4% in craniofacial angles. LHT showed considerable additive genetic variance values, similar to MT, but also high environmental variance values, with most of them presenting a higher evolutionary potential than MT.
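For reference, the textbook decomposition behind these numbers (standard quantitative-genetics definition, not from the paper):

$$h^2 = \frac{V_A}{V_P} = \frac{V_A}{V_A + V_E}$$

so a life-history trait can carry substantial additive genetic variance $V_A$ (hence high evolutionary potential) and still show low heritability $h^2$ when the environmental variance $V_E$ is large, which is exactly the pattern the abstract describes.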
study  biodet  behavioral-gen  population-genetics  hmm  contrarianism  levers  inference  variance-components  fertility  life-history  demographics  embodied  prediction  contradiction  empirical  sib-study 
may 2019 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. RH paintings, in particular, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Theological differences between the Catholic Church and the Eastern Orthodox Church - Wikipedia
Did the Filioque Ruin the West?: https://contingentnotarbitrary.com/2017/06/15/the-filioque-ruined-the-west/
The theology of the filioque makes the Father and the Son equal as sources of divinity. Flattening the hierarchy implicit in the Trinity does away with the Monarchy of the Father: the family relationship becomes less patriarchal and more egalitarian. The Son, with his humanity, mercy, love and sacrifice, is no longer subordinate to the Father, while the Father – the God of the Old Testament, law and tradition – is no longer sovereign. Looks like the change would elevate egalitarianism, compassion, humanity and self-sacrifice while undermining hierarchy, rules, family and tradition. Sound familiar?
article  wiki  reference  philosophy  backup  religion  christianity  theos  ideology  comparison  nitty-gritty  intricacy  europe  the-great-west-whale  occident  russia  MENA  orient  letters  epistemic  truth  science  logic  inference  guilt-shame  volo-avolo  causation  multi  gnon  eastern-europe  roots  explanans  enlightenment-renaissance-restoration-reformation  modernity  egalitarianism-hierarchy  love-hate  free-riding  cooperate-defect  gender  justice  law  tradition  legacy  parenting  ascetic  altruism  farmers-and-foragers  protestant-catholic  exegesis-hermeneutics 
april 2018 by nhaliday
Contingent, Not Arbitrary | Truth is contingent on what is, not on what we wish to be true.
A vital attribute of a value system of any kind is that it works. I consider this a necessary (but not sufficient) condition for goodness. A value system, when followed, should contribute to human flourishing and not produce results that violate its core ideals. This is a pragmatic, I-know-it-when-I-see-it definition. I may refine it further if the need arises.

I think that the prevailing Western values fail by this standard. I will not spend much time arguing this; many others have already. If you reject this premise, this blog may not be for you.

I consider old traditions an important source of wisdom: they have proven their worth over centuries of use. Where they agree, we should listen. Where they disagree, we should figure out why. Where modernity departs from tradition, we should be wary of the new.

Tradition has one nagging problem: it was abandoned by the West. How and why did that happen? I consider this a central question. I expect the reasons to be varied and complex. Understanding them seems necessary if we are to fix what may have been broken.

In short, I want to answer these questions:

1. How do values spread and persist? An ideology does no good if no one holds it.
2. Which values do good? Sounding good is worse than useless if it leads to ruin.

The ultimate hope would be to find a way to combine the two. Many have tried and failed. I don’t expect to succeed either, but I hope I’ll manage to clarify the questions.

Christianity Is The Schelling Point: https://contingentnotarbitrary.com/2018/02/22/christianity-is-the-schelling-point/
Restoring true Christianity is both necessary and sufficient for restoring civilization. The task is neither easy nor simple but that’s what it takes. It is also our best chance of weathering the collapse if that’s too late to avoid.

Christianity is the ultimate coordination mechanism: it unites us with a higher purpose, aligns us with the laws of reality and works on all scales, from individuals to entire civilizations. Christendom took over the world and then lost it when its faith faltered. Historically and culturally, Christianity is the unique Schelling point for the West – or it would be if we could agree on which church (if any) was the true one.

Here are my arguments for true Christianity as the Schelling point. I hope to demonstrate these points in subsequent posts; for now I’ll just list them.

- A society of saints is the most powerful human arrangement possible. It is united in purpose, ideologically stable and operates in harmony with natural law. This is true independent of scale and organization: from military hierarchy to total decentralization, from persecuted minority to total hegemony. Even democracy works among saints – that’s why it took so long to fail.
- There is such a thing as true Christianity. I don’t know how to pinpoint it but it does exist; that holds from both secular and religious perspectives. Our task is to converge on it the best we can.
- Don’t worry too much about the existence of God. I’m proof that you don’t need that assumption in order to believe – it helps but isn’t mandatory.

Pascal’s Wager never sat right with me. Now I know why: it’s a sucker bet. Let’s update it.

If God exists, we must believe because our souls and civilization depend on it. If He doesn’t exist, we must believe because civilization depends on it.

Morality Should Be Adaptive: http://www.overcomingbias.com/2012/04/morals-should-be-adaptive.html
I agree with this
gnon  todo  blog  stream  religion  christianity  theos  morality  ethics  formal-values  philosophy  truth  is-ought  coordination  cooperate-defect  alignment  tribalism  cohesion  nascent-state  counter-revolution  epistemic  civilization  rot  fertility  intervention  europe  the-great-west-whale  occident  telos-atelos  multi  ratty  hanson  big-picture  society  culture  evolution  competition  🤖  rationality  rhetoric  contrarianism  values  water  embedded-cognition  ideology  deep-materialism  moloch  new-religion  patho-altruism  darwinian  existence  good-evil  memetics  direct-indirect  endogenous-exogenous  tradition  anthropology  cultural-dynamics  farmers-and-foragers  egalitarianism-hierarchy  organizing  institutions  protestant-catholic  enlightenment-renaissance-restoration-reformation  realness  science  empirical  modernity  revolution  inference  parallax  axioms  pragmatic  zeitgeist  schelling  prioritizing  ends-means  degrees-of-freedom  logic  reason  interdisciplinary  exegesis-hermeneutics  o 
april 2018 by nhaliday
Diving into Chinese philosophy – Gene Expression
Back when I was in college one of my roommates was taking a Chinese philosophy class for a general education requirement. A double major in mathematics and economics (he went on to get an economics Ph.D.), he found the lack of formal rigor in the field rather maddening. I thought this was fair, but I suggested to him that the this-worldly and often non-metaphysical orientation of much of Chinese philosophy made it less amenable to formal and logical analysis.

...

IMO the much more problematic thing about premodern Chinese political philosophy from the point of view of the West is its lack of interest in constitutionalism and the rule of law, stemming from a generally less rationalist approach than the Classical Westerners’, rather than any sort of inherent anti-individualism or collectivism or whatever. For someone like Aristotle the constitutional rule of law was the highest moral good in itself and the definition of justice; very much not so for Confucius or for Zhu Xi. They still believed in Justice in the sense of people getting what they deserve, but they didn’t really consider the written rule of law an appropriate way to conceptualize it. OG Confucius leaned more towards the unwritten traditions and rituals passed down from the ancestors, and Neoconfucianism leaned more towards a sort of Universal Reason that could be accessed by the individual’s subjective understanding but which again need not be written down necessarily (although unlike Kant/the Enlightenment it basically implies that such subjective reasoning will naturally lead one to reaffirming the ancient traditions). In left-right political spectrum terms IMO this leads to a well-defined right and left and a big old hole in the center where classical republicanism would be in the West. This resonates pretty well with modern East Asian political history IMO

https://www.radicalphilosophy.com/article/is-logos-a-proper-noun
Is logos a proper noun?
Or, is Aristotelian Logic translatable into Chinese?
gnxp  scitariat  books  recommendations  discussion  reflection  china  asia  sinosphere  philosophy  logic  rigor  rigidity  flexibility  leviathan  law  individualism-collectivism  analytical-holistic  systematic-ad-hoc  the-classics  canon  morality  ethics  formal-values  justice  reason  tradition  government  polisci  left-wing  right-wing  order-disorder  eden-heaven  analogy  similarity  comparison  thinking  summary  top-n  n-factor  universalism-particularism  duality  rationality  absolute-relative  subjective-objective  the-self  apollonian-dionysian  big-peeps  history  iron-age  antidemos  democracy  institutions  darwinian  multi  language  concept  conceptual-vocab  inference  linguistics  foreign-lang  mediterranean  europe  germanic  mostly-modern  gallic  culture 
march 2018 by nhaliday
Indiana Jones, Economist?! - Marginal REVOLUTION
In a stunningly original paper, Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, and Ali Hortaçsu use the gravity model of trade to infer the locations of lost cities from Bronze Age Assyria! The simplest gravity model makes predictions about trade flows based on the sizes of cities and the distances between them. More complicated models add costs based on geographic barriers. The authors have data from ancient texts on trade flows between all the cities, they know the locations of some of the cities, and they know the geography of the region. Using this data they can invert the gravity model and, triangulating from the known cities, find the lost cities that would best “fit” the model. In other words, by assuming the model is true the authors can predict where the lost cities should be located. To test the idea the authors pretend that some known cities are lost and amazingly the model is able to accurately rediscover those cities.
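A minimal sketch of the inversion idea (hypothetical data and a simplified gravity kernel; the authors' actual specification is richer):

```python
# Gravity-model inversion: given trade flows between a lost city and known
# cities, with flow ~ 1/distance^2, recover the lost city by least squares.
from scipy.optimize import minimize

known = {"A": (0.0, 0.0), "B": (3.0, 0.0), "C": (0.0, 4.0)}  # hypothetical coords
flows = {"A": 0.5, "B": 0.2, "C": 0.1}  # made consistent with a city at (1, 1)

def loss(xy):
    x, y = xy
    return sum(
        (flows[name] - 1.0 / ((x - kx) ** 2 + (y - ky) ** 2)) ** 2
        for name, (kx, ky) in known.items()
    )

print(minimize(loss, x0=[2.0, 2.0]).x)  # ~[1, 1]: the best-fit "lost" location
```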
econotariat  marginal-rev  commentary  study  summary  economics  broad-econ  cliometrics  interdisciplinary  letters  history  antiquity  MENA  urban  geography  models  prediction  archaeology  trade  trivia  cocktail  links  cool  tricks  urban-rural  inference  traces 
november 2017 by nhaliday
Of Mice and Men | West Hunter
It’s not always easy figuring out how a pathogen causes disease. There is an example in mice for which the solution was very difficult, so difficult that we would probably have failed to discover the cause of a similarly obscure infectious disease in humans.

Mycoplasma pulmonis causes a chronic obstructive lung disease in mice, but it wasn’t easy to show this. The disease was first described in 1915, and by 1940, people began to suspect Mycoplasma pulmonis might be the cause. But then again, maybe not. It was often found in mice that seemed healthy. Pure cultures of this organism did not consistently produce lung disease – which means that it didn’t satisfy Koch’s postulates, in particular postulate 1 (The microorganism must be found in abundance in all organisms suffering from the disease, but should not be found in healthy organisms.) and postulate 3 (The cultured microorganism should cause disease when introduced into a healthy organism.).

Well, those postulates are not logic itself, but rather a useful heuristic. Koch knew that, even if lots of other people don’t.

This respiratory disease of mice is long-lasting, but slow to begin. It can take half a lifetime – a mouse lifetime, that is – and that made finding the cause harder. It required patience, which means I certainly couldn’t have done it.

Here’s how they solved it. You can raise germ-free mice. In the early 1970s, researchers injected various candidate pathogens into different groups of germ-free mice and waited to see which, if any, developed this chronic lung disease. It was Mycoplasma pulmonis, all right, but it had taken 60 years to find out.

It turned out that susceptibility differed between different mouse strains – genetic susceptibility was important. Co-infection with other pathogens affected the course of the disease. Microenvironmental details mattered – mainly ammonia in cages where the bedding wasn’t changed often enough. But it didn’t happen without that mycoplasma, which was a key causal link, something every engineer understands but many MDs don’t.

If there was a similarly obscure infectious disease of humans, say one that involved a fairly common bug found in both the just and the unjust, one that took decades for symptoms to manifest – would we have solved it? Probably not.

Cooties are everywhere.

gay germ search: https://westhunt.wordpress.com/2013/07/21/of-mice-and-men/#comment-15905
It’s hard to say, depends on how complicated the path of causation is. Assuming that I’m even right, of course. Some good autopsy studies might be fruitful – you’d look for microanatomical brain differences, as with narcolepsy. Differences in gene expression, maybe. You could look for a pathogen – using the digital version of RDA (representational difference analysis), say on discordant twins. Do some old-fashioned epidemiology. Look for marker antibodies, signs of some sort of immunological event.

Do all of the above on gay rams – lots easier to get started, much less whining from those being vivisected.

Patrick Moore found the virus causing Kaposi’s sarcoma without any funding at all. I’m sure Peter Thiel could afford a serious try.
west-hunter  scitariat  discussion  ideas  reflection  analogy  model-organism  bio  disease  parasites-microbiome  medicine  epidemiology  heuristic  thick-thin  stories  experiment  track-record  intricacy  gotchas  low-hanging  🌞  patience  complex-systems  meta:medicine  multi  poast  methodology  red-queen  brain-scan  neuro  twin-study  immune  nature  gender  sex  sexuality  thiel  barons  gwern  stylized-facts  inference  apollonian-dionysian 
september 2017 by nhaliday
Gauge Transformation | West Hunter
Modern armies require enormous amounts of supply, mostly gas and ammo. In those days, supply mainly meant railroads. In Germany, and in most of Europe, the separation between the rails, the rail gauge, was 1,435 mm (4 ft 8 1/2 in). So when the Germans invaded France, they could immediately make use of the French railnet. In Russia, the Germans faced a problem: the gauge was different, 1,524 mm (5 ft). German locomotives could not use those tracks until they had been converted. As Pravda used to say, this was no coincidence: it is thought that the Czarist government made this choice for defensive reasons.
west-hunter  scitariat  discussion  history  mostly-modern  war  europe  germanic  gallic  russia  intricacy  trivia  cocktail  tactics  communism  world-war  transportation  thick-thin  inference  apollonian-dionysian 
may 2017 by nhaliday
INFECTIOUS CAUSATION OF DISEASE: AN EVOLUTIONARY PERSPECTIVE
A New Germ Theory: https://www.theatlantic.com/magazine/archive/1999/02/a-new-germ-theory/377430/
The dictates of evolution virtually demand that the causes of some of humanity's chronic and most baffling "noninfectious" illnesses will turn out to be pathogens -- that is the radical view of a prominent evolutionary biologist

A LATE-SEPTEMBER heat wave enveloped Amherst College, and young people milled about in shorts or sleeveless summer frocks, or read books on the grass. Inside the red-brick buildings framing the leafy quadrangle students listened to lectures on Ellison and Emerson, on Paul Verlaine and the Holy Roman Empire. Few suspected that strains of the organism that causes cholera were growing nearby, in the Life Sciences Building. If they had known, they would probably not have grasped the implications. But these particular strains of cholera make Paul Ewald smile; they are strong evidence that he is on the right track. Knowing the rules of evolutionary biology, he believes, can change the course of infectious disease.

https://www.theatlantic.com/past/docs/issues/99feb/germ2.htm
"I HAVE a motto," Gregory Cochran told me recently. "'Big old diseases are infectious.' If it's common, higher than one in a thousand, I get suspicious. And if it's old, if it has been around for a while, I get suspicious."

https://www.theatlantic.com/past/docs/issues/99feb/germ3.htm
pdf  study  speculation  bio  evolution  sapiens  parasites-microbiome  red-queen  disease  west-hunter  🌞  unit  nibble  len:long  biodet  EGT  wild-ideas  big-picture  epidemiology  deep-materialism  🔬  spearhead  scitariat  maxim-gun  ideas  lens  heterodox  darwinian  equilibrium  medicine  heuristic  spreading  article  psychiatry  QTL  distribution  behavioral-gen  genetics  population-genetics  missing-heritability  gender  sex  sexuality  cardio  track-record  aging  popsci  natural-experiment  japan  asia  meta:medicine  profile  ability-competence  empirical  theory-practice  data  magnitude  scale  cost-benefit  is-ought  occam  parsimony  stress  GWAS  roots  explanans  embodied  obesity  geography  canada  britain  anglo  trivia  cocktail  shift  aphorism  stylized-facts  evidence  inference  psycho-atoms 
february 2017 by nhaliday
Wizard War | West Hunter
Some of his successes were classically thin, as when he correctly analyzed the German two-beam navigation system (Knickebein). He realized that the area of overlap of two beams could be narrow, far narrower than suggested by the Rayleigh criterion.

During the early struggle with the Germans, the “Battle of the Beams”, he personally read all the relevant Enigma messages. They piled up on his desk, but he could almost always pull out the relevant message, since he remembered the date, which typewriter it had been typed on, and the kind of typewriter ribbon or carbon. When asked, he could usually pick out the message in question in seconds. This system was deliberate: Jones believed that the larger the field any one man could cover, the greater the chance of one brain connecting two facts – the classic approach to a ‘thick’ problem, not that anyone seems to know that anymore.

All that information churning in his head produced results, enough so that his bureaucratic rivals concluded that he had some special unshared source of information. They made at least three attempts to infiltrate his Section to locate this great undisclosed source. An officer from Bletchley Park was offered to him on a part-time basis with that secret objective. After a month or so he was called back, and assured his superiors that there was no trace of anything other than what they already knew. When someone asked ‘Then how does Jones do it?’ he replied ‘Well, I suppose, Sir, he thinks!’
west-hunter  books  review  history  stories  problem-solving  frontier  thick-thin  intel  mostly-modern  the-trenches  complex-systems  applications  scitariat  info-dynamics  world-war  theory-practice  intersection-connectedness  quotes  alt-inst  inference  apollonian-dionysian  consilience 
november 2016 by nhaliday
Not Final! | West Hunter
In mathematics we often prove that some proposition is true by showing that  the alternative is false.  The principle can sometimes work in other disciplines, but it’s tricky.  You have to have a very good understanding  to know that some things are impossible (or close enough to impossible).   You can do it fairly often in physics, less often in biology.
west-hunter  science  history  reflection  epistemic  occam  contradiction  parsimony  noise-structure  scitariat  info-dynamics  hetero-advantage  sapiens  evolution  disease  sexuality  ideas  genetics  s:*  thinking  the-trenches  no-go  thick-thin  theory-practice  inference  apollonian-dionysian  elegance  applicability-prereqs  necessity-sufficiency 
november 2016 by nhaliday
Epigenetics | West Hunter
more: https://westhunt.wordpress.com/2015/04/02/back-by-popular-demand/

https://westhunt.wordpress.com/2015/04/02/back-by-popular-demand/#comment-68170
It’s not a real theory, like saying that the wet spot on the kitchen floor is caused by a hole in the roof. All the implications of a real theory are taken seriously (like that hole in the roof will make it cold in the winter). In this kind of pseudo-theory, only the implications that you like exist.

Growing Pains for Field of Epigenetics as Some Call for Overhaul: https://www.nytimes.com/2016/07/02/science/epigenetic-marks-dna-genes.html
https://twitter.com/WiringTheBrain/status/773417464336187392
https://archive.is/RHuF2
For transgenerational epigenetic transmission of behaviour to occur in mammals, here's what would have to happen:

Inherited memories: https://westhunt.wordpress.com/2013/12/13/inherited-memories/
In a recent paper in Nature Neuroscience, Dias and Ressler trained mice to fear the smell of acetophenone. They claim that this reaction was passed on to their offspring, and to the following generation.

I don’t believe a word of it. It would require a mechanism that takes the epigenetic states of genes in the brain, sends that information down to the testes, and then somehow imprints it on the germ cell precursors. And it would have to do this in a very special way, because many epigenetic changes that are the product of learning wouldn’t be the right thing at all during embryogenesis and development: somehow you’d have to pass timing information as well – info that says “methylate this sucker when you’re three weeks old, but not before”. Genes are like a recipe, but this patch would be more like a program. And it’d take a tnuctipun – or better – to prepare it.

According to the blurb at Nature, Kerry Ressler is a neurobiologist and psychiatrist at Emory University. In other words, he’s already a good deal more likely than average to be a flake. He became “interested in epigenetic inheritance after working with poor people living in inner cities, where cycles of drug addiction, neuropsychiatric illness and other problems often seem to recur in parents and their children.” So he’s motivated. He’d like this to be true. Too bad.

We’re going to see more and more articles like this: people want to hear it. Tyler Cowen certainly does, but then he may not really be people. None of this research will ever be replicated by anyone careful and honest, but that has hardly stopped a flood of analogous nonsense in the social sciences – for example, how poverty reduces your IQ, unless your name is Abel or Ramanujan.

...

There are more things yet to be discovered than are dreamt of in our philosophy – but there’s even more bullshit. And that’s what this is.
west-hunter  rant  critique  genetics  concept  multi  genomics  epigenetics  scitariat  biodet  behavioral-gen  westminster  realness  info-dynamics  pop-diff  gbooks  trends  replication  news  bio  org:rec  twitter  social  discussion  pic  attaq  chart  aphorism  truth  org:nat  study  summary  commentary  class  scifi-fantasy  econotariat  marginal-rev  cracker-econ  people  backup  thinking  science  inference  contradiction  poast 
november 2016 by nhaliday
Enigma | West Hunter
The modern consensus is that breaking Enigma shortened the war by at least a year.

Although a number of highly-placed people knew the story, some because they had been personally involved during WWII, the successful decryption of Enigma was kept secret until 1974, when F. W. Winterbotham published The Ultra Secret.

Most historians didn’t know about it. Without that information, the course of World War II can’t really have made sense. Why didn’t anyone notice?

various WW2 trivia in the comments/corrections

high school:
https://westhunt.wordpress.com/2012/03/15/enigma/#comment-2417
They couldn’t hide an anomalous level of success. In fact, the Germans came to realize that the Allies had some kind of intelligence edge, but never managed to figure out what it was. When your opponent anticipates your moves, you must eventually notice.

Professional historians, after the war, don’t seem to have noticed anything anomalous. I find this revealing because _I_ noticed that things had gone weirdly smoothly while I was still in high school. I wrote an essay about it.

https://westhunt.wordpress.com/2017/01/05/subsocieties/#comment-86828
I wish I still had it around. I didn’t manage to guess how many rotors Enigma had, for sure. I only talked about how mysteriously well things had gone, didn’t know why. I remember the conclusion: God protects drunks, babies, and the United States of America.
west-hunter  rant  history  social-science  war  intel  mostly-modern  error  bounded-cognition  contradiction  descriptive  realness  being-right  scitariat  info-dynamics  track-record  great-powers  world-war  questions  truth  multi  poast  canon  alt-inst  thick-thin  open-closed  trivia  ability-competence  letters  expert-experience  explanans  inference  technology  crypto  people  theos  religion  aphorism  reflection 
november 2016 by nhaliday
Thick and thin | West Hunter
There is a spectrum of problem-solving, ranging from, at one extreme, simplicity and clear chains of logical reasoning (sometimes long chains) and, at the other, building a picture by sifting through a vast mass of evidence of varying quality. I will give some examples. Just the other day, when I was conferring, conversing and otherwise hobnobbing with my fellow physicists, I mentioned high-altitude lightning, sprites and elves and blue jets. I said that you could think of a thundercloud as a vertical dipole, with an electric field that decreased as the cube of altitude, while the breakdown voltage varied with air pressure, which declines exponentially with altitude. At which point the prof I was talking to said “and so the curves must cross!”. That’s how physicists think, and it can be very effective. The amount of information required to solve the problem is not very large. I call this a ‘thin’ problem.
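The “curves must cross” reasoning, made numerical (my normalization and scale height, not from the post): a dipole field falls off as a power law in altitude while the breakdown field tracks pressure and falls exponentially, and a power law always beats an exponential eventually.

```python
# Dipole field ~ 1/h^3 vs breakdown threshold ~ pressure ~ exp(-h/H).
# Both normalized to 1 at 10 km; H ~ 8.5 km is the atmospheric scale height.
import math

H, h0 = 8.5, 10.0  # km (assumed illustrative values)

def dipole(h):
    return (h0 / h) ** 3

def breakdown(h):
    return math.exp(-(h - h0) / H)

for h in range(20, 91, 10):
    print(f"{h} km: dipole field exceeds breakdown? {dipole(h) > breakdown(h)}")
# The crossover lands between 50 and 60 km here, roughly where sprites occur.
```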

...

In another example at the messy end of the spectrum, Joe Rochefort, running Hypo in the spring of 1942, needed to figure out Japanese plans. He had an ever-growing mass of Japanese radio intercepts, some of which were partially decrypted – say, one word of five, with luck. He had data from radio direction-finding; his people were beginning to be able to recognize particular Japanese radio operators by their ‘fist’. He’d studied in Japan, knew the Japanese well. He had plenty of Navy experience – knew what was possible. I would call this a classic ‘thick’ problem, one in which an analyst needs to deal with an enormous amount of data of varying quality. Being smart is necessary but not sufficient: you also need to know lots of stuff.

...

Nimitz believed Rochefort – who was correct. Because of that, we managed to prevail at Midway, losing one carrier and one destroyer while the Japanese lost four carriers and a heavy cruiser*. As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

The usual explanation of Joe Rochefort’s fall argues that John Redman’s ( head of OP-20-G, the Navy’s main signals intelligence and cryptanalysis group) geographical proximity to Navy headquarters was a key factor in winning the bureaucratic struggle, along with his brother’s influence (Rear Admiral Joseph Redman). That and being a shameless liar.

Personally, I wonder if part of the problem is the great difficulty of explaining the analysis of a thick problem to someone without a similar depth of knowledge. At best, they believe you because you’ve been right in the past. Or, sometimes, once you have developed the answer, there is a ‘thin’ way of confirming your answer – as when Rochefort took Jasper Holmes’s suggestion and had Midway broadcast an uncoded complaint about the failure of their distillation system – soon followed by a Japanese report that ‘AF’ was short of water.

Most problems in the social sciences are ‘thick’, and unfortunately, almost all of the researchers are as well. There are a lot more Redmans than Rocheforts.
west-hunter  thinking  things  science  social-science  rant  problem-solving  innovation  pre-2013  metabuch  frontier  thick-thin  stories  intel  mostly-modern  history  flexibility  rigidity  complex-systems  metameta  s:*  noise-structure  discovery  applications  scitariat  info-dynamics  world-war  analytical-holistic  the-trenches  creative  theory-practice  being-right  management  track-record  alien-character  darwinian  old-anglo  giants  magnitude  intersection-connectedness  knowledge  alt-inst  sky  physics  electromag  oceans  military  statesmen  big-peeps  organizing  communication  fire  inference  apollonian-dionysian  consilience  bio  evolution  elegance  necessity-sufficiency  certificates-recognition 
november 2016 by nhaliday
Megafaunal Extinctions | West Hunter
When competent human hunters encountered naive fauna, the biggest animals, things like mammoths and toxodons and diprotodons, all went extinct. It is not hard to see why this occurred. Large animals are more worth hunting than rabbits, and easier to catch, while having a far lower reproductive rate. Moreover, humans are not naturally narrow specialists on any one species, so are not limited by the abundance of that species in the way that the lynx population depends on the hare population. Being omnivores, they could manage even when the megafauna as a whole were becoming rare.

There were subtle factors at work as well: the first human colonists in a new land probably didn’t develop ethnic/language splits for some time, which meant that the no-mans-land zones between tribes that can act as natural game preserves didn’t exist in that crucial early period. Such game preserves might have allowed the megafauna to evolve better defenses against humans – but they never got the chance.

It happened in the Americas, in Australia, in New Zealand, in Madagascar, and in sundry islands. There is no reason to think that climate had much to do with it, except in the sense that climatic change may sometimes have helped open up a path to those virgin lands in which the hand of man had never set foot, via melting glaciers or low sea level.

I don’t know the numbers, but certainly a large fraction of archeologists and paleontologists, perhaps a majority, don’t believe that human hunters were responsible, or believe that hunting was only one of several factors. Donald Grayson and David Meltzer, for example. Why do they think this? In part I think it is an aversion to simple explanations, a reversal of Ockham’s razor, which is common in these fields. Of course then I have to explain why they would do such a silly thing, and I can’t. Probably some with these opinions are specialists in a particular geographic area, and do not appreciate the power of looking at multiple extinction events: it’s pretty hard to argue that the climate just happened to change whenever people showed up, when it happens five or six times.

It might be that belief in specialization is even more of a problem than specialization itself. Lots of the time you have to gather insights and information from several fields to make progress on a puzzle. It seems to me that many researchers aren’t willing to learn much outside their field, even when it’s the only route to the answer. But then, maybe they can’t. I remember an anthropologist who could believe in humans rapidly filling up New Zealand, which is about the size of Colorado, but just couldn’t see how they could have managed to fill up a whole continent in a couple of thousand years. Evidently she didn’t understand geometric growth. She is not alone. I have seen anthropologists argue [The revolution that wasn’t] that increased human density in ancient Africa was driven by the continent ‘finally getting full’, rather than increased intellectual abilities and resulting greater technological sophistication. That’s truly silly. Look, back in those days, technology changed slowly: you would hardly notice significant change over 50k years. Human populations grow far faster than that, given the chance. Imagine a population with three surviving children per couple, which is nothing special: it would grow by a factor of ten million in a thousand years. The average long-term growth rate was very low, but that is because the rate of increase in human capabilities, which determine the carrying capacity, was very slow – not because rapid population growth is difficult or impossible.
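Checking that factor-of-ten-million figure (assuming roughly 25-year generations, which the post does not state):

```python
# Three surviving children per couple = growth factor 1.5 per generation.
generations = 1000 // 25   # ~40 generations per millennium (assumed)
print(1.5 ** generations)  # ~1.1e7: the claimed factor of ten million
```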

I could explain this to my 11-year old twins in five minutes, but I don’t know that I could ever explain it to Brooks and McBrearty.

various comments about climate change

https://westhunt.wordpress.com/2012/05/20/megafaunal-extinctions/#comment-3039
Why do people act as if a slightly more habitable Greenland a millennium ago somehow disproves the statement that the world as a whole was cooler then than now? Motivated reasoning: they want a certain conclusion real bad. At this point it’s become an identifying tribal marker, like left-wingers believing in the innocence of Alger Hiss. And of course they’re mostly just repeating nonsense that some flack dreamed up. Many of the same people will mouth drivel about how a Finn and a Zulu could easily be genetically closer to each other than to other co-ethnics, which is never, ever, true.

When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up for clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong. Movement conservatives have learned this lesson well.

https://westhunt.wordpress.com/2013/09/12/younger-dryas-meteorite/
It has been suggested that a large meteorite was responsible for an odd climatic twitch from about 12,800 to 11,500 years ago (the Younger Dryas, a temporary return to glacial conditions in the Northern Hemisphere) and for the extinction of the large mammals of North America. They hypothesize air bursts or impacts from a swarm of meteors, centered around the Great Lakes. Probably this is all nonsense.

The topic of the Holocene extinction of megafauna seems to bring out the crazy in people. In my opinion, the people supporting this Younger Dryas impact hypothesis are nuts, and half of their opponents are nuts as well.

...

The problem for that meteorite explanation of North American megafaunal extinction is that South America had an even more varied set of megafauna (gomphotheres, toxodonts, macrauchenia, glyptodonts, giant sloths, etc.) and they went extinct around the same time (probably a few hundred years later). There’s no way for a hit around the Great Lakes to wipe out stuff in Patagonia, barring a huge, dinosaur-killer type hit that throws a tremendous amount of debris into suborbital trajectories. But that would have hit the entire world… Didn’t happen.

https://westhunt.wordpress.com/2012/05/26/redlining/
If you take too many chances in the process of making a living, you’ll get yourself killed before you manage to raise a family. Therefore there is a maximum sustainable risk per calorie acquired from hunting *. If the average member of the species incurs too much risk, more than that sustainable maximum, the species goes extinct. The Neanderthals must have come closer to that red line than anatomically modern humans in Africa, judging from their beat-up skeletons, which resemble those of rodeo riders. They were almost entirely carnivorous, judging from isotopic studies, and that helps us understand all those fractures: they apparently had limited access to edible plants, which entail far lower risks. Tubers and berries seldom break your ribs.
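
A back-of-the-envelope way to formalize that red line (my sketch, not anything from the post): let p be the probability of death per calorie acquired, and C the calories a hunter must acquire to raise a family to independence. The chance of surviving that acquisition is roughly

    (1 - p)^C ≈ e^{-pC}

and if replacement requires this to stay above some floor s, the maximum sustainable risk is p ≤ ln(1/s) / C. Anything that raises C without lowering p – like wasting most of each dangerous kill – pushes a population toward the line.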

...

Risk per calorie was particularly high among the Neanderthals because they seem to have had no way of storing meat – they had no drying racks or storage pits in frozen ground like those used by their successors. Think of it this way: storage allows more complete usage of a large carcass such as a bison, which might weigh over a thousand pounds – it wouldn’t be easy to eat all of that before it went bad. Higher utilization – using all of the buffalo – drops the risk per calorie.

You might think that they could have chased rabbits or whatever, but that is relatively unrewarding. It works a lot better if you can use nets or snares, but no evidence of such devices has been found among the Neanderthals.

It looks as if the Neanderthals had health insurance: surely someone else fed them while they were recovering from being hurt. You see the same pattern, to a degree, in lions, and it probably existed in sabertooths as well, since they often exhibit significant healed injuries.

...

So we can often understand the pattern, but why were mammoths rapidly wiped out in the Americas while elephants survived in Africa and south Asia? I offer several possible explanations. First, North American mammoths had no evolved behavioral defenses against man – while Old World elephants had had time to acquire such adaptations. That may have made hunting Old World elephants far more dangerous, and therefore less attractive. Second, there are areas in Africa that are almost uninhabitable, due to the tsetse fly. They may have acted as natural game preserves, and there are no equivalents in the Americas. Third, the Babel effect: in the early days, Paleoindians likely had not yet split into different ethnic groups with different languages: with less fighting among the early Indians, animals would not have had relatively empty border regions acting as refugia. Also, with fewer human-caused casualties, Paleoindians could have taken more risks in hunting.

https://westhunt.wordpress.com/2013/09/18/hunter-gatherer-fish-and-game-laws/
I don’t think that there are any. But then how did they manage to be one-with-the-land custodians of wildlife? Uh….

Conservation is hard. Even if the population as a whole would be better off if a given prey species persisted in fair numbers, any single individual would benefit from cheating – even from eating the very last mammoth.
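
The incentive structure is the usual commons problem; a minimal numerical illustration (all quantities hypothetical):

    Sparing the herd is worth V to the band in future hunting, split n ways;
    killing the last animals now is worth g to whoever does it.
    Abstain: V/n per person.  Cheat: g.
    Cheating pays whenever g > V/n – easy to satisfy for large n, even when V ≫ g.

So the species gets eaten unless someone can punish defectors.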

More complicated societies, with private property and draconian laws against poaching, do better, but even they don’t show much success in preserving a tasty prey species over the long haul. Consider the aurochs, the wild ancestor of the cow. The Indian version seems to have been wiped out 4,000-5,000 years ago. The Eurasian version was still common in Roman times, but was rare by the 13th century, surviving only in Poland. Theoretically, only members of the Piast dynasty could hunt aurochsen – but they still went extinct in 1627.

How then did edible species survive in pre-state societies? I can think of several ways in which some species managed to survive … [more]
west-hunter  sapiens  antiquity  rant  nature  occam  thick-thin  migration  scitariat  info-dynamics  multi  archaics  nihil  archaeology  kumbaya-kult  the-trenches  discussion  speculation  ideas  environment  food  energy-resources  farmers-and-foragers  history  bio  malthus  cooperate-defect  property-rights  free-riding  public-goodish  alt-inst  population  density  multiplicative  technology  iteration-recursion  magnitude  quantitative-qualitative  study  contradiction  no-go  spreading  death  interests  climate-change  epistemic  truth  coalitions  left-wing  right-wing  science  poast  europe  nordic  agriculture  efficiency  tribalism  signaling  us-them  leviathan  duty  cohesion  organizing  axelrod  westminster  preference-falsification  illusion  inference  apollonian-dionysian 
november 2016 by nhaliday
Son of low-hanging fruit | West Hunter
You see, you can think of the thunderstorm, after a ground discharge, as a vertical dipole. Its electric field drops as the cube of altitude. The threshold field for atmospheric breakdown is proportional to pressure, while pressure drops exponentially with altitude: and as everyone knows, a negative exponential drops faster than any power.

The curves must cross. Electrical breakdown occurs. Weird lightning, way above the clouds.
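
In symbols (a sketch; H is the atmospheric scale height, about 7-8 km):

    E_storm(z) ∝ z^{-3}              (far field of a vertical dipole)
    E_breakdown(z) ∝ p(z) ∝ e^{-z/H}  (breakdown threshold tracks pressure)

    E_storm(z) / E_breakdown(z) ∝ e^{z/H} / z^3 → ∞ as z grows,

so above some altitude the storm's field exceeds the local breakdown threshold, and the thin air discharges.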

As I said, people reported sprites at least a hundred years ago, and they have probably been observed occasionally since the dawn of time. However, they’re far easier to see if you’re above the clouds – pilots often see them.

Pilots also learned not to talk about it, because nobody listened. Military and commercial pilots have to pass periodic medical exams known as ‘flight physicals’, and there was a suspicion that reporting glowing red cephalopods in the sky might interfere with that. Generally, you had to see the things that were officially real (whether they were really real or not), and only those things.

Sprites became real when someone recorded one by accident on a fast camera in 1989. Since then it’s turned into a real subject, full of strangeness: turns out that thunderstorms sometimes generate gamma-rays and even antimatter.
west-hunter  physics  cocktail  stories  history  thick-thin  low-hanging  applications  bounded-cognition  error  epistemic  management  scitariat  info-dynamics  ideas  discovery  the-trenches  alt-inst  trivia  theory-practice  is-ought  being-right  magnitude  intersection-connectedness  sky  electromag  fire  inference  apollonian-dionysian  consilience  elegance 
november 2016 by nhaliday
Epistemic learned helplessness - Jackdaws love my big sphinx of quartz
I don’t think I’m overselling myself too much to expect that I could argue circles around the average uneducated person. Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!” Or, more plausibly, “Shut up I don’t want to talk about this!”

And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology, rather than those of the universally reviled crackpots who write books about Venus being a comet.

You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.

(This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)
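
Spelled out in standard Bayesian terms (my notation): with hypothesis H and evidence E = “the argument sounds convincing”,

    P(H | E) = P(E | H) P(H) / [ P(E | H) P(H) + P(E | ¬H) P(¬H) ]

If P(E | H) = P(E | ¬H) – convincing arguments exist for true and false claims alike – the likelihood ratio is 1 and P(H | E) = P(H): the posterior equals the prior.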

...

Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.

A friend tells me of a guy who once accepted fundamentalist religion because of Pascal’s Wager. I will provisionally admit that this person “takes ideas seriously”. Everyone else gets partial credit, at best.

...

Responsible doctors are at the other end of the spectrum from terrorists here. I once heard someone rail against how doctors totally ignored all the latest and most exciting medical studies. The same person, practically in the same breath, then railed against how 50% to 90% of medical studies are wrong. These two observations are not unrelated. Not only are there so many terrible studies, but pseudomedicine (not the stupid homeopathy type, but the type that links everything to some obscure chemical on an out-of-the-way metabolic pathway) has, for me, proven much like pseudohistory – unless I am an expert in that particular subsubfield of medicine, it can sound very convincing even when it’s very wrong.

The medical establishment offers a shiny tempting solution. First, a total unwillingness to trust anything, no matter how plausible it sounds, until it’s gone through an endless cycle of studies and meta-analyses. Second, a bunch of Institutes and Collaborations dedicated to filtering through all these studies and analyses and telling you what lessons you should draw from them.

I’m glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains. It seems to me that although these people are more likely to become terrorists or Velikovskians or homeopaths, they’re also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.

But I’m also glad epistemic learned helplessness exists. It seems like a pretty useful social safety valve most of the time.
yvain  essay  thinking  rationality  philosophy  reflection  ratty  ssc  epistemic  🤖  2013  minimalism  intricacy  p:null  info-dynamics  truth  reason  s:**  contrarianism  subculture  inference  bayesian  priors-posteriors  debate  rhetoric  pessimism  nihil  spreading  flux-stasis  robust  parsimony  dark-arts  illusion 
october 2016 by nhaliday
Noise: dinosaurs, syphilis, and all that | West Hunter
Generally speaking, I thought the paleontologists were a waste of space: innumerate, ignorant about evolution, and simply not very smart.

None of them seemed to understand that a sharp, short unpleasant event is better at causing a mass extinction, since it doesn’t give flora and fauna time to adapt.

Most seemed to think that gradual change caused by slow geological and erosion forces was ‘natural’, while extraterrestrial impact was not. But if you look at the Moon, or Mars, or the Kirkwood gaps in the asteroids, or think about the KAM theorem, it is apparent that Newtonian dynamics implies that orbits will be perturbed, and that sometimes there will be catastrophic cosmic collisions. Newtonian dynamics is as ‘natural’ as it gets: paleontologists not studying it in school and not having much math hardly makes it ‘unnatural’.

One of the more interesting general errors was not understanding how to deal with noise – incorrect observations. There’s a lot of noise in the paleontological record. Dinosaur bones can be eroded and redeposited well after their lifetimes – well after the extinction of all dinosaurs. The fossil record is patchy: if a species is rare, it can easily look as if it went extinct well before it actually did. This means that the data we have is never going to agree with a perfectly correct hypothesis – because some of the data is always wrong. Particularly true if the hypothesis is specific and falsifiable. If your hypothesis is vague and imprecise – not even wrong – it isn’t nearly as susceptible to noise. As far as I can tell, a lot of paleontologists [along with everyone in the social sciences] think of unfalsifiability as a strength.

Done Quickly: https://westhunt.wordpress.com/2011/12/03/done-quickly/
I’ve never seen anyone talk about it much, but when you think about mass extinctions, you also have to think about rates of change.

You can think of a species occupying a point in a many-dimensional space, where each dimension represents some parameter that influences survival and/or reproduction: temperature, insolation, nutrient concentrations, oxygen partial pressure, toxin levels, yada yada yada. That point lies within a zone of habitability – the set of environmental conditions that the species can survive. Mass extinction occurs when environmental changes are so large that many species are outside their comfort zone.

The key point is that, with gradual change, species adapt. In just a few generations, you can see significant heritable responses to a new environment. Frogs have evolved much greater tolerance of acidification in 40 years (about 15 generations). Some plants in California have evolved much greater tolerance of copper in just 70 years.

As this happens, the boundaries of the comfort zone move. Extinctions occur when the rate of environmental change is greater than the rate of adaptation, or when the amount of environmental change exceeds the limit of feasible adaptation. There are such limits: bar-headed geese fly over Mt. Everest, where the oxygen partial pressure is about a third of that at sea level, but I’m pretty sure that no bird could survive on the Moon.
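
A toy version of that model – one environmental dimension, with a cap on how fast the trait can track it – makes the rate-versus-magnitude point concrete. All parameters here are hypothetical; a minimal Python sketch:

    def survives(env_steps, max_adapt=0.5, tolerance=3.0):
        """True if the species tracks the drifting environment without
        ever falling more than `tolerance` outside its comfort zone."""
        trait, env = 0.0, 0.0
        for step in env_steps:                    # one entry per generation
            env += step                           # environmental change
            gap = env - trait
            # adaptation: move toward the environment, capped per generation
            trait += max(-max_adapt, min(max_adapt, gap))
            if abs(env - trait) > tolerance:
                return False                      # outside the habitable zone
        return True

    # Same total change (40 units), delivered at different rates:
    print(survives([0.4] * 100))   # True  - gradual change is tracked
    print(survives([8.0] * 5))     # False - abrupt change outruns adaptation

Gradual change is survivable; the same total displacement delivered quickly is not.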

...

Paleontologists prefer gradualist explanations for mass extinctions, but they must be wrong, for the most part.
disease  science  critique  rant  history  thinking  regularizer  len:long  west-hunter  thick-thin  occam  social-science  robust  parasites-microbiome  early-modern  parsimony  the-trenches  bounded-cognition  noise-structure  signal-noise  scitariat  age-of-discovery  sex  sexuality  info-dynamics  alt-inst  map-territory  no-go  contradiction  dynamical  math.DS  space  physics  mechanics  archaeology  multi  speed  flux-stasis  smoothness  evolution  environment  time  shift  death  nihil  inference  apollonian-dionysian  error  explanation  spatial  discrete  visual-understanding  consilience  traces  evidence  elegance 
september 2016 by nhaliday
Coefficient of relationship - Wikipedia, the free encyclopedia
relatedness by consanguinity

Average percent DNA shared between relatives – 23andMe Customer Care: https://customercare.23andme.com/hc/en-us/articles/212170668-Average-percent-DNA-shared-between-relatives
summary of relatedness by consanguinity
shouldn't it be 2^-4 ~ 6% for first cousins?
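Answering that note with the standard path-counting rule (textbook, not from the page): first cousins have two common ancestors (the shared grandparents), and each connecting path has length 4, so

    r = 2 × (1/2)^4 = 1/8 = 12.5%

The 2^{-4} ≈ 6% figure counts only one of the two paths.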
wiki  sapiens  reference  genetics  evolution  concept  cheatsheet  kinship  metrics  intersection-connectedness  multi  brands  biotech  data  graphs  trees  magnitude  identity  estimate  measurement  inference 
july 2016 by nhaliday
Math attic
includes a nice visualization of implications between properties of topological spaces
math  visualization  visual-understanding  metabuch  techtariat  graphs  topology  synthesis  math.GN  separation  metric-space  zooming  inference  cheatsheet 
march 2016 by nhaliday
Lean
https://lean-forward.github.io
The goal of the Lean Forward project is to collaborate with number theorists to formally prove theorems about research mathematics and to address the main usability issues hampering the adoption of proof assistants in mathematical circles. The theorems will be selected together with our collaborators to guide the development of formal libraries and verified tools.

mostly happening in the Netherlands

https://formalabstracts.github.io

A Review of the Lean Theorem Prover: https://jiggerwit.wordpress.com/2018/09/18/a-review-of-the-lean-theorem-prover/
- Thomas Hales
seems like Coq might be a better starter if I ever try to get into proof assistants/theorem provers

edit: on second thought this actually seems like a wash for beginners

An Argument for Controlled Natural Languages in Mathematics: https://jiggerwit.wordpress.com/2019/06/20/an-argument-for-controlled-natural-languages-in-mathematics/
By controlled natural language for mathematics (CNL), we mean an artificial language for the communication of mathematics that is (1) designed in a deliberate and explicit way with precise computer-readable syntax and semantics, (2) based on a single natural language (such as Chinese, Spanish, or English), and (3) broadly understood at least in an intuitive way by mathematically literate speakers of the natural language.

The definition of controlled natural language is intended to exclude invented languages such as Esperanto and Lojban that are not based on a single natural language. Programming languages are meant to be excluded, but a case might be made for TeX as the first broadly adopted controlled natural language for mathematics.

Perhaps it is best to start with an example. Here is a beautifully crafted CNL text created by Peter Koepke and Steffen Frerix. It reproduces a theorem and proof in Rudin’s Principles of mathematical analysis almost word for word. Their automated proof system is able to read and verify the proof.

https://github.com/Naproche/Naproche-SAD
research  math  formal-methods  msr  multi  homepage  research-program  skunkworks  math.NT  academia  ux  CAS  mathtariat  expert-experience  cost-benefit  nitty-gritty  review  critique  rant  types  learning  intricacy  functional  performance  c(pp)  ocaml-sml  comparison  ecosystem  DSL  tradeoffs  composition-decomposition  interdisciplinary  europe  germanic  grokkability  nlp  language  heavyweights  inference  rigor  automata-languages  repo  software  tools  syntax  frontier  state-of-art  pls  grokkability-clarity  technical-writing  database  lifts-projections 
january 2016 by nhaliday
