nhaliday + complex-systems   96

Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production and comprehension of speech respectively, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. RH paintings, in particular, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, the LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy, and the ability to notice emotional nuance expressed facially, vocally and bodily, are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious to them.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split-brain patients, where the LH and the RH are surgically divided (this is sometimes done in the case of epileptic patients), one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes an LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, an RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that, in any consistent formal system rich enough to express arithmetic, not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no single consistent, effectively specifiable set of axioms from which all other truths can be derived.

Alan Turing’s proof of the undecidability of the halting problem shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
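
The diagonal argument behind Turing's result fits in a few lines. A minimal sketch (mine, not the blog's; `halts` is a hypothetical oracle that cannot actually exist):

```python
# Sketch of Turing's diagonalization. Assume a hypothetical oracle
# halts(f, x) returning True iff running f(x) would halt.
def paradox(halts):
    def g(f):
        if halts(f, f):      # if the oracle says g(g) halts...
            while True:      # ...loop forever instead
                pass
        return 0             # otherwise halt immediately
    return g(g)              # g(g) halts iff halts(g, g) says it doesn't
```

Any candidate `halts` gives the wrong answer on `g`, so no such decision procedure exists – which is the sense in which there is no effective procedure for vetting effective procedures.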
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
An adaptability limit to climate change due to heat stress
Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedance of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.
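
For playing with the paper's key quantity, Stull's (2011) empirical formula approximates TW from air temperature and relative humidity – my choice of approximation, not the authors' method, and valid only for roughly 5–99% RH and −20–50 °C near sea level:

```python
import math

def wet_bulb(T, RH):
    """Approximate wet-bulb temperature (deg C) from air temp T (deg C) and RH (%)."""
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH) - math.atan(RH - 1.676331)
            + 0.00391838 * RH**1.5 * math.atan(0.023101 * RH)
            - 4.686035)

print(f"{wet_bulb(35, 75):.1f} C")   # ~31 C: about today's observed maximum
print(f"{wet_bulb(42, 70):.1f} C")   # ~37 C: beyond the 35 C survivability limit
```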

Trajectories of the Earth System in the Anthropocene: http://www.pnas.org/content/early/2018/07/31/1810141115
We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be.
study  org:nat  environment  climate-change  humanity  existence  risk  futurism  estimate  physics  thermo  prediction  temperature  nature  walls  civilization  flexibility  rigidity  embodied  multi  manifolds  plots  equilibrium  phase-transition  oscillation  comparison  complex-systems  earth 
august 2018 by nhaliday
Eliminative materialism - Wikipedia
Eliminative materialism (also called eliminativism) is the claim that people's common-sense understanding of the mind (or folk psychology) is false and that certain classes of mental states that most people believe in do not exist.[1] It is a materialist position in the philosophy of mind. Some supporters of eliminativism argue that no coherent neural basis will be found for many everyday psychological concepts such as belief or desire, since they are poorly defined. Rather, they argue that psychological concepts of behaviour and experience should be judged by how well they reduce to the biological level.[2] Other versions entail the non-existence of conscious mental states such as pain and visual perceptions.[3]

Eliminativism about a class of entities is the view that that class of entities does not exist.[4] For example, materialism tends to be eliminativist about the soul; modern chemists are eliminativist about phlogiston; and modern physicists are eliminativist about the existence of luminiferous aether. Eliminative materialism is the relatively new (1960s–1970s) idea that certain classes of mental entities that common sense takes for granted, such as beliefs, desires, and the subjective sensation of pain, do not exist.[5][6] The most common versions are eliminativism about propositional attitudes, as expressed by Paul and Patricia Churchland,[7] and eliminativism about qualia (subjective interpretations about particular instances of subjective experience), as expressed by Daniel Dennett and Georges Rey.[3] These philosophers often appeal to an introspection illusion.

In the context of materialist understandings of psychology, eliminativism stands in opposition to reductive materialism which argues that mental states as conventionally understood do exist, and that they directly correspond to the physical state of the nervous system.[8] An intermediate position is revisionary materialism, which will often argue that the mental state in question will prove to be somewhat reducible to physical phenomena—with some changes needed to the common sense concept.

Since eliminative materialism claims that future research will fail to find a neuronal basis for various mental phenomena, it must necessarily wait for science to progress further. One might question the position on these grounds, but other philosophers like Churchland argue that eliminativism is often necessary in order to open the minds of thinkers to new evidence and better explanations.[8]
concept  conceptual-vocab  philosophy  ideology  thinking  metameta  weird  realness  psychology  cog-psych  neurons  neuro  brain-scan  reduction  complex-systems  cybernetics  wiki  reference  parallax  truth  dennett  within-without  the-self  subjective-objective  absolute-relative  deep-materialism  new-religion  identity  analytical-holistic  systematic-ad-hoc  science  theory-practice  theory-of-mind  applicability-prereqs  nihil  lexical 
april 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous system. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes
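
As arithmetic, the two bullet points say the brain wins on component count by about a factor of 10^6, while the machines of the time won on per-step speed by about 5×10^3:

```python
neurons, tubes = 1e10, 1e4        # component counts from the lecture
neuron_ms, tube_ms = 5.0, 1e-3    # per-operation times, milliseconds

print(f"count ratio (brain/machine): {neurons / tubes:.0e}")      # 1e+06
print(f"speed ratio (machine/brain): {neuron_ms / tube_ms:.0e}")  # 5e+03
```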

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Society of Mind - Wikipedia
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

https://en.wikipedia.org/wiki/Modularity_of_mind

The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
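
A minimal sketch of the idea – not the authors' algorithm, and assuming a recent networkx with Louvain community detection built in: wire a toy spatial network with a distance-penalized rule, call the improbably long edges "unexpected" under that cost-reduction rule, and detect modules among them:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 200
pos = rng.random((n, 2))                       # toy "anatomical" node coordinates

G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        d = float(np.linalg.norm(pos[i] - pos[j]))
        if rng.random() < np.exp(-6 * d):      # wiring-cost rule: short edges likely
            G.add_edge(i, j, dist=d)

# "unexpected" edges: ones the cost-reduction rule makes improbable (the long ones)
cutoff = np.median([e["dist"] for _, _, e in G.edges(data=True)])
H = G.edge_subgraph([(u, v) for u, v, e in G.edges(data=True) if e["dist"] > cutoff])

modules = nx.community.louvain_communities(H, seed=0)
print(len(modules), "modules among longer-than-expected connections")
```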
books  ideas  speculation  structure  composition-decomposition  complex-systems  neuro  ai  psychology  cog-psych  intelligence  reduction  wiki  giants  philosophy  number  cohesion  diversity  systematic-ad-hoc  detail-architecture  pdf  study  neuro-nitgrit  brain-scan  nitty-gritty  network-structure  graphs  graph-theory  models  whole-partial-many  evopsych  eden  reference  psych-architecture  article 
april 2018 by nhaliday
Ultimate fate of the universe - Wikipedia
The fate of the universe is determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below.[8] However, observations are not conclusive, and alternative models are still possible.[9]

Big Freeze or heat death
Main articles: Future of an expanding universe and Heat death of the universe
The Big Freeze is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature.[10] This scenario, in combination with the Big Rip scenario, is currently gaining ground as the most important hypothesis.[11] It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, which themselves will disappear over time as they emit Hawking radiation.[12] Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations,[13][14] and the fluctuation theorem.[15][16]

A related scenario is heat death, which states that the universe goes to a state of maximum entropy in which everything is evenly distributed and there are no gradients—which are needed to sustain information processing, one form of which is life. The heat death scenario is compatible with any of the three spatial models, but requires that the universe reach an eventual temperature minimum.[17]
physics  big-picture  world  space  long-short-run  futurism  singularity  wiki  reference  article  nibble  thermo  temperature  entropy-like  order-disorder  death  nihil  bio  complex-systems  cybernetics  increase-decrease  trends  computation  local-global  prediction  time  spatial  spreading  density  distribution  manifolds  geometry  janus 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
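
For concreteness, one simple lumpiness measure for citation counts is a Gini coefficient – my illustration; the Science paper uses its own statistics:

```python
import numpy as np

def gini(citations):
    """0 = every paper cited equally; near 1 = one paper takes almost all citations."""
    x = np.sort(np.asarray(citations, dtype=float))
    n = len(x)
    return ((2 * np.arange(1, n + 1) - n - 1) @ x) / (n * x.sum())

smooth = np.full(1000, 10.0)                        # every paper cited 10 times
lumpy = np.random.default_rng(0).pareto(1.5, 1000)  # heavy-tailed citation counts
print(f"smooth field: {gini(smooth):.2f}, lumpy field: {gini(lumpy):.2f}")
```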

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
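
That parenthetical is easy to check by simulation (my toy model, not Hanson's): give people fully independent module abilities, let every task draw on many modules, and a positive manifold with a dominant first factor appears anyway:

```python
import numpy as np

rng = np.random.default_rng(0)
people, n_modules, n_tasks, k = 1000, 50, 20, 10

ability = rng.normal(size=(people, n_modules))   # independent module abilities
tasks = [rng.choice(n_modules, size=k, replace=False) for _ in range(n_tasks)]
perf = np.stack([ability[:, t].mean(axis=1) for t in tasks], axis=1)

corr = np.corrcoef(perf, rowvar=False)
print(f"mean inter-task correlation: {corr[np.triu_indices(n_tasks, 1)].mean():.2f}")
print(f"first-factor variance share: {np.linalg.eigvalsh(corr)[-1] / n_tasks:.2f}")
# positive correlations and a big first factor, with zero shared "g" built in
```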

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Antinomia Imediata – experiments in a reaction from the left
https://antinomiaimediata.wordpress.com/lrx/
So, what is the Left Reaction? First of all, it’s reaction: opposition to the modern rationalist establishment, the Cathedral. It opposes the universalist Jacobin program of global government, favoring a fractured geopolitics organized through long-evolved complex systems. It’s profoundly anti-socialist and anti-communist, favoring market economy and individualism. It abhors tribalism and seeks a realistic plan for dismantling it (primarily informed by HBD and HBE). It looks at modernity as a degenerative ratchet, whose only way out is intensification (hence clinging to crypto-marxist market-driven acceleration).

How can any of this still be on the *Left*? It defends equality of power, i.e. freedom. This radical understanding of liberty is deeply rooted in leftist tradition and has been consistently abhorred by the Right. LRx is not democrat, is not socialist, is not progressist and is not even liberal (in its current, American use). But it defends equality of power. Its utopia is individual sovereignty. Its method is paleo-agorism. The anti-hierarchy of hunter-gatherer nomads is its understanding of the only realistic objective of equality.

...

In more cosmic terms, it seeks only to fulfill the Revolution’s side in the left-right intelligence pump: mutation or creation of paths. Proudhon’s antinomy is essentially about this: the collective force of the socius, evinced in moral standards and social organization vs the creative force of the individuals, that constantly revolutionize and disrupt the social body. The interplay of these forces creates reality (it’s a metaphysics indeed): the Absolute (socius) builds so that the (individualistic) Revolution can destroy so that the Absolute may adapt, and then repeat. The good old formula of ‘solve et coagula’.

Ultimately, if the Neoreaction promises eternal hell, the LRx sneers “but Satan is with us”.

https://antinomiaimediata.wordpress.com/2016/12/16/a-statement-of-principles/
Liberty is to be understood as the ability and right of all sentient beings to dispose of their persons and the fruits of their labor, and nothing else, as they see fit. This stems from their self-awareness and their ability to control and choose the content of their actions.

...

Equality is to be understood as the state of no imbalance of power, that is, of no subjection to another sentient being. This stems from their universal ability for empathy, and from their equal ability for reason.

...

It is important to notice that, contrary to usual statements of these two principles, my standpoint is that Liberty and Equality here are not merely compatible, meaning they could coexist in some possible universe, but rather they are two sides of the same coin, complementary and interdependent. There can be NO Liberty where there is no Equality, for the imbalance of power, the state of subjection, will render sentient beings unable to dispose of their persons and the fruits of their labor[1], and it will limit their ability to choose over their rightful jurisdiction. Likewise, there can be NO Equality without Liberty, for restraining sentient beings’ ability to choose and dispose of their persons and fruits of labor will render some more powerful than the rest, and establish a state of subjection.

https://antinomiaimediata.wordpress.com/2017/04/18/flatness/
equality is the founding principle of (and is ultimately indistinguishable from) freedom. of course, it’s only in one specific sense of “equality” that this sentence is true.

to try and eliminate the bullshit, let’s turn to networks again:

a node’s degrees of freedom are the number of nodes it is connected to in a network. freedom is maximal when the network is symmetrically connected, i. e., when all nodes are connected to each other and thus there is no topographical hierarchy (middlemen) – in other words, flatness.
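
in toy form (my sketch of the definition above, not the blog's code), a fully connected network versus a hub-and-spoke one:

```python
import networkx as nx

n = 8
flat = nx.complete_graph(n)      # all nodes connected: no middlemen
hier = nx.star_graph(n - 1)      # one hub, n-1 spokes: maximal middleman

for name, g in [("flat", flat), ("hierarchical", hier)]:
    degrees = [d for _, d in g.degree()]
    print(name, "mean degrees of freedom per node:", sum(degrees) / len(degrees))
# flat: 7.0; hierarchical: 1.75 -- flatness maximizes freedom in this sense
```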

in this understanding, the maximization of freedom is the maximization of entropy production, that is, of intelligence. As Land puts it:

https://antinomiaimediata.wordpress.com/category/philosophy/mutualism/
gnon  blog  stream  politics  polisci  ideology  philosophy  land  accelerationism  left-wing  right-wing  paradox  egalitarianism-hierarchy  civil-liberty  power  hmm  revolution  analytical-holistic  mutation  selection  individualism-collectivism  tribalism  us-them  modernity  multi  tradeoffs  network-structure  complex-systems  cybernetics  randy-ayndy  insight  contrarianism  metameta  metabuch  characterization  cooperate-defect  n-factor  altruism  list  coordination  graphs  visual-understanding  cartoons  intelligence  entropy-like  thermo  information-theory  order-disorder  decentralized  distribution  degrees-of-freedom  analogy  graph-theory  extrema  evolution  interdisciplinary  bio  differential  geometry  anglosphere  optimate  nascent-state  deep-materialism  new-religion  cool  mystic  the-classics  self-interest  interests  reason  volo-avolo  flux-stasis  invariance  government  markets  paying-rent  cost-benefit  peace-violence  frontier  exit-voice  nl-and-so-can-you  war  track-record  usa  history  mostly-modern  world-war  military  justice  protestant-cathol 
march 2018 by nhaliday
What are the Laws of Biology?
The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.

These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
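
The motif vocabulary is concrete enough to simulate. A toy sketch in the spirit of Alon-style systems biology (my example, not from the post) of one motif named above, negative auto-regulation, tuned here to the same steady state as simple regulation to show its rapid-onset property:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simple(t, x):
    return 1.35 - x                  # constant production, first-order decay

def nar(t, x):
    beta, K = 5.0, 0.5               # chosen so the steady state matches ~1.35
    return beta * K / (K + x) - x    # production repressed by the gene's own product

for name, f in [("simple regulation", simple), ("negative auto-regulation", nar)]:
    sol = solve_ivp(f, (0, 8), [0.0], max_step=0.02)
    x = sol.y[0]
    t_half = sol.t[np.argmax(x >= 0.5 * x[-1])]
    print(f"{name}: steady state {x[-1]:.2f}, half-rise at t = {t_half:.2f}")
# negative auto-regulation reaches the same level several times faster
```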
scitariat  reflection  proposal  ideas  thinking  conceptual-vocab  lens  bio  complex-systems  selection  evolution  flux-stasis  network-structure  structure  composition-decomposition  IEEE  robust  signal-noise  perturbation  interdisciplinary  graphs  circuits  🌞  big-picture  hi-order-bits  nibble  synthesis 
november 2017 by nhaliday
Europa, Enceladus, Moon Miranda | West Hunter
A lot of ice moons seem to have interior oceans, warmed by tidal flexing and possibly radioactivity. But they’re lousy candidates for life, because you need free energy; and there’s very little of it in the interior oceans of such systems.

It is possible that NASA is institutionally poor at pointing this out.
west-hunter  scitariat  discussion  ideas  rant  speculation  prediction  government  dirty-hands  space  xenobio  oceans  fluid  thermo  phys-energy  temperature  no-go  volo-avolo  physics  equilibrium  street-fighting  nibble  error  track-record  usa  bio  eden  cybernetics  complex-systems 
september 2017 by nhaliday
Of Mice and Men | West Hunter
It’s not always easy figuring out how a pathogen causes disease. There is an example in mice for which the solution was very difficult, so difficult that we would probably have failed to discover the cause of a similarly obscure infectious disease in humans.

Mycoplasma pulmonis causes a chronic obstructive lung disease in mice, but it wasn’t easy to show this. The disease was first described in 1915, and by 1940, people began to suspect Mycoplasma pulmonis might be the cause. But then again, maybe not. It was often found in mice that seemed healthy. Pure cultures of this organism did not consistently produce lung disease – which means that it didn’t satisfy Koch’s postulates, in particular postulate 1 (The microorganism must be found in abundance in all organisms suffering from the disease, but should not be found in healthy organisms.) and postulate 3 (The cultured microorganism should cause disease when introduced into a healthy organism.).

Well, those postulates are not logic itself, but rather a useful heuristic. Koch knew that, even if lots of other people don’t.

This respiratory disease of mice is long-lasting, but slow to begin. It can take half a lifetime – a mouse lifetime, that is – and that made finding the cause harder. It required patience, which means I certainly couldn’t have done it.

Here’s how they solved it. You can raise germ-free mice. In the early 1970s, researchers injected various candidate pathogens into different groups of germ-free mice and waited to see which, if any, developed this chronic lung disease. It was Mycoplasma pulmonis, all right, but it had taken 60 years to find out.

It turned out that susceptibility differed between different mouse strains – genetic susceptibility was important. Co-infection with other pathogens affected the course of the disease. Microenvironmental details mattered – mainly ammonia in cages where the bedding wasn’t changed often enough. But it didn’t happen without that mycoplasma, which was a key causal link, something every engineer understands but many MDs don’t.

If there was a similarly obscure infectious disease of humans, say one that involved a fairly common bug found in both the just and the unjust, one that took decades for symptoms to manifest – would we have solved it? Probably not.

Cooties are everywhere.

gay germ search: https://westhunt.wordpress.com/2013/07/21/of-mice-and-men/#comment-15905
It’s hard to say, depends on how complicated the path of causation is. Assuming that I’m even right, of course. Some good autopsy studies might be fruitful – you’d look for microanatomical brain differences, as with narcolepsy. Differences in gene expression, maybe. You could look for a pathogen – using the digital version of RDA (representational difference analysis), say on discordant twins. Do some old-fashioned epidemiology. Look for marker antibodies, signs of some sort of immunological event.

Do all of the above on gay rams – lots easier to get started, much less whining from those being vivisected.

Patrick Moore found the virus causing Kaposi’s sarcoma without any funding at all. I’m sure Peter Thiel could afford a serious try.
west-hunter  scitariat  discussion  ideas  reflection  analogy  model-organism  bio  disease  parasites-microbiome  medicine  epidemiology  heuristic  thick-thin  stories  experiment  track-record  intricacy  gotchas  low-hanging  🌞  patience  complex-systems  meta:medicine  multi  poast  methodology  red-queen  brain-scan  neuro  twin-study  immune  nature  gender  sex  sexuality  thiel  barons  gwern  stylized-facts  inference  apollonian-dionysian 
september 2017 by nhaliday
All models are wrong - Wikipedia
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.
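
Box's example can be made numerical. A minimal sketch, assuming standard textbook van der Waals constants for CO2 (the constants and conditions are my choices, not Box's):

```python
R = 0.083145            # gas constant, L*bar/(mol*K)
a, b = 3.640, 0.04267   # van der Waals constants for CO2 (assumed textbook values)
T, P = 300.0, 10.0      # kelvin, bar

v_ideal = R * T / P                  # "wrong but useful" ideal-gas estimate
v = v_ideal
for _ in range(100):                 # fixed-point iteration on (P + a/v^2)(v - b) = RT
    v = R * T / (P + a / v**2) + b

print(f"ideal gas: {v_ideal:.3f} L/mol, van der Waals: {v:.3f} L/mol")
# the one-line ideal law lands within a few percent of the fancier model here
```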

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
thinking  metabuch  metameta  map-territory  models  accuracy  wire-guided  truth  philosophy  stats  data-science  methodology  lens  wiki  reference  complex-systems  occam  parsimony  science  nibble  hi-order-bits  info-dynamics  the-trenches  meta:science  physics  fluid  thermo  stat-mech  applicability-prereqs  theory-practice 
august 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).
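
A standard concrete instance of f (my illustration; Kling does not commit to a functional form here) is the Cobb-Douglas production function:

```python
def cobb_douglas(K, L, A=1.0, alpha=0.3):
    """Output Y from capital K and labor L; alpha is capital's share of output."""
    return A * K**alpha * L**(1 - alpha)

print(cobb_douglas(100.0, 50.0))   # baseline output
print(cobb_douglas(200.0, 50.0))   # doubling K alone raises Y by 2**0.3 - 1, ~23%
```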

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
Controversial New Theory Suggests Life Wasn't a Fluke of Biology—It Was Physics | WIRED
First Support for a Physics Theory of Life: https://www.quantamagazine.org/first-support-for-a-physics-theory-of-life-20170726/
Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.
news  org:mag  profile  popsci  bio  xenobio  deep-materialism  roots  eden  physics  interdisciplinary  applications  ideas  thermo  complex-systems  cybernetics  entropy-like  order-disorder  arrows  phys-energy  emergent  empirical  org:sci  org:inst  nibble  chemistry  fixed-point  wild-ideas 
august 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software 
june 2017 by nhaliday
Geologic temperature record - Wikipedia
2100 projection is comparable to early Pliocene/late Miocene, which is before H. sapiens (still plenty of mammals tho)
climate-change  environment  temperature  history  antiquity  time  sequential  wiki  reference  data  visualization  objektbuch  prediction  complex-systems  let-me-see  earth  time-series 
april 2017 by nhaliday
Interview Greg Cochran by Future Strategist
https://westhunt.wordpress.com/2016/08/10/interview/

- IQ enhancement (somewhat apprehensive, wonder why?)
- ~20 years to CRISPR enhancement (very ballpark)
- cloning as an alternative strategy
- environmental effects on IQ, what matters (iodine, getting hit in the head), what doesn't (schools, etc.), and toss-ups (childhood/embryonic near-starvation, disease besides direct CNS-affecting ones [!])
- malnutrition did cause more schizophrenia in Netherlands (WW2) and China (Great Leap Forward) though
- story about New Mexico schools and his children (mostly grad students in physics now)
- clever sillies, weird geniuses, and clueless elites
- life-extension and accidents, half-life ~ a few hundred years for a typical American
- Pinker on Harvard faculty adoptions (always Chinese girls)
- parabiosis, organ harvesting
- Chicago economics talk
- Catholic Church, cousin marriage, and the rise of the West
- Gregory Clark and Farewell to Alms
- retinoblastoma cancer, mutational load, and how to deal w/ it ("something will turn up")
- Tularemia and Stalingrad (ex-Soviet scientist literally mentioned his father doing it)
- germ warfare, nuclear weapons, and testing each
- poison gas, Haber, nerve gas, terrorists, Japan, Syria, and Turkey
- nukes at https://en.wikipedia.org/wiki/Incirlik_Air_Base
- IQ of ancient Greeks
- history of China and the Mongols, cloning Genghis Khan
- Alexander the Great vs. Napoleon, Russian army being late for meetup w/ Austrians
- the reason why to go into Iraq: to find and clone Genghis Khan!
- efficacy of torture
- monogamy, polygamy, and infidelity, the Aboriginal system (reverse aging wives)
- education and twin studies
- errors: passing white, female infanticide, interdisciplinary social science/economic imperialism, the slavery and salt story
- Jewish optimism about environmental interventions, Rabbi didn't want people to know, Israelis don't want people to know about group differences between Ashkenazim and other groups in Israel
- NASA spewing crap on extraterrestrial life (eg, thermodynamic gradient too weak for life in oceans of ice moons)
west-hunter  interview  audio  podcast  being-right  error  bounded-cognition  history  mostly-modern  giants  autism  physics  von-neumann  math  longevity  enhancement  safety  government  leadership  elite  scitariat  econotariat  cracker-econ  big-picture  judaism  iq  recent-selection  🌞  spearhead  gregory-clark  2016  space  xenobio  equilibrium  phys-energy  thermo  no-go  🔬  disease  gene-flow  population-genetics  gedanken  genetics  evolution  dysgenics  assortative-mating  aaronson  CRISPR  biodet  variance-components  environmental-effects  natural-experiment  stories  europe  germanic  psychology  cog-psych  psychiatry  china  asia  prediction  frontier  genetic-load  realness  time  aging  pinker  academia  medicine  economics  chicago  social-science  kinship  tribalism  religion  christianity  protestant-catholic  the-great-west-whale  divergence  roots  britain  agriculture  farmers-and-foragers  time-preference  cancer  society  civilization  russia  arms  parasites-microbiome  epidemiology  nuclear  biotech  deterrence  meta:war  terrorism  iraq-syria  MENA  foreign-poli 
march 2017 by nhaliday
Peter Norvig, the meaning of polynomials, debugging as psychotherapy | Quomodocumque
He briefly showed a demo where, given values of a polynomial, a machine can put together a few lines of code that successfully computes the polynomial. But the code looks weird to a human eye. To compute some quadratic, it nests for-loops and adds things up in a funny way that ends up giving the right output. So has it really “learned” the polynomial? I think in computer science, you typically feel you’ve learned a function if you can accurately predict its value on a given input. For an algebraist like me, a function determines but isn’t determined by the values it takes; to me, there’s something about that quadratic polynomial the machine has failed to grasp. I don’t think there’s a right or wrong answer here, just a cultural difference to be aware of. Relevant: Norvig’s description of “the two cultures” at the end of this long post on natural language processing (which is interesting all the way through!)
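
To illustrate the kind of output he is describing (my reconstruction, not Norvig's actual demo), here is a correct but human-opaque way to compute the quadratic f(x) = x², built the way a program synthesizer might:

```python
def weird_square(x):
    # nested for-loops that "add things up in a funny way":
    # each pass of the outer loop adds 2*i + 1, and the sum of
    # the first x odd numbers is x**2
    total = 0
    for i in range(x):
        for _ in range(i):
            total += 2
        total += 1
    return total

assert all(weird_square(x) == x * x for x in range(50))
```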
mathtariat  org:bleg  nibble  tech  ai  talks  summary  philosophy  lens  comparison  math  cs  tcs  polynomials  nlp  debugging  psychology  cog-psych  complex-systems  deep-learning  analogy  legibility  interpretability 
march 2017 by nhaliday
Information Processing: Greenspan now agrees with Soros; Galbraith interview and a calculation
Easy Question: What growth rate advantage (additional GDP growth rate per annum) would savage, unfettered markets need to generate to justify these occasional disasters?

Answer: an additional 0.1 percent annual GDP growth would be more than enough. That is, an unregulated economy whose growth rate was 0.1 percent higher would, even after paying for each 20-year crisis, be richer than the heavily regulated comparator which avoided the crises but had a lower growth rate.
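
The compounding can be checked directly; the crisis-cost figure below is an assumption of mine, not Hsu's number:

```python
crisis_cost = 0.015     # assume each crisis knocks 1.5% off the GDP level
reg, unf = 1.0, 1.0
for year in range(1, 101):
    reg *= 1.020                   # regulated: 2.0% growth, no crises
    unf *= 1.021                   # unregulated: 2.1% growth...
    if year % 20 == 0:
        unf *= 1 - crisis_cost     # ...minus a crisis every 20 years

print(f"after 100 years -- regulated: {reg:.2f}x, unregulated: {unf:.2f}x")
# break-even is a crisis cost near 1.001**20 - 1, i.e. about 2% of GDP per crisis
```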

Hard Question: would additional regulation decrease economic growth rates by that amount or more?

Unless you think you can evaluate the relative GDP growth effects of two different policy regimes with accuracy of better than 0.1 percent, then the intellectually honest answer to the policy question is: I don't know. No shouting, no shaking your fist, no lecturing other people, no writing op eds, just I don't know. Correct the things that are obviously stupid, but don't overstate your confidence level about additional policy changes.

(Note I'm aware that distributional issues are also important. In the most recent era gains went mostly to a small number of top earners whereas the cost of the bailout will be spread over the whole tax base.)

http://voxeu.org/article/endogenous-growth-and-lack-recovery-global-crisis
Wall St. lending to Main St. even as many decry Dodd-Frank: https://apnews.com/0e4ee980a46549908733afb2f6824def/wall-st-lending-main-st-even-many-decry-dodd-frank
hsu  scitariat  finance  investing  economics  money  commentary  links  tradeoffs  regulation  econ-metrics  complex-systems  cycles  risk  market-failure  cost-benefit  multi  org:ngo  econotariat  wonkish  trends  technology  stagnation  econ-productivity  growth-econ  data  article  chart  macro  news  trump  politics  policy  :/  current-events  cjones-like  events 
february 2017 by nhaliday
Edge Master Class 2008 RICHARD THALER, SENDHIL MULLAINATHAN, DANIEL KAHNEMAN: A SHORT COURSE IN BEHAVIORAL ECONOMICS | Edge.org
https://twitter.com/toad_spotted/status/878990195953205248
huge popularity of "behavioral economics" among powerful people=largely excitement at how much more control they'd exert over stupider people

Time for Behavioral Political Economy? An Analysis of Articles in Behavioral Economics: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1846184
This study analyzes leading research in behavioral economics to see whether it contains advocacy of paternalism and whether it addresses the potential cognitive limitations and biases of the policymakers who are going to implement paternalist policies. The findings reveal that 20.7% of the studied articles in behavioral economics propose paternalist policy action and that 95.5% of these do not contain any analysis of the cognitive ability of policymakers. This suggests that behavioral political economy, in which the analytical tools of behavioral economics are applied to political decision-makers as well, would offer a useful extension of the research program.

https://www.bloomberg.com/view/articles/2017-07-19/some-countries-like-nudges-more-than-others
Research shows that Americans and conservatives can be less open to cues to change behavior.

It’s For Your Own Good!: http://www.nybooks.com/articles/2013/03/07/its-your-own-good/
- Cass Sunstein

Against Autonomy: Justifying Coercive Paternalism
by Sarah Conly
Cambridge University Press, 206 pp., $95.00

WHO NUDGES THE NUDGERS?: https://jacobitemag.com/2017/10/26/who-nudges-the-nudgers/
org:edge  guide  video  lectures  list  expert  economics  behavioral-econ  psychology  cog-psych  unit  complex-systems  🎩  multi  twitter  social  discussion  ratty  unaffiliated  aphorism  hmm  lol  authoritarianism  managerial-state  microfoundations  news  org:mag  org:biz  org:bv  journos-pundits  technocracy  books  review  expert-experience  elite  vampire-squid  study  lmao  data  info-dynamics  error  biases  iq  distribution  pro-rata  impetus  crooked  antidemos  civil-liberty  randy-ayndy  political-econ  gnon  org:popup  paying-rent  incentives  government  world  usa  alien-character  allodium  old-anglo  big-peeps  humility  noblesse-oblige  institutions  interests  org:local  utopia-dystopia 
february 2017 by nhaliday
Links 2/17: Site Your Sources | Slate Star Codex
The United States not only does poorly on the education benchmark PISA, but each decile of wealth also does poorly compared to equivalent deciles in other countries. I find this surprising. Does this torpedo the theory that each US ethnic group does as well as its foreign counterparts, and that US underperformance is a Simpson’s Paradox on ethnic distribution?

Twitter: @EveryoneIsDril.

New Study Finds Performance-Enhancing Drugs For Chess. Okay, fine, just modafinil, which we already knew about, but the exact pattern is interesting. Modafinil makes people take longer to make their moves, but the moves are ultimately better. That suggests that its advantage lies not in increasing IQ per se, but in giving people the increased attention span/concentration to work harder on finding good moves. I think this elegantly ties together a lot of stuff into a good explanation of modafinil’s cognitive-enhancing properties.

New Zealand Wants To Know How Peter Thiel Became A Secret Citizen. Give up, New Zealand; Peter Thiel is a citizen of any country he wants to be a citizen of. Also: Peter Thiel Denies California Governor Run Despite Mysterious Group’s Backing.

I was going to link to the paper Physics Envy May Be Hazardous To Your Wealth, but the part that actually interested me is small enough that I’m just going to include it here as a jpg (h/t Julia Galef).

Nature: Prevalence And Architecture Of De Novo Mutations In Developmental Disorders. There’s been a lot of debate over paternal age effects, and this paper helps clarify that by actually counting people’s de novo mutations and finding that children of older fathers (and to a lesser degree older mothers) have more of them. I am not sure to what degree this answers the objection that fathers with worse genes will tend to get married later; my impression is that it’s circumstantial evidence against (de novo mutations are more specific to paternal age than just bad genes) but not complete disproof.

Psssst, kid, wanna buy a parasitic worm? Key quote: “Those who experience the ‘hookworm bounce’ tend to describe it as ‘feeling as if they are teenagers again'” (h/t pistachi0n).

New paper in Crime And Delinquency: “We find no evidence that the number of fatal police shootings either increased or decreased post-Ferguson. Claims to the contrary are based on weak analyses of short-term trends.” This is especially surprising in light of claims that increased inner-city crime is caused by police withdrawing in order to prevent further fatal shootings; if that’s the police’s plan, it doesn’t seem to be working very well.

Intranasal ghrelin vaccine prevents obesity in mice.

Gene drive testing thwarted when organisms quickly develop resistance. There goes that idea.

New poll: Majority of Europeans support banning Muslim immigration. It’s an Internet-based poll, which is always cause for suspicion, but they seem to be a reputable organization and not the sort of group whose results are 100% due to trolling by 4chan, plus it’s consistent with some other results. Still pretty shocking and an existential-terror-level reminder of partisan bubbles. Also: Rasmussen finds most Americans support Trump’s refugee ban order.

Closely related: M.G. Miles makes the case for banning Muslim immigration. Maybe the first person I have seen make this case in a principled way; everyone else just seems to be screaming about stuff and demanding their readers reinterpret it into argument form. Also, he uses the word “terrorism” zero times, which seems like the correct number of times for a case of this sort. This is what people should be debating and responding to. Rebuttals by Americans would probably want to start with the differences between Muslim immigrants to Europe and Muslim immigrants to the US – Miles discusses the European case, but by my understanding these are very different populations with very different outcomes.

Second Enumerations podcast: Grognor reading interesting essays.

SSRN: Extreme Protest Tactics Reduce Popular Support For Social Movements: “We find across three experiments that extreme protest tactics decreased popular support for a given cause because they reduced feelings of identification with the movement. Though this effect obtained in tests of popular responses to extreme tactics used by animal rights, Black Lives Matter, and anti-Trump protests (Studies 1-3), we found that self-identified political activists were willing to use extreme tactics because they believed them to be effective for recruiting popular support.” Cf. The Toxoplasma Of Rage. (h/t Dain)

The Cagots were an underclass of people in medieval France whom everyone hated, with various purity laws around how decent people weren’t allowed to associate with/marry/touch/go near them. In the 1500s, the Pope personally intervened to tell the French to stop persecuting them, but the French ignored him and persecuted them more than ever. As far as anyone can tell, they looked, spoke, and acted just like everyone else, and exactly how they became so despised is one of the minor mysteries of medieval history.
ratty  yvain  ssc  links  commentary  multi  education  psychometrics  class  usa  regularizer  twitter  memes(ew)  nootropics  study  summary  games  thiel  government  anglo  california  pic  physics  interdisciplinary  complex-systems  models  map-territory  epistemic  science  social-science  org:nat  paternal-age  genetics  genetic-load  genomics  parasites-microbiome  data  crime  trends  criminal-justice  politics  culture-war  medicine  obesity  model-organism  geoengineering  CRISPR  unintended-consequences  europe  poll  migrant-crisis  migration  policy  islam  rhetoric  attaq  audio  podcast  postrat  subculture  medieval  gallic  tribalism  thinking  tactics  anthropology  meta:rhetoric  persuasion 
february 2017 by nhaliday
Information Processing: Machine Dreams
This is a controversial book because it demolishes not just the conventional history of the discipline, but its foundational assumptions. For example, once you start thinking about the information processing requirements that each agent (or even the entire system) must satisfy to find the optimal neoclassical equilibrium points, you realize the task is impossible. In fact, in some cases it has been rigorously shown to be beyond the capability of any universal Turing machine. Certainly, it seems beyond the plausible capabilities of a primitive species like homo sapiens. Once this bounded rationality (see also here) is taken into account, the whole notion of optimality of market equilibrium becomes far-fetched and speculative. It cannot be justified in any formal sense, and therefore cries out for experimental justification, which is not to be found.

I like this quote: This polymath who prognosticated that "science and technology would shift from a past emphasis on subjects of motion, force and energy to a future emphasis on subjects of communications, organization, programming and control," was spot on the money.
hsu  scitariat  economics  cs  computation  interdisciplinary  map-territory  models  market-failure  von-neumann  giants  history  quotes  links  debate  critique  review  big-picture  turing  heterodox  complex-systems  lens  s:*  books  🎩  thinking  markets  bounded-cognition 
february 2017 by nhaliday
Information Processing: How Brexit was won, and the unreasonable effectiveness of physicists
‘If you don’t get this elementary, but mildly unnatural, mathematics of elementary probability into your repertoire, then you go through a long life like a one-legged man in an ass-kicking contest. You’re giving a huge advantage to everybody else. One of the advantages of a fellow like Buffett … is that he automatically thinks in terms of decision trees and the elementary math of permutations and combinations… It’s not that hard to learn. What is hard is to get so you use it routinely almost every day of your life. The Fermat/Pascal system is dramatically consonant with the way that the world works. And it’s fundamental truth. So you simply have to have the technique…

‘One of the things that influenced me greatly was studying physics… If I were running the world, people who are qualified to do physics would not be allowed to elect out of taking it. I think that even people who aren’t [expecting to] go near physics and engineering learn a thinking system in physics that is not learned so well anywhere else… The tradition of always looking for the answer in the most fundamental way available – that is a great tradition.’ --- Charlie Munger, Warren Buffet’s partner.

...

If you want to make big improvements in communication, my advice is – hire physicists, not communications people from normal companies, and never believe what advertising companies tell you about ‘data’ unless you can independently verify it. Physics, mathematics, and computer science are domains in which there are real experts, unlike macro-economic forecasting which satisfies neither of the necessary conditions – 1) enough structure in the information to enable good predictions, 2) conditions for good fast feedback and learning. Physicists and mathematicians regularly invade other fields but other fields do not invade theirs so we can see which fields are hardest for very talented people. It is no surprise that they can successfully invade politics and devise things that rout those who wrongly think they know what they are doing. Vote Leave paid very close attention to real experts. ...

More important than technology is the mindset – the hard discipline of obeying Richard Feynman’s advice: ‘The most important thing is not to fool yourself and you are the easiest person to fool.’ They were a hard floor on ‘fooling yourself’ and I empowered them to challenge everybody including me. They saved me from many bad decisions even though they had zero experience in politics and they forced me to change how I made important decisions like what got what money. We either operated scientifically or knew we were not, which is itself very useful knowledge. (One of the things they did was review the entire literature to see what reliable studies have been done on ‘what works’ in politics and what numbers are reliable.) Charlie Munger is one half of the most successful investment partnership in world history. He advises people – hire physicists. It works and the real prize is not the technology but a culture of making decisions in a rational way and systematically avoiding normal ways of fooling yourself as much as possible. This is very far from normal politics.
albion  hsu  scitariat  politics  strategy  tactics  recruiting  stories  reflection  britain  brexit  data-science  physics  interdisciplinary  impact  arbitrage  spock  discipline  clarity  lens  thick-thin  quotes  commentary  tetlock  meta:prediction  wonkish  complex-systems  intricacy  systematic-ad-hoc  realness  current-events  info-dynamics  unaffiliated 
january 2017 by nhaliday
Information Processing: Is science self-correcting?
A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc. can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character.
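
A minimal sketch of such a toy model (my construction; the mechanisms are stand-ins for the ones listed above): scientists Bayes-update on noisy experiments, but results contradicting the current consensus get filtered out of the literature with some probability.

```python
import random

def simulate(n=100, steps=300, accuracy=0.55, filtering=0.0, init_belief=0.3, seed=0):
    random.seed(seed)
    beliefs = [init_belief] * n      # each scientist's P(hypothesis true); it IS true
    for _ in range(steps):
        consensus = sum(beliefs) / n > 0.5
        for i in range(n):
            result = random.random() < accuracy   # experiment reports truth w.p. accuracy
            if result != consensus and random.random() < filtering:
                continue                          # contrarian result never published
            like = accuracy if result else 1 - accuracy
            p = beliefs[i]
            beliefs[i] = p * like / (p * like + (1 - p) * (1 - like))
    return sum(b > 0.5 for b in beliefs) / n      # fraction holding the true belief

print(simulate(filtering=0.0))    # ~0.9: weakly filtered science mostly self-corrects
print(simulate(filtering=0.95))   # ~0.0: an initially false consensus locks in
```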
hsu  scitariat  ioannidis  science  meta:science  error  commentary  physics  limits  oscillation  models  equilibrium  bounded-cognition  complex-systems  being-right  info-dynamics  the-trenches  truth 
january 2017 by nhaliday
The Experts | West Hunter
It seems to me that not all people called experts actually are. In fact, there are whole fields in which none of the experts are experts. But let’s try to define terms.

...

Along these lines, I’ve read Tetlock’s book, Expert Political Judgment. A funny, funny, book. I will have more to say on that later.

USSR: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60760
iraq war:
https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60653
Of course it is how Bush sold the war. Selling the war involved statements to the press, leaks, etc., not a Congressional resolution, which is the product of that selling job. Leaks to that lying slut at the New York Times, Judith Miller, for example.

Actively seeking a nuclear weapons capacity would have meant making fissionables, or building facilities to make fissionables. That hadn’t happened, and it was impossible for Iraq to have done so, given that any such effort had to be undetectable (because we hadn’t detected it with our ‘national technical means’, spy satellites and such) and given their limited resources in men, money, and materiel. Iraq had done nothing along these lines. Absolutely nothing.

https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60674
You don’t even know what yellow cake is. It is true that Saddam had had a nuclear program before the Gulf War, although it had not come too close to a weapon – but that program had been destroyed, and could not be rebuilt: (a) in a way invisible to our spy satellites, and (b) with no money, because of sanctions.

The 550 tons of uranium oxide – unenriched uranium oxide – was a leftover from the earlier program. It was under UN seal, and those seals had not been broken. Without enrichment, and without a means of enrichment, it was useless.

What’s the point of pushing this nonsense? Somebody paying you?

The President was a moron, the Government of the United States proved itself a pack of fools, as did the New York Times, the Washington Post, Congress, virtually all of the pundits, etc. etc. And undoubtedly you were a fool as well: you might as well deal with it, because the truth is not going to go away.

interesting discussion of battle fatigue and desertion: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60709
Actually, I don’t know how Freudian those Army psychologists were in 1944: they may have been useless in some other way. The gist is that in the European theater, for example in the Normandy campaign, the US had a much higher rate of psychological casualties than the Germans. “Both British and American psychiatrists were struck by the ‘apparently few cases of psychoneurosis’ among German prisoners of war.” They were lower in the Red Army, as well.

In the Pacific theater, combat fatigue was even worse for US soldiers, but rare among the Japanese.

...

The infantry took most of the casualties – it was a very dangerous, unpleasant job. People didn’t like being in the infantry. In the American Army, and to a lesser extent, the British Army, getting into medical evacuation channels was a way to avoid getting killed. Not so much in the German Army: suspected malingerers were shot. In the American Army, they weren’t. That’s the most important difference between the Germans and Americans affecting the ‘combat fatigue’ rate – the Germans didn’t put up with it. They did have some procedures, but they all ended up putting the guy back in combat fairly rapidly.

Even for desertion, only ONE American soldier was executed. In the German Army, 20,000. It makes a difference. We ran a soft war: since we ended up with whole divisions out of the fight, we probably would have done better (won faster, lost fewer guys) if we had been harsher on malingerers and deserters.

more on emdees: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60697
As for your idea that doctors improve with age, I doubt it. So do some other people: for example, in this article in Annals of Internal Medicine (Systematic review: the relationship between clinical experience and quality of health care), they say “Overall, 32 of the 62 (52%) evaluations reported decreasing performance with increasing years in practice for all outcomes assessed; 13 (21%) reported decreasing performance with increasing experience for some outcomes but no association for others; 2 (3%) reported that performance initially increased with increasing experience, peaked, and then decreased (concave relationship); 13 (21%) reported no association; 1 (2%) reported increasing performance with increasing years in practice for some outcomes but no association for others; and 1 (2%) reported increasing performance with increasing years in practice for all outcomes. Results did not change substantially when the analysis was restricted to studies that used the most objective outcome measures.”

I don’t know how well that 25-year-old doctor with an IQ of 160 would do, never having met anyone like that. I do know a mathematician who has an IQ around 160 and was married to a doctor, but she* dumped him after he put her through med school and he came down with lymphoma.

And that libertarian friend I mentioned, who said that although quarantine would have worked against AIDS, better that we didn’t, despite the extra hundreds of thousands of deaths that resulted – why, he’s a doctor.

*all the other fifth-years in her program also dumped their spouses. Catching?

climate change: https://westhunt.wordpress.com/2014/10/20/the-experts/#comment-60787
I think that predicting climate is difficult, considering the complex feedback loops, but I know that almost every right-wing thing said about it that I have checked out turned out to be false.
west-hunter  rant  discussion  social-science  error  history  psychology  military  war  multi  mostly-modern  bounded-cognition  martial  crooked  meta:war  realness  being-right  emotion  scitariat  info-dynamics  poast  world-war  truth  tetlock  alt-inst  expert-experience  epidemiology  public-health  spreading  disease  sex  sexuality  iraq-syria  gender  gender-diff  parenting  usa  europe  germanic  psychiatry  courage  medicine  meta:medicine  age-generation  aging  climate-change  track-record  russia  communism  economics  correlation  nuclear  arms  randy-ayndy  study  evidence-based  data  time  reason  ability-competence  complex-systems  politics  ideology  roots  government  elite  impetus 
january 2017 by nhaliday
Information Processing: Brexit in the Multiverse: Dominic Cummings on the Vote Leave campaign
some other stuff from same post:
Generally the better educated are more prone to irrational political opinions and political hysteria than the worse educated far from power. Why? In the field of political opinion they are more driven by fashion, a gang mentality, and the desire to pose about moral and political questions all of which exacerbate cognitive biases, encourage groupthink, and reduce accuracy. Those on average incomes are less likely to express political views to send signals; political views are much less important for signalling to one’s immediate in-group when you are on 20k a year. The former tend to see such questions in more general and abstract terms, and are more insulated from immediate worries about money. The latter tend to see such questions in more concrete and specific terms and ask ‘how does this affect me?’. The former live amid the emotional waves that ripple around powerful and tightly linked self-reinforcing networks. These waves rarely permeate the barrier around insiders and touch others.
hsu  scitariat  politics  polisci  government  brexit  britain  people  profile  commentary  counterfactual  albion  meta:prediction  tetlock  wonkish  complex-systems  current-events  info-dynamics  unaffiliated  education  class  epistemic  biases  organizing 
january 2017 by nhaliday
Public perceptions of expert disagreement: Bias and incompetence or a complex and random world? - Sep 07, 2015
People with low education, or with low self-reported topic knowledge, were most likely to attribute disputes to expert incompetence. People with higher self-reported knowledge tended to attribute disputes to expert bias due to financial or ideological reasons. The more highly educated and cognitively able were most likely to attribute disputes to natural factors, such as the irreducible complexity and randomness of the phenomenon.

reminds me of Hanson's interpretation of political disagreement: poor data, complex phenomena with high causal density
study  psychology  social-psych  rationality  iq  expert  info-foraging  decision-making  epistemic  albion  intricacy  wonkish  biases  self-report  complex-systems  thick-thin  stylized-facts  descriptive  ideology  info-dynamics  chart  truth  expert-experience  reason 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f: S^n → S^n. Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
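
A brute-force sketch of that partition test for tiny systems (my toy simplification: “don’t depend very much” hardened into “don’t depend at all”):

```python
from itertools import product, combinations

def block_depends_on(f, n, block, other):
    # does any coordinate in `block` ever change when we flip a bit in `other`?
    for x in product([0, 1], repeat=n):
        for j in other:
            y = list(x)
            y[j] ^= 1
            fx, fy = f(tuple(x)), f(tuple(y))
            if any(fx[i] != fy[i] for i in block):
                return True
    return False

def integrates_globally(f, n):
    # decoupled in both directions under some balanced bipartition = no integration
    for a in combinations(range(n), n // 2):
        A, B = set(a), set(range(n)) - set(a)
        if not block_depends_on(f, n, A, B) and not block_depends_on(f, n, B, A):
            return False
    return True

coupled = lambda x: tuple(x[i] ^ x[(i + 1) % 4] for i in range(4))  # XOR with neighbor
local = lambda x: tuple(1 - xi for xi in x)                         # coordinatewise NOT
print(integrates_globally(coupled, 4), integrates_globally(local, 4))  # True False
```

Actual Φ replaces the yes/no dependence test with an information quantity minimized over partitions, but the shape of the computation is the same.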
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition 
january 2017 by nhaliday
Improving Economic Research | askblog
To make a long story short:

1. Economic phenomena are rife with causal density. Theories make predictions assuming “other things equal,” but other things are never equal.

2. When I was a student, the solution was thought to be multiple regression analysis. You entered a bunch of variables into an estimated equation, and in doing so you “controlled for” those variables and thereby created conditions of “other things equal.” However, in 1978, Edward Leamer pointed out that actual practice diverges from theory. The researcher typically undertakes a lot of exploratory data analysis before reporting a final result. This process of exploratory analysis creates a bias toward finding the result desired by the researcher, rather than achieving a scientific ideal of objectivity.

3. In recent decades, the approach has shifted toward “natural experiments” and laboratory experiments. These suffer from other problems. The experimental population may not be representative. Even if this problem is not present, studies that offer definitive results are more likely to be published but consequently less likely to be replicated.
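
Leamer’s point about exploratory analysis is easy to demonstrate by simulation (a toy of my own, not his: the true effect is zero, and the analyst keeps the best of twenty specifications):

```python
import math
import random

def t_stat(x, y):
    # t-statistic of the slope in a simple regression of y on x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    resid = [b - my - beta * (a - mx) for a, b in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return beta / se

random.seed(1)
n, k, trials, hits = 50, 20, 500, 0
for _ in range(trials):
    y = [random.gauss(0, 1) for _ in range(n)]
    # specification search: try k regressors (all pure noise), report the best
    best = max(abs(t_stat([random.gauss(0, 1) for _ in range(n)], y)) for _ in range(k))
    hits += best > 2.0
print(hits / trials)   # ~0.6, versus the nominal ~0.05
```

The nominal 5% false-positive rate inflates to roughly 1 - 0.95^20 ≈ 64% once the best of twenty tries is kept.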
econotariat  cracker-econ  study  summary  methodology  economics  causation  social-science  best-practices  academia  hypothesis-testing  thick-thin  density  replication  complex-systems  roots  noise-structure  endo-exo  info-dynamics  natural-experiment  endogenous-exogenous 
january 2017 by nhaliday
Edge.org: Q-Bio, the most interesting recent [scientific] news
Applied mathematicians and theoretical physicists are rushing to develop new sophisticated tools that can capture the other, non-genomic challenges posed in trying to quantify biology. One of these challenges is that the number of individuals in a community may be large, but not as large as there are molecules of gas in your lungs, for example. So the traditional tools of physics based on statistical modeling have to be upgraded to deal with the large fluctuations encountered, such as in the number of proteins in a cell or individuals in an ecosystem. Another fundamental challenge is that living systems need an energy source.

They are inherently out of thermodynamic equilibrium, and so cannot be described by the century-old tools of statistical thermodynamics developed by Einstein, Boltzmann and Gibbs. Stanislaw Ulam, a mathematician who helped originate the basic principle behind the hydrogen bomb, once quipped, “Ask not what physics can do for biology. Ask what biology can do for physics.” Today, the answer is clear: biology is forcing physicists to develop new experimental and theoretical tools to explore living cells in action.
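
The fluctuation point in one line of arithmetic (the population counts are illustrative): Poisson-scale noise is about 1/sqrt(N) of the mean, negligible for a gas, dominant in a cell.

```python
import math

for label, N in [("gas molecules in your lungs", 1e22),
                 ("copies of one protein in a cell", 1e3),
                 ("individuals in a small ecosystem", 1e2)]:
    print(f"{label}: N = {N:.0e}, relative fluctuation ~ {1 / math.sqrt(N):.0e}")
```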
bio  trends  science  interdisciplinary  physics  thermo  org:edge  giants  einstein  boltzmann  stat-mech  equilibrium  complex-systems  cybernetics 
november 2016 by nhaliday
Wizard War | West Hunter
Some of his successes were classically thin, as when he correctly analyzed the German two-beam navigation system (Knickebein). He realized that the area of overlap of two beams could be narrow, far narrower than suggested by the Rayleigh criterion.

During the early struggle with the Germans, the “Battle of the Beams”, he personally read all the relevant Enigma messages. They piled up on his desk, but he could almost always pull out the relevant message, since he remembered the date, which typewriter it had been typed on, and the kind of typewriter ribbon or carbon. When asked, he could usually pick out the message in question in seconds. This system was deliberate: Jones believed that the larger the field any one man could cover, the greater the chance of one brain connecting two facts – the classic approach to a ‘thick’ problem, not that anyone seems to know that anymore.

All that information churning in his head produced results, enough so that his bureaucratic rivals concluded that he had some special unshared source of information. They made at least three attempts to infiltrate his Section to locate this great undisclosed source. An officer from Bletchley Park was offered on a part-time basis with that secret objective. After a month or so he was called back, and assured his superiors that there was no trace of anything other than what they already knew. When someone asked ‘Then how does Jones do it? ‘ he replied ‘Well, I suppose, Sir, he thinks!’
west-hunter  books  review  history  stories  problem-solving  frontier  thick-thin  intel  mostly-modern  the-trenches  complex-systems  applications  scitariat  info-dynamics  world-war  theory-practice  intersection-connectedness  quotes  alt-inst  inference  apollonian-dionysian  consilience 
november 2016 by nhaliday
Thick and thin | West Hunter
There is a spectrum of problem-solving, ranging from, at one extreme, simplicity and clear chains of logical reasoning (sometimes long chains) and, at the other, building a picture by sifting through a vast mass of evidence of varying quality. I will give some examples. Just the other day, when I was conferring, conversing and otherwise hobnobbing with my fellow physicists, I mentioned high-altitude lightning, sprites and elves and blue jets. I said that you could think of a thundercloud as a vertical dipole, with an electric field that decreased as the cube of altitude, while the breakdown voltage varied with air pressure, which declines exponentially with altitude. At which point the prof I was talking to said “and so the curves must cross!”. That’s how physicists think, and it can be very effective. The amount of information required to solve the problem is not very large. I call this a ‘thin’ problem.
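
The “curves must cross” observation, sketched numerically (the constants are made up to show the shape, not real atmospheric values): a 1/h^3 power law must eventually overtake a decaying exponential, because the exponential falls faster.

```python
import math

def dipole_field(h_km, E0=1e8):                 # falls as the cube of altitude
    return E0 / h_km ** 3

def breakdown_threshold(h_km, B0=3e6, H=8.0):   # falls with pressure, scale height ~8 km
    return B0 * math.exp(-h_km / H)

for h in range(10, 101, 10):
    if dipole_field(h) > breakdown_threshold(h):
        print(f"curves cross near {h} km: the field exceeds the breakdown threshold")
        break
```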

...

In another example at the messy end of the spectrum, Joe Rochefort, running Hypo in the spring of 1942, needed to figure out Japanese plans. He had an ever-growing mass of Japanese radio intercepts, some of which were partially decrypted – say, one word of five, with luck. He had data from radio direction-finding; his people were beginning to be able to recognize particular Japanese radio operators by their ‘fist’. He’d studied in Japan, knew the Japanese well. He had plenty of Navy experience – knew what was possible. I would call this a classic ‘thick’ problem, one in which an analyst needs to deal with an enormous amount of data of varying quality. Being smart is necessary but not sufficient: you also need to know lots of stuff.

...

Nimitz believed Rochefort – who was correct. Because of that, we managed to prevail at Midway, losing one carrier and one destroyer while the Japanese lost four carriers and a heavy cruiser*. As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

The usual explanation of Joe Rochefort’s fall argues that John Redman’s (head of OP-20-G, the Navy’s main signals intelligence and cryptanalysis group) geographical proximity to Navy headquarters was a key factor in winning the bureaucratic struggle, along with his brother’s influence (Rear Admiral Joseph Redman). That and being a shameless liar.

Personally, I wonder if part of the problem is the great difficulty of explaining the analysis of a thick problem to someone without a similar depth of knowledge. At best, they believe you because you’ve been right in the past. Or, sometimes, once you have developed the answer, there is a ‘thin’ way of confirming your answer – as when Rochefort took Jasper Holmes’s suggestion and had Midway broadcast an uncoded complaint about the failure of their distillation system – soon followed by a Japanese report that ‘AF’ was short of water.

Most problems in the social sciences are ‘thick’, and unfortunately, almost all of the researchers are as well. There are a lot more Redmans than Rocheforts.
west-hunter  thinking  things  science  social-science  rant  problem-solving  innovation  pre-2013  metabuch  frontier  thick-thin  stories  intel  mostly-modern  history  flexibility  rigidity  complex-systems  metameta  s:*  noise-structure  discovery  applications  scitariat  info-dynamics  world-war  analytical-holistic  the-trenches  creative  theory-practice  being-right  management  track-record  alien-character  darwinian  old-anglo  giants  magnitude  intersection-connectedness  knowledge  alt-inst  sky  physics  electromag  oceans  military  statesmen  big-peeps  organizing  communication  fire  inference  apollonian-dionysian  consilience  bio  evolution 
november 2016 by nhaliday
The Day Before Forever | West Hunter
Yesterday, I was discussing the possibilities concerning slowing, or reversing, aging – why it’s obviously possible, although likely a hard engineering problem. Why partial successes would be valuable, why making use of the evolutionary theory of senescence should help, why we should look at whales and porcupines as well as Jeanne Calment, etc., etc. I talked a long time – it’s a subject that has interested me for many years.

But there’s one big question: why are the powers that be utterly uninterested?

https://westhunt.wordpress.com/2017/07/03/the-best-things-in-life-are-cheap-today/
What if you could buy an extra year of youth for a million bucks (real cost)? Clearly this country (or any country) can’t afford that for everyone. Some people could: and I think it would stick in many people’s craw. Even worse if they do it by harvesting the pineal glands of children and using them to manufacture a waxy nodule that forfends age.

This is something like the days of old, pre-industrial times. Back then, the expensive, effective life-extender was food in a famine year.

https://westhunt.wordpress.com/2017/04/11/the-big-picture/
Once upon a time, I wrote a long spiel on life extension – before it was cool, apparently. I sent it off to an interested friend [a science fiction editor] who was at that time collaborating on a book with a certain politician. That politician – Speaker of the House, but that could be anyone of thousands of guys, right? – ran into my spiel and read it. His immediate reaction was that greatly extending the healthy human life span would be horrible – it would bankrupt Social Security! Nice to know that guys running the show always have the big picture in mind.

Reminds me of an sf story [Trouble with Lichen] in which something of that sort is invented and denounced by the British trade unions, as a plot to keep them working forever & never retire.

https://westhunt.wordpress.com/2015/04/16/he-still-has-that-hair/
He’s got the argument backward: sure, natural selection has not favored perfect repair, so says the evolutionary theory of senescence. If it had, then we could perhaps conclude that perfect repair was very hard to achieve, since we don’t see it, at least not in complex animals.* But since it was not favored, since natural selection never even tried, it may not be that difficult.

Any cost-free longevity gene that made you live to be 120 would have had a small payoff, since various hazards were fairly likely to get you by then anyway… And even if it would have been favored, a similar gene that cost a nickel would not have been. Yet we can afford a nickel.

There are useful natural examples: we don’t have to start from scratch. Bowhead whales live over 200 years: I’m not too proud to learn from them.

Lastly, this would take a lot of work. So what?

*Although we can invent things that evolution can’t – we don’t insist that all the intermediate stages be viable.

https://westhunt.wordpress.com/2013/12/09/aging/
https://westhunt.wordpress.com/2014/09/22/suspicious-minds/

doesn't think much of Aubrey de Grey: https://westhunt.wordpress.com/2013/07/21/of-mice-and-men/#comment-15832
I wouldn’t rely on Aubrey de Grey.

It might be easier to fix if we invested more than a millionth of a percent of GNP on longevity research. It’s doable, but hardly anyone is interested. I doubt if most people, including most MDs and biologists, even know that it’s theoretically possible.

I suppose I should do something about it. Some of our recent work (Henry and me) suggests that people of sub-Saharan African descent might offer some clues – their funny pattern of high paternal age probably causes the late-life mortality crossover; it couldn’t hurt to know the mechanisms involved.

Make Room! Make Room!: https://westhunt.wordpress.com/2015/06/24/make-room-make-room/
There is a recent article in Phys Rev Letters (“Programed Death is Favored by Natural Selection in Spatial Systems”) arguing that aging is an adaptation – natural selection has favored mechanisms that get rid of useless old farts. I can think of other people that have argued for this – some pretty smart cookies (August Weismann, for example, although he later abandoned the idea) and at the other end of the spectrum utter loons like Martin Blaser.

...

There might could be mutations that significantly extended lifespan but had consequences that were bad for fitness, at least in past environments – but that isn’t too likely if mutational accumulation and antagonistic pleiotropy are the key drivers of senescence in humans. As I said, we’ve never seen any.

more on Martin Blaser:
https://westhunt.wordpress.com/2013/01/22/nasty-brutish-but-not-that-short/#comment-7514
This is off topic, but I just read Germs Are Us and was struck by the quote from Martin Blaser “[causing nothing but harm] isn’t how evolution works” […] “H. pylori is an ancestral component of humanity.”
That seems to be the assumption that the inevitable trend is toward symbiosis that I recall from Ewald’s “Plague Time”. My recollection is that it’s false if the pathogen can easily jump to another host. The bulk of the New Yorker article reminded me of Seth Roberts.

I have corresponded at length with Blaser. He’s a damn fool, not just on this. Speaking of, would there be general interest in listing all the damn fools in public life? Of course making the short list would be easier.

https://westhunt.wordpress.com/2013/01/18/dirty-old-men/#comment-64117
enhancement  longevity  aging  discussion  west-hunter  scitariat  multi  thermo  death  money  big-picture  reflection  bounded-cognition  info-dynamics  scifi-fantasy  food  pinker  thinking  evolution  genetics  nature  oceans  inequality  troll  lol  chart  model-organism  shift  smoothness  🌞  🔬  track-record  low-hanging  aphorism  ideas  speculation  complex-systems  volo-avolo  poast  people  paternal-age  life-history  africa  natural-experiment  mutation  genetic-load  questions  study  summary  critique  org:nat  commentary  parasites-microbiome  disease  elite  tradeoffs  homo-hetero  contrarianism  history  medieval  lived-experience  EEA  modernity  malthus  optimization 
november 2016 by nhaliday
Information Processing: What is medicine’s 5 sigma?
I'm not aware of this history you reference, but I am only a recent entrant into this field. On the other hand Ioannidis is both a long time genomics researcher and someone who does meta-research on science, so he should know. He may have even written a paper on this subject -- I seem to recall he had hard numbers on the rate of replication of candidate gene studies and claimed it was in the low percents. BTW, this result shows that the vaunted intuition of biomedical types about "how things really work" in the human body is worth very little. We are much better off, in my opinion, relying on machine learning methods and brute force statistical power than priors based on, e.g., knowledge of biochemical pathways or cartoon models of cell function. (Even though such things are sometimes deemed sufficient to raise ~$100m in biotech investment!) This situation may change in the future but the record from the first decade of the 21st century is there for any serious scholar of the scientific method to study.

Both Ioannidis and I (through separate and independent analyses) feel that modern genomics is a good example of biomedical science that (now) actually works and produces results that replicate with relatively high confidence. It should be a model for other areas ...
hsu  replication  science  medicine  scitariat  meta:science  evidence-based  ioannidis  video  interview  bio  genomics  lens  methodology  thick-thin  candidate-gene  hypothesis-testing  complex-systems  stat-power  bounded-cognition  postmortem  info-dynamics  stats 
november 2016 by nhaliday
Mandelbrot (and Hudson’s) The (mis)Behaviour of Markets: A Fractal View of Risk, Ruin, and Reward | EVOLVING ECONOMICS
If you have read Nassim Taleb’s The Black Swan you will have come across some of Benoit Mandelbrot’s ideas. However, Mandelbrot and Hudson’s The (mis)Behaviour of Markets: A Fractal View of Risk, Ruin, and Reward offers a much clearer critique of the underpinnings of modern financial theory (there are many parts of The Black Swan where I’m still not sure I understand what Taleb is saying). Mandelbrot describes and pulls apart the contributions of Markowitz, Sharpe, Black, Scholes and friends in a way likely understandable to the intelligent lay reader. I expect that might flow from science journalist Richard Hudson’s involvement in writing the book.

- interesting parable about lakes and markets (but power laws aren't memoryless...?)
- yeah I think that's completely wrong actually. the important property of power laws is the lack of finite higher-order moments.

based off http://www.iima.ac.in/~jrvarma/blog/index.cgi/2008/12/21/ I think he really did mean a power law (x = 100/sqrt(r) with r uniform on (0,1] => pdf is p(x) ~ |dr/dx| = 2e4/x^3)

edit: ah I get it now, for X ~ p(x) = 2/x^3 on [1,inf), we have E[X|X > k] = 2k, so not memoryless, but rather subject to a "slippery slope"
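
A quick Monte Carlo check of that conditional mean (inverse-CDF sampling; the “slippery slope” is the expected overshoot growing with the threshold):

```python
import random

# X with pdf p(x) = 2/x^3 on [1, inf) has CDF F(x) = 1 - x^-2,
# so X = (1 - U)^(-1/2) for uniform U; expect E[X | X > k] ~ 2k.
random.seed(0)
xs = [(1 - random.random()) ** -0.5 for _ in range(10 ** 6)]

for k in (1, 2, 5):
    tail = [x for x in xs if x > k]
    print(k, round(sum(tail) / len(tail), 2))   # ~2, ~4, ~10
```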
books  summary  finance  map-territory  tetlock  review  econotariat  distribution  parable  blowhards  multi  risk  decision-theory  tails  meta:prediction  complex-systems  broad-econ  power-law 
november 2016 by nhaliday
Paul Krugman Is an "Evolution Groupie" - Evonomics
Let me give you an example. William Hamilton’s wonderfully named paper “Geometry for the Selfish Herd” imagines a group of frogs sitting at the edge of a circular pond, from which a snake may emerge – and he supposes that the snake will grab and eat the nearest frog. Where will the frogs sit? To compress his argument, Hamilton points out that if there are two groups of frogs around the pool, each group has an equal chance of being targeted, and so does each frog within each group – which means that the chance of being eaten is less if you are a frog in the larger group. Thus if you are a frog trying to maximize your chance of survival, you will want to be part of the larger group; and the equilibrium must involve clumping of all the frogs as close together as possible.

Notice what is missing from this analysis. Hamilton does not talk about the evolutionary dynamics by which frogs might acquire a sit-with-the-other-frogs instinct; he does not take us through the intermediate steps along the evolutionary path in which frogs had not yet completely “realized” that they should stay with the herd. Why not? Because to do so would involve him in enormous complications that are basically irrelevant to his point, whereas – ahem – leapfrogging straight over these difficulties to look at the equilibrium in which all frogs maximize their chances given what the other frogs do is a very parsimonious, sharp-edged way of gaining insight.
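
Hamilton’s argument is easy to animate (a toy of mine, not his exact model): a frog’s risk is the length of its Voronoi cell on the pond edge, and a frog may leapfrog over a neighbor into the gap beyond whenever that shrinks its cell.

```python
import random

C, N = 1.0, 20  # shoreline circumference, number of frogs

def cell(pos, i):
    left = (pos[i] - pos[i - 1]) % C
    right = (pos[(i + 1) % N] - pos[i]) % C
    return (left + right) / 2        # this frog's share of the snake's landing spots

def largest_gap(pos):
    return max((pos[(i + 1) % N] - pos[i]) % C for i in range(N))

def step(pos):
    i = random.randrange(N)
    # landing anywhere inside a gap yields a cell of half that gap
    gap_left = (pos[i - 1] - pos[i - 2]) % C               # gap beyond the left neighbor
    gap_right = (pos[(i + 2) % N] - pos[(i + 1) % N]) % C  # gap beyond the right neighbor
    best, side = min((gap_left, -1), (gap_right, +1))
    if best / 2 < cell(pos, i):                            # jump only if it reduces risk
        pos[i] = (pos[i - 1] - best / 2) % C if side < 0 else (pos[(i + 1) % N] + best / 2) % C
        pos.sort()

random.seed(0)
pos = sorted(random.random() for _ in range(N))
print("largest empty stretch before:", round(largest_gap(pos), 3))
for _ in range(2000):
    step(pos)
print("largest empty stretch after: ", round(largest_gap(pos), 3))  # grows as frogs clump
```

Since the cells always tile the circle, a jump never lowers total risk; it only reassigns it, which is exactly what makes the huddled configuration the selfish equilibrium.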
essay  economics  evolution  interdisciplinary  methodology  reflection  krugman  heterodox  🎩  econotariat  c:**  equilibrium  parsimony  complex-systems  lens  competition  news  org:sci  org:mag 
november 2016 by nhaliday