nhaliday + composition-decomposition

Links 3/19: Linkguini | Slate Star Codex
How did the descendants of the Mayan Indians end up in the Eastern Orthodox Church?

Does Parental Quality Matter? A study using three sources of parental variation that are mostly immune to genetic confounding finds that “the strong parent-child correlation in education is largely causal”. For example, “the parent-child correlation in education is stronger with the parent that spends more time with the child”.

Before and after pictures of tech leaders like Jeff Bezos, Elon Musk, and Sergey Brin suggest they’re taking supplemental testosterone. And though it may help them keep looking young, Palladium points out that there might be other effects from having some of our most powerful businessmen on a hormone that increases risk-taking and ambition. They ask whether the new availability of testosterone supplements is prolonging Silicon Valley businessmen’s “brash entrepreneur” phase well past the point where they would normally become mature respectable elders. But it also hints at an almost opposite take: average testosterone levels have been falling for decades, so at this point these businessmen would be the only “normal” (by 1950s standards) men out there, and everyone else would be unprecedentedly risk-averse and boring. Paging Peter Thiel and everyone else who talks about how things “just worked better” in Eisenhower’s day.

China’s SesameCredit social monitoring system, widely portrayed as dystopian, has an 80% approval rate in China (vs. 19% neutral and 1% disapproval). The researchers admit that although all data is confidential and they are not affiliated with the Chinese government, their participants might not have believed those assurances enough to answer honestly.

I know how much you guys love attacking EAs for “pathological altruism” or whatever terms you’re using nowadays, so here’s an article where rationalist community member John Beshir describes his experience getting malaria on purpose to help researchers test a vaccine.

Some evidence against the theory that missing fathers cause earlier menarche.

John Nerst of EverythingStudies’ political compass.
ratty  yvain  ssc  links  multi  biodet  behavioral-gen  regularizer  causation  contrarianism  education  correlation  parenting  developmental  direct-indirect  time  religion  christianity  eastern-europe  russia  latin-america  other-xtian  endocrine  trends  malaise  stagnation  thiel  barons  tech  sv  business  rot  zeitgeist  outcome-risk  critique  environmental-effects  poll  china  asia  authoritarianism  alt-inst  sentiment  policy  n-factor  individualism-collectivism  pro-rata  technocracy  managerial-state  civil-liberty  effective-altruism  subculture  wtf  disease  parasites-microbiome  patho-altruism  self-interest  lol  africa  experiment  medicine  expression-survival  things  dimensionality  degrees-of-freedom  sex  composition-decomposition  analytical-holistic  systematic-ad-hoc  coordination  alignment  cooperate-defect  politics  coalitions  ideology  left-wing  right-wing  summary  exit-voice  redistribution  randy-ayndy  welfare-state 
6 weeks ago by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Distinctly RH paintings, by contrast, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy, along with the ability to notice emotional nuance expressed facially, vocally and bodily, is an RH function, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic, and suspicion follows.

Both Oswald Spengler’s The Decline of the West and McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split-brain patients, where the LH and the RH have been surgically divided (this is sometimes done for epileptic patients), one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other for another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and the light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes an LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, an RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true: there is no single consistent set of axioms from which all other truths can be derived. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor.

Alan Turing’s proof of the undecidability of the halting problem shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure (LH), when it comes to … [more]
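
A minimal sketch of the diagonalization behind Turing's result, for concreteness; `halts` here is a hypothetical decider passed in as a parameter, not a real function:

```python
# Sketch of Turing's diagonalization. `halts` is a hypothetical total
# decider claimed to answer "does this program halt?" -- the argument
# shows no such function can exist.

def make_paradox(halts):
    """Build a program that any claimed halting-decider must misjudge."""
    def g():
        if halts(g):      # if the decider says "g halts"...
            while True:   # ...then g loops forever
                pass
        # if it says "g doesn't halt", g halts immediately
    return g

# Whatever halts(g) returns, it is wrong about g:
#   halts(g) == True  -> g runs forever
#   halts(g) == False -> g returns at once
```
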
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
[1804.04268] Incomplete Contracting and AI Alignment
We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. We first provide an overview of the incomplete contracting literature and explore parallels between this work and the problem of AI alignment. As we emphasize, misalignment between principal and agent is a core focus of economic analysis. We highlight some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. Our core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. We propose a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.
nibble  preprint  org:mat  papers  ai  ai-control  alignment  coordination  contracts  law  economics  interests  culture  institutions  number  context  behavioral-econ  composition-decomposition  rent-seeking  whole-partial-many 
april 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in the brain vs. 10^4 vacuum tubes in the largest computer at the time
- machines are faster: ~5 ms from neuron potential to neuron potential vs. ~10^-3 ms switching time for vacuum tubes
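
The ratios implied by these lecture numbers, as a back-of-envelope check:

```python
# Back-of-envelope ratios from von Neumann's numbers above.
neurons = 1e10        # neurons in the human brain
tubes = 1e4           # vacuum tubes in the largest computer of the era
neuron_ms = 5         # neuron potential to neuron potential, in ms
tube_ms = 1e-3        # vacuum-tube switching time, in ms

print(f"size ratio, natural/artificial:  {neurons / tubes:.0e}")    # 1e+06
print(f"speed ratio, artificial/natural: {neuron_ms / tube_ms:.0e}")  # 5e+03
```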

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Society of Mind - Wikipedia
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

https://en.wikipedia.org/wiki/Modularity_of_mind

The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
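
For orientation, a baseline sketch of standard modularity-based module detection (the "existing technique" the paper modifies), using networkx's greedy modularity heuristic on a toy graph; this is not the authors' wiring-cost-aware algorithm:

```python
# Baseline modularity-based community detection on a toy graph --
# a stand-in for the "standard technique", not the paper's
# cost-corrected modification.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)  # toy brain-like network
modules = greedy_modularity_communities(G)
print(len(modules), "modules; sizes:", sorted(len(m) for m in modules))
```
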
books  ideas  speculation  structure  composition-decomposition  complex-systems  neuro  ai  psychology  cog-psych  intelligence  reduction  wiki  giants  philosophy  number  cohesion  diversity  systematic-ad-hoc  detail-architecture  pdf  study  neuro-nitgrit  brain-scan  nitty-gritty  network-structure  graphs  graph-theory  models  whole-partial-many  evopsych  eden  reference  psych-architecture  article 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
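
One crude way to operationalize "citation lumpiness", purely as an illustration (this is not the Science paper's measure): the share of all citations captured by the top 1% most-cited papers.

```python
# Toy lumpiness index: share of citations going to the top 1% of papers.
# Illustrative only -- not the measure used in the cited Science paper.
import numpy as np

def top_share(citations, top_frac=0.01):
    c = np.sort(np.asarray(citations, dtype=float))[::-1]
    k = max(1, int(len(c) * top_frac))
    return c[:k].sum() / c.sum()

rng = np.random.default_rng(0)
toy_field = rng.pareto(a=1.5, size=10_000)  # heavy-tailed toy citation counts
print(f"top-1% share: {top_share(toy_field):.2f}")
```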

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
What are the Laws of Biology?
The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.

These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
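
A sketch of one motif from this list, negative auto-regulation, which both speeds the rise to steady state and buffers fluctuations in production rate; parameter values below are arbitrary:

```python
# Negative auto-regulation vs. simple regulation (arbitrary parameters).
# The classic result: the auto-regulated circuit reaches a given fraction
# of its own steady state faster.
import numpy as np
from scipy.integrate import odeint

alpha, beta, K = 1.0, 5.0, 0.5  # degradation rate, max production, repression threshold

def simple(x, t):
    return beta - alpha * x                # constant production

def autoreg(x, t):
    return beta * K / (K + x) - alpha * x  # production repressed by own product

t = np.linspace(0, 5, 500)
xs = odeint(simple, 0.0, t).ravel()
xa = odeint(autoreg, 0.0, t).ravel()

def half_rise_time(x, t):
    return t[np.argmax(x >= 0.5 * x[-1])]  # first time at half of steady state

print(half_rise_time(xs, t), half_rise_time(xa, t))  # auto-regulation is faster
```
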
scitariat  reflection  proposal  ideas  thinking  conceptual-vocab  lens  bio  complex-systems  selection  evolution  flux-stasis  network-structure  structure  composition-decomposition  IEEE  robust  signal-noise  perturbation  interdisciplinary  graphs  circuits  🌞  big-picture  hi-order-bits  nibble  synthesis 
november 2017 by nhaliday
multivariate analysis - Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian? - Cross Validated
The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological", is a bit misguided.

Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

note: there is a multivariate central limit theorem, so such applications have no problem
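
The standard counterexample is easy to check numerically: take X standard normal and flip its sign with an independent fair coin. Both marginals are exactly N(0,1), but the pair is not bivariate normal, since X + Y has a point mass at zero.

```python
# X ~ N(0,1), S = +/-1 fair coin independent of X, Y = S*X.
# Y is standard normal by symmetry, but (X, Y) is not bivariate normal:
# X + Y equals exactly 0 whenever S = -1, i.e. with probability 1/2.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
s = rng.choice([-1.0, 1.0], size=n)
y = s * x

print(y.mean(), y.std())      # ~0, ~1: marginal of Y looks standard normal
print(np.mean(x + y == 0.0))  # ~0.5: an atom at zero -- impossible for
                              # any nondegenerate bivariate normal
```
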
nibble  q-n-a  overflow  stats  math  acm  probability  distribution  gotchas  intricacy  characterization  structure  composition-decomposition  counterexample  limits  concentration-of-measure 
october 2017 by nhaliday
Genetic influences on measures of the environment: a systematic review | Psychological Medicine | Cambridge Core
Background. Traditional models of psychiatric epidemiology often assume that the relationship between individuals and their environment is unidirectional, from environment to person. Accumulating evidence from developmental and genetic studies has made this perspective increasingly untenable.

Results. We identified 55 independent studies organized into seven categories: general and specific stressful life events (SLEs), parenting as reported by child, parenting reported by parent, family environment, social support, peer interactions, and marital quality. Thirty-five environmental measures in these categories were examined by at least two studies and produced weighted heritability estimates ranging from 7% to 39%, with most falling between 15% and 35%. The weighted heritability for all environmental measures in all studies was 27%. The weighted heritability for environmental measures by rating method was: self-report 29%, informant report 26%, and direct rater or videotape observation (typically examining 10 min of behavior) 14%.
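
The heritability figures here come from twin-type designs; as a minimal sketch of where such numbers come from, Falconer's approximation from twin correlations (the reviewed studies typically fit fuller ACE models):

```latex
% Falconer's approximation: heritability from MZ and DZ twin correlations
h^2 \approx 2\,(r_{MZ} - r_{DZ})
```
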
study  meta-analysis  biodet  behavioral-gen  genetics  population-genetics  🌞  regularizer  environmental-effects  GxE  psychiatry  epidemiology  composition-decomposition 
october 2017 by nhaliday
design patterns - What is MVC, really? - Software Engineering Stack Exchange
The model manages fundamental behaviors and data of the application. It can respond to requests for information, respond to instructions to change the state of its information, and even to notify observers in event-driven systems when information changes. This could be a database, or any number of data structures or storage systems. In short, it is the data and data-management of the application.

The view effectively provides the user interface element of the application. It'll render data from the model into a form that is suitable for the user interface.

The controller receives user input and makes calls to model objects and the view to perform appropriate actions.

...

Though this answer has 21 upvotes, I find the sentence "This could be a database, or any number of data structures or storage systems. (tl;dr : it's the data and data-management of the application)" horrible. The model is the pure business/domain logic. And this can and should be so much more than data management of an application. I also differentiate between domain logic and application logic. A controller should not ever contain business/domain logic or talk to a database directly.
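
A minimal sketch of that division of labor, with toy names and the event-driven notification the answer describes:

```python
# Toy MVC: the model owns data and notifies observers; the view renders;
# the controller maps user input to model calls. Names are illustrative.

class Model:
    """Data and data-management; knows nothing about the UI."""
    def __init__(self):
        self._items = []
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def add(self, item):
        self._items.append(item)
        for cb in self._observers:    # notify on state change
            cb(list(self._items))

class View:
    """Renders model data into a user-facing form."""
    def render(self, items):
        print("items:", ", ".join(items))

class Controller:
    """Receives user input and calls into the model."""
    def __init__(self, model, view):
        self.model = model
        self.model.subscribe(view.render)

    def user_typed(self, text):
        self.model.add(text)

controller = Controller(Model(), View())
controller.user_typed("hello")        # -> items: hello
```
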
q-n-a  stackex  explanation  concept  conceptual-vocab  structure  composition-decomposition  programming  engineering  best-practices  pragmatic  jargon  thinking  metabuch  working-stiff  tech  🖥  checklists 
october 2017 by nhaliday
Why are children in the same family so different from one another? - PubMed - NCBI
- Plomin et al

The article has three goals: (1) To describe quantitative genetic methods and research that lead to the conclusion that nonshared environment is responsible for most environmental variation relevant to psychological development, (2) to discuss specific nonshared environmental influences that have been studied to date, and (3) to consider relationships between nonshared environmental influences and behavioral differences between children in the same family. The reason for presenting this article in BBS is to draw attention to the far-reaching implications of finding that psychologically relevant environmental influences make children in a family different from, not similar to, one another.
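
For reference, the variance decomposition behind the claim, in standard ACE notation; the nonshared-environment term e² (which also absorbs measurement error) is whatever keeps identical twins reared together from correlating perfectly:

```latex
\mathrm{Var}(P) = a^2 + c^2 + e^2, \qquad e^2 \approx 1 - r_{MZ}
```
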
study  essay  article  survey  spearhead  psychology  social-psych  biodet  behavioral-gen  🌞  methodology  environmental-effects  signal-noise  systematic-ad-hoc  composition-decomposition  pdf  piracy  volo-avolo  developmental  iq  cog-psych  variance-components  GxE  nonlinearity  twin-study  personality  sib-study 
october 2017 by nhaliday
Peter Turchin Catalonia Independence Drive: a Case-Study in Applied Cultural Evolution - Peter Turchin
The theoretically interesting question is what is the optimal size of a politically independent unit (“polity”) in today’s world. Clearly, optimal size changes with time and social environment. We know empirically that the optimal size of a European state took a step up following 1500. As a result, the number of independent polities in Europe decreased from many hundreds in 1500 to just over 30 in 1900. The reason was the introduction of gunpowder that greatly elevated war intensity. The new evolutionary regime eliminated almost all of the small states, apart from a few special cases (like the Papacy or Monaco).

In today’s Europe, however, war has ceased to be an evolutionary force. It may change, but since 1945 the success or failure of European polities has been largely determined by their ability to deliver high levels of living standards to their citizens. Economics is not the only aspect of well-being, but let’s focus on it here because it is clearly the main driver behind Catalonian independence (since culturally and linguistically Catalonia has been given a free rein within Spain).

...

This is applied cultural evolution. We can have lots of theories and models about the optimal polity size, but they are worthless without data.

And it’s much more than a scientific issue. The only way for our societies to become better in all kinds of ways (wealthier, more just, more efficient) is to allow cultural evolution a free rein. More specifically, we need cultural group selection at the level of polities. A major problem for humanity is finding ways for such cultural group selection to take place without violence. Which is why I find the current moves by Madrid to suppress the Catalonian independence vote by force criminally reckless. It seems that Madrid still wants to go back to the world as it was in the nineteenth century (or more accurately, Europe between 1500 and 1900).

A World of 1,000 Nations: http://www.unz.com/akarlin/a-world-of-1000-nations/

Brief note on Catalonia: https://nintil.com/brief-note-on-catalonia/
This could be just another footnote in a history book, or an opening passage in the chapter that explains how you got an explosion in the number of states that began around 2017.

Nationalism, Liberalism and the European Paradox: http://quillette.com/2017/10/08/nationalism-liberalism-european-paradox/
Imagine for a moment that an ethnic group declared a referendum of independence in an Asian country and the nation state in question promptly sought to take the act of rebellion down. Imagine that in the ensuing chaos over 800 people were injured in a brutal police crackdown. Imagine the international disgust if this had happened in Asia, or the Middle East, or Latin America, or even in parts of Eastern and Central Europe. There would be calls for interventions, the topic would be urgently raised at the Security Council —and there might even be talks of sanctions or the arming of moderate rebels.

Of course, nothing of that sort happened as the Spanish state declared the Catalonian independence referendum a farce.

...

Remarkably, EU officials have largely remained mute. France’s new great hope, Monsieur Macron has sheepishly supported Spain’s “constitutional unity,” which is weasel-speak for national sovereignty—a concept which is so often dismissed by the very same European nations if it happens immediately outside the geographical region of EU. And this attitude towards nationalism—that it is archaic and obsolete on the one hand, but vitally important on the other—is the core paradox, and, some would say, hypocrisy, that has been laid bare by this sudden outbreak of tension.

It is a hypocrisy because one could argue that since the collapse of the Soviet Union, there has been a consistent and very real attempt to undermine sovereignty in many different parts of the world. To be fair, this has been done with mostly good intentions in the name of institutionalism and global governance, the “responsibility to protect” and universal human rights. With history in the Hegelian sense seemingly over after the collapse of the Berlin Wall, nationalism and great power politics were thought to be a thing of the past—a quaint absurdity—an irrelevance and a barrier to true Enlightenment. But unfortunately history does tend to have a sardonic sense of humour.

The entire European project was built on two fundamentally different ideas. One that promotes economic welfare based on borderless free trade, the free market and social individualism. And the other, promoting a centralized hierarchy, an elite in loco parentis which makes decisions about how many calories one should consume, what plastic one should import, and what gross picture of shredded lungs one should see on the front of a cigarette packet. It endorses sovereignty when it means rule by democracy and the protection of human rights, but not when countries decide to control their borders or their individual monetary and economic policies. Over time, defending these contradictions has become increasingly difficult, with cynical onlookers accusing technocrats of defending an unjustifiable and arbitrary set of principles.

All of this has resulted in three things. Regional ethnic groups in Europe have seen the examples of ethnic groups abroad undermining their own national governments, and they have picked up on these lessons. They also possess the same revolutionary technology—Twitter and the iPhone. Secondly, as Westphalian nation-states have been undermined repeatedly by borderless technocrats, identity movements based on ethnicity have begun to rise up. Humans, tribal at their very core, will always give in to the urge of having a cohesive social group to join, and a flag to wave high. And finally, there really is no logical counterargument to Catalans or Scots wanting to break apart from one union while staying in another. If ultimately, everything is going to be dictated by a handful of liege-lords in Brussels—why even obey the middle-man in Madrid or London?

https://twitter.com/whyvert/status/914521100263890944
https://archive.is/WKfIA
Spain should have either forcibly assimilated Catalonia as France did with its foreign regions, or established a formal federation of states
--
ah those are the premodern and modern methods. The postmodern method is to bring in lots of immigrants (who will vote against separation)
turchin  broad-econ  commentary  current-events  europe  mediterranean  exit-voice  politics  polisci  anthropology  cultural-dynamics  scale  homo-hetero  density  composition-decomposition  increase-decrease  shift  geography  cohesion  multi  ratty  unaffiliated  leviathan  civil-liberty  universalism-particularism  institutions  government  group-selection  natural-experiment  conquest-empire  decentralized  EU  the-great-west-whale  hypocrisy  nationalism-globalism  news  org:mag  org:popup  whiggish-hegelian  elite  vampire-squid  managerial-state  anarcho-tyranny  tribalism  us-them  self-interest  ethnocentrism  prudence  rhetoric  ideology  zeitgeist  competition  latin-america  race  demographics  pop-structure  gnon  data  visualization  maps  history  early-modern  mostly-modern  time-series  twitter  social  discussion  backup  scitariat  rant  migration  modernity 
october 2017 by nhaliday
Does Learning to Read Improve Intelligence? A Longitudinal Multivariate Analysis in Identical Twins From Age 7 to 16
Stuart Ritchie, Bates, Plomin

SEM: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4354297/figure/fig03/

The variance explained by each path in the diagrams included here can be calculated by squaring its path weight. To take one example, reading differences at age 12 in the model shown in Figure 3 explain 7% of intelligence differences at age 16 (.26²). However, since our measures are of differences, they are likely to include substantial amounts of noise: Measurement error may produce spurious differences. To remove this error variance, we can take an estimate of the reliability of the measures (generally high, since our measures are normed, standardized tests), which indicates the variance expected purely by the reliability of the measure, and subtract it from the observed variance between twins in our sample. Correcting for reliability in this way, the effect size estimates are somewhat larger; to take the above example, the reliability-corrected effect size of age 12 reading differences on age 16 intelligence differences is around 13% of the “signal” variance. It should be noted that the age 12 reading differences themselves are influenced by many previous paths from both reading and intelligence, as illustrated in Figure 3.

...

The present study provided compelling evidence that improvements in reading ability, themselves caused purely by the nonshared environment, may result in improvements in both verbal and nonverbal cognitive ability, and may thus be a factor increasing cognitive diversity within families (Plomin, 2011). These associations are present at least as early as age 7, and are not—to the extent we were able to test this possibility—driven by differences in reading exposure. Since reading is a potentially remediable ability, these findings have implications for reading instruction: Early remediation of reading problems might not only aid in the growth of literacy, but may also improve more general cognitive abilities that are of critical importance across the life span.

Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change: http://sci-hub.tw/10.1111/cdev.12669
Results from a state–trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development.
study  albion  scitariat  spearhead  psychology  cog-psych  psychometrics  iq  intelligence  eden  language  psych-architecture  longitudinal  twin-study  developmental  environmental-effects  studying  🌞  retrofit  signal-noise  intervention  causation  graphs  graphical-models  flexibility  britain  neuro-nitgrit  effect-size  variance-components  measurement  multi  sequential  time  composition-decomposition  biodet  behavioral-gen  direct-indirect  systematic-ad-hoc  debate  hmm  pdf  piracy  flux-stasis 
september 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
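
The trade-off the article describes has a standard one-line form (Tishby and collaborators): compress the input X into a representation T while keeping T predictive of the output Y.

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```
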
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
Culture, Ethnicity, and Diversity - American Economic Association
We investigate the empirical relationship between ethnicity and culture, defined as a vector of traits reflecting norms, values, and attitudes. Using survey data for 76 countries, we find that ethnic identity is a significant predictor of cultural values, yet that within-group variation in culture trumps between-group variation. Thus, in contrast to a commonly held view, ethnic and cultural diversity are unrelated. Although only a small portion of a country’s overall cultural heterogeneity occurs between groups, we find that various political economy outcomes (such as civil conflict and public goods provision) worsen when there is greater overlap between ethnicity and culture. (JEL D74, H41, J15, O15, O17, Z13)

definition of chi-squared index, etc., under:
II. Measuring Heterogeneity

Table 5—Incidence of Civil Conflict and Diversity
Table 6—Public Goods Provision and Diversity
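
For orientation, a generic chi-squared association index between ethnic group g and cultural answer a, in population-share form; the paper's exact definition and normalization (Section II) may differ:

```latex
% p_{ga}: share of the population in group g giving answer a;
% p_{g.}, p_{.a}: the marginal shares
\chi^2 = \sum_{g,a} \frac{\left(p_{ga} - p_{g\cdot}\,p_{\cdot a}\right)^2}{p_{g\cdot}\,p_{\cdot a}}
```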

https://twitter.com/GarettJones/status/924002043576115202
https://archive.is/oqMnC
https://archive.is/sBqqo
https://archive.is/1AcXn
χ2 diversity: raising the risk of civil war. Desmet, Ortuño-Ortín, Wacziarg, in the American Economic Review (1/N)

What predicts higher χ2 diversity? The authors tell us that, too. Here are all of the variables that have a correlation > 0.4: (7/N)

one of them is UK legal origin...

online appendix (with maps, Figures B1-3): http://www.anderson.ucla.edu/faculty_pages/romain.wacziarg/downloads/2017_culture_appendix.pdf
study  economics  growth-econ  broad-econ  world  developing-world  race  diversity  putnam-like  culture  cultural-dynamics  entropy-like  metrics  within-group  anthropology  microfoundations  political-econ  🎩  🌞  pdf  piracy  public-goodish  general-survey  cohesion  ethnocentrism  tribalism  behavioral-econ  sociology  cooperate-defect  homo-hetero  revolution  war  stylized-facts  econometrics  group-level  variance-components  multi  twitter  social  commentary  spearhead  econotariat  garett-jones  backup  summary  maps  data  visualization  correlation  values  poll  composition-decomposition  concept  conceptual-vocab  definition  intricacy  nonlinearity  anglosphere  regression  law  roots  within-without 
september 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
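
The canonical concrete instance of Y = f(K, L) is Cobb-Douglas, with A total factor productivity and α the capital share:

```latex
Y = A\,K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1
```
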
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
The Determinants of Trust
Both individual experiences and community characteristics influence how much people trust each other. Using data drawn from US localities we find that the strongest factors that reduce trust are: i) a recent history of traumatic experiences, even though the passage of time reduces this effect fairly rapidly; ii) belonging to a group that historically felt discriminated against, such as minorities (blacks in particular) and, to a lesser extent, women; iii) being economically unsuccessful in terms of income and education; iv) living in a racially mixed community and/or in one with a high degree of income disparity. Religious beliefs and ethnic origins do not significantly affect trust. The latter result may be an indication that the American melting pot at least up to a point works, in terms of homogenizing attitudes of different cultures, even though racial cleavages leading to low trust are still quite high.

Understanding Trust: http://www.nber.org/papers/w13387
In this paper we resolve this puzzle by recognizing that trust has two components: a belief-based one and a preference-based one. While the sender's behavior reflects both, we show that WVS-like measures capture mostly the belief-based component, while questions on past trusting behavior are better at capturing the preference component of trust.

MEASURING TRUST: http://scholar.harvard.edu/files/laibson/files/measuring_trust.pdf
We combine two experiments and a survey to measure trust and trustworthiness— two key components of social capital. Standard attitudinal survey questions about trust predict trustworthy behavior in our experiments much better than they predict trusting behavior. Trusting behavior in the experiments is predicted by past trusting behavior outside of the experiments. When individuals are closer socially, both trust and trustworthiness rise. Trustworthiness declines when partners are of different races or nationalities. High status individuals are able to elicit more trustworthiness in others.

What is Social Capital? The Determinants of Trust and Trustworthiness: http://www.nber.org/papers/w7216
Using a sample of Harvard undergraduates, we analyze trust and social capital in two experiments. Trusting behavior and trustworthiness rise with social connection; differences in race and nationality reduce the level of trustworthiness. Certain individuals appear to be persistently more trusting, but these people do not say they are more trusting in surveys. Survey questions about trust predict trustworthiness not trust. Only children are less trustworthy. People behave in a more trustworthy manner towards higher status individuals, and therefore status increases earnings in the experiment. As such, high status persons can be said to have more social capital.

Trust and Cheating: http://www.nber.org/papers/w18509
We find that: i) both parties to a trust exchange have implicit notions of what constitutes cheating even in a context without promises or messages; ii) these notions are not unique - the vast majority of senders would feel cheated by a negative return on their trust/investment, whereas a sizable minority defines cheating according to an equal split rule; iii) these implicit notions affect the behavior of both sides to the exchange in terms of whether to trust or cheat and to what extent. Finally, we show that individual's notions of what constitutes cheating can be traced back to two classes of values instilled by parents: cooperative and competitive. The first class of values tends to soften the notion while the other tightens it.

Nationalism and Ethnic-Based Trust: Evidence from an African Border Region: https://u.osu.edu/robinson.1012/files/2015/12/Robinson_NationalismTrust-1q3q9u1.pdf
These results offer microlevel evidence that a strong and salient national identity can diminish ethnic barriers to trust in diverse societies.

One Team, One Nation: Football, Ethnic Identity, and Conflict in Africa: http://conference.nber.org/confer//2017/SI2017/DEV/Durante_Depetris-Chauvin.pdf
Do collective experiences that prime sentiments of national unity reduce interethnic tensions and conflict? We examine this question by looking at the impact of national football teams’ victories in sub-Saharan Africa. Combining individual survey data with information on over 70 official matches played between 2000 and 2015, we find that individuals interviewed in the days after a victory of their country’s national team are less likely to report a strong sense of ethnic identity and more likely to trust people of other ethnicities than those interviewed just before. The effect is sizable and robust and is not explained by generic euphoria or optimism. Crucially, national victories do not only affect attitudes but also reduce violence. Indeed, using plausibly exogenous variation from close qualifications to the Africa Cup of Nations, we find that countries that (barely) qualified experience significantly less conflict in the following six months than countries that (barely) did not. Our findings indicate that, even where ethnic tensions have deep historical roots, patriotic shocks can reduce inter-ethnic tensions and have a tangible impact on conflict.

Why Does Ethnic Diversity Undermine Public Goods Provision?: http://www.columbia.edu/~mh2245/papers1/HHPW.pdf
We identify three families of mechanisms that link diversity to public goods provision—what we term “preferences,” “technology,” and “strategy selection” mechanisms—and run a series of experimental games that permit us to compare the explanatory power of distinct mechanisms within each of these three families. Results from games conducted with a random sample of 300 subjects from a slum neighborhood of Kampala, Uganda, suggest that successful public goods provision in homogeneous ethnic communities can be attributed to a strategy selection mechanism: in similar settings, co-ethnics play cooperative equilibria, whereas non-co-ethnics do not. In addition, we find evidence for a technology mechanism: co-ethnics are more closely linked on social networks and thus plausibly better able to support cooperation through the threat of social sanction. We find no evidence for prominent preference mechanisms that emphasize the commonality of tastes within ethnic groups or a greater degree of altruism toward co-ethnics, and only weak evidence for technology mechanisms that focus on the impact of shared ethnicity on the productivity of teams.
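for reference, a minimal sketch of a linear public goods game (parameters illustrative; the Kampala games were richer than this): contributing is socially efficient but individually dominated, so the cooperative equilibrium co-ethnics sustain is the non-obvious one:

```python
# Linear public goods game sketch: each of n players keeps or contributes
# an endowment; contributions are scaled by the marginal per-capita return
# (mpcr, here 0.5 for illustration) and shared by everyone. With n = 4 and
# mpcr = 0.5, each contributed unit creates 2 units in total but returns
# only 0.5 to the contributor, so free-riding is individually dominant.

def payoffs(contributions, endowment=10.0, mpcr=0.5):
    pot = sum(contributions)
    return [endowment - c + mpcr * pot for c in contributions]

print(payoffs([10, 10, 10, 10]))  # all cooperate: everyone gets 20.0
print(payoffs([0, 10, 10, 10]))   # one free-rider gets 25.0, the rest 15.0
print(payoffs([0, 0, 0, 0]))      # no cooperation: everyone keeps 10.0
```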

does it generalize to the first world?

Higher Intelligence Groups Have Higher Cooperation Rates in the Repeated Prisoner's Dilemma: https://ideas.repec.org/p/iza/izadps/dp8499.html
Initial cooperation rates are similar, but cooperation increases in the groups with higher intelligence to reach almost full cooperation, while declining in the groups with lower intelligence. The difference is produced by the accumulation of small but persistent differences in the response to past cooperation of the partner. In higher intelligence subjects, cooperation after the initial stages is immediate and becomes the default mode, while defection requires more time. For lower intelligence groups this difference is absent. Cooperation of higher intelligence subjects is payoff sensitive, thus not automatic: in a treatment with lower continuation probability there is no difference between intelligence groups.
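toy simulation of the mechanism (illustrative strategies of my own, not the paper's estimates): a repeated prisoner's dilemma with random continuation, where one type responds cleanly to past cooperation and the other noisily; small persistent response differences cumulate:

```python
import random

# Repeated prisoner's dilemma with continuation probability delta.
# Both types start cooperative and reciprocate; the "noisy" type responds
# imperfectly to past cooperation, occasionally defecting for no reason.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(noise_a, noise_b, delta, rng):
    """One supergame; returns each player's average per-round payoff."""
    last_a = last_b = "C"
    total_a = total_b = rounds = 0
    while True:
        a = "D" if (last_b == "D" or rng.random() < noise_a) else "C"
        b = "D" if (last_a == "D" or rng.random() < noise_b) else "C"
        pa, pb = PAYOFF[(a, b)]
        total_a += pa; total_b += pb; rounds += 1
        last_a, last_b = a, b
        if rng.random() > delta:
            return total_a / rounds, total_b / rounds

def avg_payoff(noise, delta=0.95, games=2000):
    rng = random.Random(0)
    results = [play(noise, noise, delta, rng) for _ in range(games)]
    return sum(a + b for a, b in results) / (2 * games)

print(avg_payoff(0.0))             # ~3.0: cooperation becomes the default
print(avg_payoff(0.1))             # well below 3: small response errors cumulate
print(avg_payoff(0.1, delta=0.5))  # a shorter horizon shrinks the gap
```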

Why societies cooperate: https://voxeu.org/article/why-societies-cooperate
Three attributes are often suggested to generate cooperative behaviour – a good heart, good norms, and intelligence. This column reports the results of a laboratory experiment in which groups of players benefited from learning to cooperate. It finds overwhelming support for the idea that intelligence is the primary condition for a socially cohesive, cooperative society. Warm feelings towards others and good norms have only a small and transitory effect.

individual payoff, etc.:

Trust, Values and False Consensus: http://www.nber.org/papers/w18460
Trust beliefs are heterogeneous across individuals and, at the same time, persistent across generations. We investigate one mechanism yielding these dual patterns: false consensus. In the context of a trust game experiment, we show that individuals extrapolate from their own type when forming trust beliefs about the same pool of potential partners - i.e., more (less) trustworthy individuals form more optimistic (pessimistic) trust beliefs - and that this tendency continues to color trust beliefs after several rounds of game-play. Moreover, we show that one's own type/trustworthiness can be traced back to the values parents transmit to their children during their upbringing. In a second closely-related experiment, we show the economic impact of miscalibrated trust beliefs stemming from false consensus. Miscalibrated beliefs lower participants' experimental trust game earnings by about 20 percent on average.

The Right Amount of Trust: http://www.nber.org/papers/w15344
We investigate the relationship between individual trust and individual economic performance. We find that individual income is hump-shaped in a measure of intensity of trust beliefs. Our interpretation is that highly trusting individuals tend to assume too much social risk and to be cheated more often, ultimately performing less well than those with a belief close to the mean trustworthiness of the population. On the other hand, individuals with overly pessimistic beliefs avoid being cheated, but give up profitable opportunities, therefore underperforming. The cost of either too much or too little trust is comparable to the income lost by forgoing college.

...

This framework allows us to show that income-maximizing trust typically exceeds the trust level of the average person, as well as to estimate the distribution of income lost to trust mistakes. We find that although a majority of individuals have well-calibrated beliefs, a non-trivial proportion of the population (10%) has trust beliefs sufficiently poorly calibrated to lower income by more than 13%.
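a toy illustration of the hump shape (my construction, not the paper's model): partners' trustworthiness q is uniform on [0,1]; trusting a partner of type q pays +G with probability q and -L otherwise; an agent trusts everyone above a belief-driven threshold:

```python
# Income as a function of the trust threshold t: too low a threshold
# assumes too much social risk (gets cheated), too high forgoes
# profitable opportunities. Parameters G and L are arbitrary.

def expected_income(t, G=2.0, L=1.0, n_grid=10_000):
    qs = [(i + 0.5) / n_grid for i in range(n_grid)]
    return sum(q * G - (1 - q) * L for q in qs if q >= t) / n_grid

for t in [0.0, 0.2, 1 / 3, 0.5, 0.8, 1.0]:
    print(f"threshold {t:.2f}: expected income {expected_income(t):+.3f}")
# Income peaks at t = L / (G + L) = 1/3 and falls off on both sides,
# i.e. it is hump-shaped in the intensity of trust.
```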

Do Trust and … [more]
study  economics  alesina  growth-econ  broad-econ  trust  cohesion  social-capital  religion  demographics  race  diversity  putnam-like  compensation  class  education  roots  phalanges  general-survey  multi  usa  GT-101  conceptual-vocab  concept  behavioral-econ  intricacy  composition-decomposition  values  descriptive  correlation  harvard  field-study  migration  poll  status  🎩  🌞  chart  anthropology  cultural-dynamics  psychology  social-psych  sociology  cooperate-defect  justice  egalitarianism-hierarchy  inequality  envy  n-factor  axelrod  pdf  microfoundations  nationalism-globalism  africa  intervention  counter-revolution  tribalism  culture  society  ethnocentrism  coordination  world  developing-world  innovation  econ-productivity  government  stylized-facts  madisonian  wealth-of-nations  identity-politics  public-goodish  s:*  legacy  things  optimization  curvature  s-factor  success  homo-hetero  higher-ed  models  empirical  contracts  human-capital  natural-experiment  endo-exo  data  scale  trade  markets  time  supply-demand  summary 
august 2017 by nhaliday
A combined analysis of genetically correlated traits identifies 107 loci associated with intelligence | bioRxiv
We apply MTAG to three large GWAS: Sniekers et al. (2017) on intelligence, Okbay et al. (2016) on educational attainment, and Hill et al. (2016) on household income. By combining these three samples our functional sample size increased from 78,308 participants to 147,194. We found 107 independent loci associated with intelligence, implicating 233 genes, using both SNP-based and gene-based GWAS. We find evidence that neurogenesis, genes expressed in the synapse, and genes involved in the regulation of the nervous system may explain some of the biological differences in intelligence.

...

Finally, using an independent sample of 6,844 individuals we were able to predict 7% of the variance in intelligence using SNP data alone.
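what "predict 7% of the variance" means operationally: out-of-sample R^2 of a polygenic score. A synthetic-data sketch; real scores are built from GWAS summary statistics, not from known true weights:

```python
import numpy as np

# Synthetic genotypes and a phenotype whose genetic component is scaled
# to account for ~7% of total variance; the polygenic score's squared
# correlation with the phenotype then recovers roughly that 7%.

rng = np.random.default_rng(0)
n, m = 6_844, 500                       # individuals, SNPs (illustrative)
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)
betas = rng.normal(0, 1, size=m)
genetic = geno @ betas
noise = rng.normal(0, np.sqrt(genetic.var() * 93 / 7), size=n)
phenotype = genetic + noise

pgs = geno @ betas                      # polygenic score (true weights here)
r2 = np.corrcoef(pgs, phenotype)[0, 1] ** 2
print(f"variance explained: {r2:.1%}")  # ~7%, up to sampling noise
```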
study  bio  preprint  biodet  behavioral-gen  GWAS  genetics  iq  education  compensation  composition-decomposition  🌞  gwern  meta-analysis  genetic-correlation  scaling-up  methodology  correlation  state-of-art  neuro  neuro-nitgrit  dimensionality 
july 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for the Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each other, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better.
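the "different math" in a sketch; parameter ranges here are illustrative, in the spirit of the slides rather than their actual numbers:

```python
import random, math

# Instead of multiplying point estimates, sample each Drake factor from a
# distribution (log-uniform here) and look at the whole distribution of N.

def log_uniform(rng, lo, hi):
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

rng = random.Random(0)
samples = []
for _ in range(100_000):
    N = (log_uniform(rng, 1, 100)        # R*: star formation rate
         * log_uniform(rng, 0.1, 1)      # f_p: fraction of stars with planets
         * log_uniform(rng, 0.1, 10)     # n_e: habitable planets per system
         * log_uniform(rng, 1e-30, 1)    # f_l: abiogenesis (huge uncertainty)
         * log_uniform(rng, 1e-3, 1)     # f_i: intelligence
         * log_uniform(rng, 1e-2, 1)     # f_c: communication
         * log_uniform(rng, 1e2, 1e8))   # L: civilization lifetime, years
    samples.append(N)

p_alone = sum(N < 1 for N in samples) / len(samples)
print(f"P(N < 1 in the galaxy) ~ {p_alone:.0%}")
# The mean of N is dominated by rare high draws, so multiplying point
# estimates badly misrepresents the probability that we are alone.
```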

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to usable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Intersection of diverse neuronal genomes and neuropsychiatric disease: The Brain Somatic Mosaicism Network
Towards explaining non-shared-environment effects on intelligence, psychiatric disorders, and other cognitive traits - developmental noise such as post-conception mutations in individual cells or groups of cells
pdf  study  psychology  cog-psych  neuro  neuro-nitgrit  brain-scan  biodet  genetics  genomics  GWAS  🌞  psychiatry  behavioral-gen  mutation  environmental-effects  roots  org:nat  gwern  random  autism  proposal  signal-noise  developmental  composition-decomposition 
may 2017 by nhaliday
How Transparency Kills Information Aggregation: Theory and Experiment
We investigate the potential of transparency to influence committee decision-making. We present a model in which career-concerned committee members receive private information of different type-dependent accuracy, deliberate, and vote. We study three levels of transparency under which career concerns are predicted to affect behavior differently, and test the model’s key predictions in a laboratory experiment. The model’s predictions are largely borne out – transparency negatively affects information aggregation at the deliberation and voting stages, leading to sharply different committee error rates than under secrecy. This occurs despite subjects revealing more information under transparency than theory predicts.
study  economics  micro  decision-making  decision-theory  collaboration  coordination  info-econ  info-dynamics  behavioral-econ  field-study  clarity  ethics  civic  integrity  error  unintended-consequences  🎩  org:ngo  madisonian  regularizer  enlightenment-renaissance-restoration-reformation  white-paper  microfoundations  open-closed  composition-decomposition  organizing 
april 2017 by nhaliday
Epidemiology, epigenetics and the ‘Gloomy Prospect’: embracing randomness in population health research and practice | International Journal of Epidemiology | Oxford Academic
Despite successes in identifying causes, it is often claimed that there are missing additional causes for even reasonably well-understood conditions such as lung cancer and coronary heart disease. Several lines of evidence suggest that largely chance events, from the biographical down to the sub-cellular, contribute an important stochastic element to disease risk that is not epidemiologically tractable at the individual level. Epigenetic influences provide a fashionable contemporary explanation for such seemingly random processes. Chance events—such as a particular lifelong smoker living unharmed to 100 years—are averaged out at the group level. As a consequence population-level differences (for example, secular trends or differences between administrative areas) can be entirely explicable by causal factors that appear to account for only a small proportion of individual-level risk. In public health terms, a modifiable cause of the large majority of cases of a disease may have been identified, with a wild goose chase continuing in an attempt to discipline the random nature of the world with respect to which particular individuals will succumb.

choice quote:
"With the perception (in my view exaggerated) that genome-wide association studies (GWASs) have failed to deliver on initial expectations,5 the next phase of enhanced risk prediction will certainly shift to ‘epigenetics’6,7—the currently fashionable response to any question to which you do not know the answer."
study  bio  medicine  genetics  genomics  sib-study  twin-study  cancer  cardio  essay  variance-components  signal-noise  random  causation  roots  gwern  explanation  methodology  🌞  biodet  QTL  correlation  epigenetics  GWAS  epidemiology  big-picture  public-health  composition-decomposition 
march 2017 by nhaliday
Relationships among probability distributions - Wikipedia
- One distribution is a special case of another with a broader parameter space
- Transforms (function of a random variable);
- Combinations (function of several variables);
- Approximation (limit) relationships;
- Compound relationships (useful for Bayesian inference);
- Duality;
- Conjugate priors.
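a few of these relationships checked numerically; quick sanity checks, not proofs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Special case: Exponential(lam) is Gamma(shape=1, scale=1/lam).
# Combination: a sum of k iid Exponentials is Gamma(shape=k).
k, lam = 5, 2.0
s = rng.exponential(1 / lam, size=(n, k)).sum(axis=1)
g = rng.gamma(shape=k, scale=1 / lam, size=n)
print(s.mean(), g.mean())                 # both ~ k/lam = 2.5

# Transform: the square of a standard normal is chi-squared with 1 dof.
print((rng.normal(size=n) ** 2).mean())   # ~ 1, the chi2(1) mean

# Approximation/limit: Binomial(m, p) -> Poisson(m*p) for large m, small p.
b = rng.binomial(10_000, 3e-4, size=n)
print(b.mean(), b.var())                  # mean ~ var ~ 3, the Poisson signature

# Conjugacy: a Beta(a, b) prior with x successes in m Binomial trials
# yields a Beta(a + x, b + m - x) posterior, in closed form.
```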
stats  probability  characterization  list  levers  wiki  reference  objektbuch  calculation  distribution  nibble  cheatsheet  closure  composition-decomposition  properties 
february 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0, 1}). We imagine that the system evolves via an “updating function” f : S^n → S^n. Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
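a toy rendering of this setup (not a computation of any official Φ variant): check whether the update function f decomposes across a given partition (A, B):

```python
from itertools import product

# Given f : S^n -> S^n over S = {0, 1}, test whether the updates to the
# coordinates in A depend on the coordinates in B. If some near-balanced
# partition decouples both directions, f involves no "global integration".

def depends_on_b(f, n, A, B):
    for x in product([0, 1], repeat=n):
        for i in B:                        # flip each B-coordinate in turn
            y = list(x); y[i] ^= 1
            fx, fy = f(tuple(x)), f(tuple(y))
            if any(fx[j] != fy[j] for j in A):
                return True                # some A-update depends on B
    return False

# Bitwise NOT factorizes completely across the cut...
f_not = lambda x: tuple(1 - b for b in x)
print(depends_on_b(f_not, 4, A=[0, 1], B=[2, 3]))    # False: no integration
# ...whereas a cyclic shift integrates across any cut:
f_shift = lambda x: x[1:] + x[:1]
print(depends_on_b(f_shift, 4, A=[0, 1], B=[2, 3]))  # True
```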
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition 
january 2017 by nhaliday
Information Processing: Random microworlds: the mystery of nonshared environment
Nonshared environmental contributions to development, which are the largest environmental contributions, are effectively random. They are not amenable to control, either by parents or policy makers. Note, this picture -- that each child creates their own environment, or experiences an effectively random one -- does not seem to support the hypothesis that observed group differences in cognitive ability are primarily of non-genetic origin. Nor does it suggest that any simple intervention (for example, equalizing average SES levels) will eliminate group differences. However, it's fair to say our understanding of these complex questions is limited.

Technical remark: if n is large, and factors uncorrelated, the observed environmental variation in a population will be suppressed as n^{-1/2} relative to the maximum environmental effect. That means that the best or worst case scenarios for environmental effect, although hard to achieve, could be surprisingly large. In other words, if the environment is perfectly suited to the child, there could be an anomalously large non-genetic effect, relative to the variance observed in the population as a whole. Of course, for large n these perfect conditions are also harder to arrange. (As a super-high investment parent I am actually involved in attempting to fine tune n-vectors ;-)
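the technical remark in simulation form, with each environmental factor a ±1 match/mismatch between environment and child:

```python
import numpy as np

# The total environmental effect is a sum of n uncorrelated +/-1 factors.
# The population SD of the sum grows like sqrt(n), while the perfectly
# matched environment scores n, so observed variation is suppressed as
# n^{-1/2} relative to the maximum environmental effect.

rng = np.random.default_rng(0)
for n in [10, 100, 1000]:
    effects = rng.choice([-1, 1], size=(10_000, n)).sum(axis=1)
    print(f"n={n:5d}  population SD ~ {effects.std():7.1f}"
          f"  best possible = {n}  ratio ~ {effects.std() / n:.3f}")
# The ratio falls like 1/sqrt(n): an anomalously well-matched environment
# can have an effect far outside the variance seen in the population.
```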

Environmental effects cause regression to the mean of a child relative to the parental midpoint. Parents who are well above average likely benefited from a good match between their environment and individual proclivities, as well as from good genes. This match is difficult to replicate for their children -- only genes are passed on with certainty.
hsu  methodology  variance-components  speculation  parenting  thinking  🌞  frontier  environmental-effects  models  developmental  scitariat  signal-noise  biodet  nibble  s:*  roots  genetics  behavioral-gen  random  volo-avolo  composition-decomposition  systematic-ad-hoc 
november 2016 by nhaliday
Why Information Grows – Paul Romer
thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/
books  summary  review  economics  growth-econ  interdisciplinary  hmm  physics  thinking  feynman  tradeoffs  paul-romer  econotariat  🎩  🎓  scholar  aphorism  lens  signal-noise  cartoons  skeleton  s:**  giants  electromag  mutation  genetics  genomics  bits  nibble  stories  models  metameta  metabuch  problem-solving  composition-decomposition  structure  abstraction  zooming  examples  knowledge  human-capital  behavioral-econ  network-structure  info-econ  communication  learning  information-theory  applications  volo-avolo  map-territory  externalities  duplication  spreading  property-rights  lattice  multi  government  polisci  policy  counterfactual  insight  paradox  parallax  reduction  empirical  detail-architecture  methodology  crux  visual-understanding  theory-practice  matching  analytical-holistic  branches  complement-substitute  local-global  internet  technology  cost-benefit  investing  micro  signaling  limits  public-goodish  interpretation 
september 2016 by nhaliday
Information Processing: High V, Low M
http://www.unz.com/article/iq-or-the-mathverbal-split/
Commenter Gwen on the blog Infoproc hints at a possible neurological basis for this phenomenon, stating that “one bit of speculation I have: the neuroimaging studies seem to consistently point towards efficiency of global connectivity rather than efficiency or other traits of individual regions; you could interpret this as a general factor across a wide battery of tasks because they are all hindered to a greater or lesser degree by simply difficulties in coordination while performing the task; so perhaps what causes Spearman is global connectivity becoming around as efficient as possible and no longer a bottleneck for most tasks, and instead individual brain regions start dominating additional performance improvements. So up to a certain level of global communication efficiency, there is a general intelligence factor but then specific abilities like spatial vs verbal come apart and cease to have common bottlenecks and brain tilts manifest themselves much more clearly.” [10] This certainly seems plausible enough. Let’s hope that those far smarter than ourselves will slowly get to the bottom of these matters over the coming decades.

...

My main prediction here then is that based on HBD, I don’t expect China or East Asia to rival the Anglosphere in the life sciences and medicine or other verbally loaded scientific fields. Perhaps China can mirror Japan in developing pockets of strengths in various areas of the life sciences. Given its significantly larger population, this might indeed translate into non-trivial high-end output in the fields of biology and biomedicine. The core strengths of East Asian countries though, as science in the region matures, will lie primarily in quantitative areas such as physics or chemistry, and this is where I predict the region will shine in the coming years. China’s recent forays into quantum cryptography provide one such example. [40]

...

In fact, as anyone who’s been paying attention has noticed, modern day tech is essentially a California and East Asian affair, with the former focused on software and the latter more so on hardware. American companies dominate in the realm of internet infrastructure and platforms, while East Asia is predominant in consumer electronics hardware, although as noted, China does have its own versions of general purpose tech giants in companies like Baidu, Alibaba, and Tencent. By contrast, Europe today has relatively few well known tech companies apart from some successful apps such as Spotify or Skype and entities such as Nokia or Ericsson. [24] It used to have more established technology companies back in the day, but the onslaught of competition from the US and East Asia put a huge dent in Europe’s technology industry.

...

Although many will point to institutional factors such as China or the United States enjoying large, unfragmented markets to explain the decline of European tech, I actually want to offer a more HBD oriented explanation not only for why Europe seems to lag in technology and engineering relative to America and East Asia, but also for why tech in the United States is skewed towards software, while tech in East Asia is skewed towards hardware. I believe that the various phenomenon described above can all be explained by one common underlying mechanism, namely the math/verbal split. Simply put, if you’re really good at math, you gravitate towards hardware. If your skills are more verbally inclined, you gravitate towards software. In general, your chances of working in engineering and technology are greatly bolstered by being spatially and quantitatively adept.

...

If my assertions here are correct, I predict that over the coming decades, we’ll increasingly see different groups of people specialize in areas where they’re most proficient at. This means that East Asians and East Asian societies will be characterized by a skew towards quantitative STEM fields such as physics, chemistry, and engineering and towards hardware and high-tech manufacturing, while Western societies will be characterized by a skew towards the biological sciences and medicine, social sciences, humanities, and software and services. [41] Likewise, India also appears to be a country whose strengths lie more in software and services as opposed to hardware and manufacturing. My fundamental thesis is that all of this is ultimately a reflection of underlying HBD, in particular the math/verbal split. I believe this is the crucial insight lacking in the analyses others offer.

http://www.unz.com/article/iq-or-the-mathverbal-split/#comment-2230751

Sailer In TakiMag: What Does the Deep History of China and India Tell Us About Their Futures?: http://takimag.com/article/a_pair_of_giants_steve_sailer/print#axzz5BHqRM5nD
In an age of postmodern postnationalism that worships diversity, China is old-fashioned. It’s homogeneous, nationalist, and modernist. China seems to have utilitarian 1950s values.

For example, Chinese higher education isn’t yet competitive on the world stage, but China appears to be doing a decent job of educating the masses in the basics. High Chinese scores on the international PISA test for 15-year-olds shouldn’t be taken at face value, but it’s likely that China is approaching first-world norms in providing equality of opportunity through adequate schooling.

Due to censorship and language barriers, Chinese individuals aren’t well represented in English-language cyberspace. Yet in real life, the Chinese build things, such as bridges that don’t fall down, and they make stuff, employing tens of millions of proletarians in their factories.

The Chinese seem, on average, to be good with their hands, which is something that often makes American intellectuals vaguely uncomfortable. But at least the Chinese proles are over there merely manufacturing things cheaply, so American thinkers don’t resent them as much as they do American tradesmen.

Much of the class hatred in America stems from the suspicions of the intelligentsia that plumbers and mechanics are using their voodoo cognitive ability of staring at 3-D physical objects and somehow understanding why they are broken to overcharge them for repairs. Thus it’s only fair, America’s white-collar managers assume, that they export factory jobs to lower-paid China so that they can afford to throw manufactured junk away when it breaks and buy new junk rather than have to subject themselves to the humiliation of admitting to educationally inferior American repairmen that they don’t understand what is wrong with their own gizmos.

...

This Chinese lack of diversity is out of style, and yet it seems to make it easier for the Chinese to get things done.

In contrast, India appears more congenial to current-year thinkers. India seems postmodern and postnationalist, although it might be more accurately called premodern and prenationalist.

...

Another feature that makes our commentariat comfortable with India is that Indians don’t seem to be all that mechanically facile, perhaps especially not the priestly Brahmin caste, with whom Western intellectuals primarily interact.

And the Indians tend to be more verbally agile than the Chinese and more adept at the kind of high-level abstract thinking required by modern computer science, law, and soft major academia. Thousands of years of Brahmin speculations didn’t do much for India’s prosperity, but somehow have prepared Indians to make fortunes in 21st-century America.

http://www.sciencedirect.com/science/article/pii/S0160289616300757
- Study used two moderately large American community samples.
- Verbal and not nonverbal ability drives relationship between ability and ideology.
- Ideology and ability appear more related when ability assessed professionally.
- Self-administered or nonverbal ability measures will underestimate this relationship.

https://www.unz.com/gnxp/the-universal-law-of-interpersonal-dynamics/
Every once in a while I realize something with my conscious mind that I’ve understood implicitly for a long time. Such a thing happened to me yesterday, while reading a post on Stalin, by Amritas. It is this:

S = P + E

Social Status equals Political Capital plus Economic Capital

...

Here’s an example of its explanatory power: If we assume that a major human drive is to maximize S, we can predict that people with high P will attempt to minimize the value of E (since S-maximization is a zero-sum game). And so we see. Throughout history there has been an attempt to ennoble P while stigmatizing E. Conversely, throughout history, people with high E use it to acquire P. Thus, in today’s society we see that socially adept people, who have inborn P skills, tend to favor socialism or big government – where their skills are most valuable, while economically productive people are often frustrated by the fact that their concrete contribution to society is deplored.

Now, you might ask yourself why the reverse isn’t true, why people with high P don’t use it to acquire E, while people with high E don’t attempt to stigmatize P? Well, I think that is true. But, while the equation is mathematically symmetrical, the nature of P-talent and E-talent is not. P-talent can be used to acquire E from the E-adept, but the E-adept are no match for the P-adept in the attempt to stigmatize P. Furthermore, P is endogenous to the system, while E is exogenous. In other words, the P-adept have the ability to manipulate the system itself to make P-talent more valuable in acquiring E, while the E-adept have no ability to manipulate the external environment to make E-talent more valuable in acquiring P.

...

1. All institutions will tend to be dominated by the P-adept
2. All institutions that have no in-built exogenous criteria for measuring its members’ status will inevitably be dominated by the P-adept
3. Universities will inevitably be dominated by the P-adept
4. Within a university, humanities and social sciences will be more dominated by the P-adept than … [more]
iq  science  culture  critique  lol  hsu  pre-2013  scitariat  rationality  epistemic  error  bounded-cognition  descriptive  crooked  realness  being-right  info-dynamics  truth  language  intelligence  kumbaya-kult  quantitative-qualitative  multi  study  psychology  cog-psych  social-psych  ideology  politics  elite  correlation  roots  signaling  psychometrics  status  capital  human-capital  things  phalanges  chart  metabuch  institutions  higher-ed  academia  class-warfare  symmetry  coalitions  strategy  class  s:*  c:**  communism  inequality  socs-and-mops  twitter  social  commentary  gnon  unaffiliated  zero-positive-sum  rot  gnxp  adversarial  🎩  stylized-facts  gender  gender-diff  cooperate-defect  ratty  yvain  ssc  tech  sv  identity-politics  culture-war  reddit  subculture  internet  🐸  discrimination  trump  systematic-ad-hoc  urban  britain  brexit  populism  diversity  literature  fiction  media  military  anomie  essay  rhetoric  martial  MENA  history  mostly-modern  stories  government  polisci  org:popup  right-wing  propaganda  counter-r 
september 2016 by nhaliday
Overcoming Bias : A Future Of Pipes
The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.

We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.

Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.

Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?

Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.
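the scaling argument in numbers (constants arbitrary):

```python
import math

# Naive surface cooling supplies heat removal ~ L^2 while demand grows
# ~ L^3, so their ratio falls as 1/L; fractal pipe networks instead pay
# an overhead that grows only logarithmically in city scale.

for L in [1, 2, 4, 8, 16]:
    volume = L ** 3                      # hardware to cool
    surface = L ** 2                     # naive in/out boundary
    naive_ratio = surface / volume       # falls as 1/L
    fractal_overhead = 1 + math.log2(L)  # +constant per doubling of L
    print(f"L={L:2d}  surface/volume={naive_ratio:.3f}"
          f"  fractal overhead={fractal_overhead:.0f}")
```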
hanson  futurism  prediction  street-fighting  essay  len:short  ratty  computation  hardware  thermo  structure  composition-decomposition  complex-systems  magnitude  analysis  urban-rural  power-law  phys-energy  detail-architecture  efficiency  economics  supply-demand  labor  planning  long-term  physics  temperature  flux-stasis  fluid  measure  technology  frontier  speedometer  career  cost-benefit  identity  stylized-facts  objektbuch  data  trivia  cocktail 
august 2016 by nhaliday
