
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with speech (production and comprehension, respectively), are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus, the Right Hemisphere (RH) for a broad one. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. RH paintings, by contrast, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, the LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy and the ability to notice emotional nuance expressed facially, vocally and bodily are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious to them.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical and other problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than an explicit one.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and the light of day. Learning means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, at which point it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, an RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the mass-energy of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s incompleteness theorem proves that, in any consistent formal system rich enough for arithmetic, not everything true can be proven to be true within the system. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no single consistent set of axioms from which all other truths can be derived.

Alan Turing’s proof of the undecidability of the halting problem shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
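The diagonal argument behind Turing's result can be sketched in a few lines of Python (an illustrative sketch, not Turing's original construction; `halts` is a hypothetical decider that cannot actually exist):

```python
def diagonal(halts):
    """Given a purported halting decider halts(f) (True iff calling f()
    would terminate), construct a function it must misjudge."""
    def g():
        if halts(g):        # the decider says g halts...
            while True:     # ...so g loops forever instead
                pass
        # the decider says g loops, so g returns immediately
    return g

# Any concrete decider is refuted. Take the one that always answers "loops":
never_halts = lambda f: False
g = diagonal(never_halts)
assert g() is None  # g() returns at once, i.e. halts, contradicting the decider
# (A decider that always answers "halts" fails symmetrically: its g loops.)
```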
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes
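The two bullet points combine into rough ratios (plain arithmetic on the figures quoted above):

```python
neurons = 1e10      # neurons in the human brain (von Neumann's estimate)
tubes = 1e4         # vacuum tubes in the largest computer of the time
neuron_ms = 5.0     # ms from neuron potential to neuron potential
tube_ms = 1e-3      # ms switching time for a vacuum tube

size_ratio = neurons / tubes       # brain has ~10^6 times more elements
speed_ratio = neuron_ms / tube_ms  # tubes are ~5000x faster per operation

assert size_ratio == 1e6
assert speed_ratio == 5000.0
```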

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Society of Mind - Wikipedia
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

https://en.wikipedia.org/wiki/Modularity_of_mind

The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
books  ideas  speculation  structure  composition-decomposition  complex-systems  neuro  ai  psychology  cog-psych  intelligence  reduction  wiki  giants  philosophy  number  cohesion  diversity  systematic-ad-hoc  detail-architecture  pdf  study  neuro-nitgrit  brain-scan  nitty-gritty  network-structure  graphs  graph-theory  models  whole-partial-many  evopsych  eden  reference  psych-architecture  article 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
What are the Laws of Biology?
The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.

These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
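One of the motifs named above, negative auto-regulation, illustrates the point: it reaches the same steady state faster than simple constant production. A minimal simulation sketch (the parameters and the logic-approximation model are illustrative assumptions, not from the post):

```python
def time_to_half(production, alpha=1.0, target=1.0, dt=1e-3, t_max=10.0):
    """Euler-integrate dx/dt = production(x) - alpha*x and return the
    first time x crosses half of `target`."""
    x, t = 0.0, 0.0
    while t < t_max:
        if x >= target / 2:
            return t
        x += (production(x) - alpha * x) * dt
        t += dt
    return t_max

simple = lambda x: 1.0                   # constant production; steady state 1
nar = lambda x: 5.0 if x < 1.0 else 0.0  # strong production, shut off at x = 1

t_simple = time_to_half(simple)  # analytically ln 2, about 0.69
t_nar = time_to_half(nar)        # about 0.11: a rapid onset response

assert t_nar < t_simple
```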
scitariat  reflection  proposal  ideas  thinking  conceptual-vocab  lens  bio  complex-systems  selection  evolution  flux-stasis  network-structure  structure  composition-decomposition  IEEE  robust  signal-noise  perturbation  interdisciplinary  graphs  circuits  🌞  big-picture  hi-order-bits  nibble  synthesis 
november 2017 by nhaliday
multivariate analysis - Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian? - Cross Validated
The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological", is a bit misguided.

Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

note: there is a multivariate central limit theorem, so such applications have no problem
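A standard counterexample is easy to check numerically: let X be standard normal and Y = WX with W an independent random sign. Both marginals are N(0,1), yet X + Y has an atom at zero, which no bivariate normal allows. (Numerical illustration, not from the thread:)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
w = rng.choice([-1.0, 1.0], size=n)  # Rademacher sign, independent of x
y = w * x                            # y is also exactly N(0,1) marginally

# The marginal of y looks standard normal...
assert abs(y.mean()) < 0.02 and abs(y.std() - 1.0) < 0.02
# ...but the joint is not bivariate normal: x + y = 0 exactly whenever w = -1,
# so about half the mass of the "sum" distribution sits in an atom at zero.
frac_zero = np.mean(x + y == 0)
assert abs(frac_zero - 0.5) < 0.01
```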
nibble  q-n-a  overflow  stats  math  acm  probability  distribution  gotchas  intricacy  characterization  structure  composition-decomposition  counterexample  limits  concentration-of-measure 
october 2017 by nhaliday
design patterns - What is MVC, really? - Software Engineering Stack Exchange
The model manages fundamental behaviors and data of the application. It can respond to requests for information, respond to instructions to change the state of its information, and even to notify observers in event-driven systems when information changes. This could be a database, or any number of data structures or storage systems. In short, it is the data and data-management of the application.

The view effectively provides the user interface element of the application. It'll render data from the model into a form that is suitable for the user interface.

The controller receives user input and makes calls to model objects and the view to perform appropriate actions.

...

Though this answer has 21 upvotes, I find the sentence "This could be a database, or any number of data structures or storage systems. (tl;dr : it's the data and data-management of the application)" horrible. The model is the pure business/domain logic. And this can and should be so much more than data management of an application. I also differentiate between domain logic and application logic. A controller should not ever contain business/domain logic or talk to a database directly.
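A minimal sketch of the three roles described above, in Python (class and method names are illustrative, not from the answer): the model owns the data and notifies observers, the view renders, and the controller translates user input into model calls without touching storage directly.

```python
class Model:
    """Holds domain data and notifies observers when it changes."""
    def __init__(self):
        self._items = []
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def add_item(self, item):
        self._items.append(item)
        for obs in self._observers:
            obs.model_changed(self._items)

class View:
    """Renders model data; knows nothing about how changes happen."""
    def __init__(self):
        self.rendered = None

    def model_changed(self, items):
        self.rendered = ", ".join(items)

class Controller:
    """Receives user input and turns it into model operations."""
    def __init__(self, model):
        self._model = model

    def user_typed(self, text):
        self._model.add_item(text.strip())

model, view = Model(), View()
model.attach(view)
controller = Controller(model)
controller.user_typed("  hello ")
controller.user_typed("world")
assert view.rendered == "hello, world"
```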
q-n-a  stackex  explanation  concept  conceptual-vocab  structure  composition-decomposition  programming  engineering  best-practices  pragmatic  jargon  thinking  metabuch  working-stiff  tech  🖥  checklists 
october 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software 
june 2017 by nhaliday
Kinship Systems, Cooperation and the Evolution of Culture
In the data, societies with loose ancestral kinship ties cooperate and trust broadly, which is apparently sustained through a belief in moralizing gods, universally applicable moral principles, feelings of guilt, and large-scale institutions. Societies with a historically tightly knit kinship structure, on the other hand, exhibit strong in-group favoritism: they cheat on and are distrusting of out-group members, but readily support in-group members in need. This cooperation scheme is enforced by moral values of in-group loyalty, conformity to tight social norms, emotions of shame, and strong local institutions.

Henrich, Joseph, The Secret of Our Success: How Culture is Driving Human Evolution,
Domesticating Our Species, and Making Us Smarter, Princeton University Press, 2015.
—, W.E.I.R.D People: How Westerners became Individualistic, Self-Obsessed, Guilt-Ridden,
Analytic, Patient, Principled and Prosperous, Princeton University Press, n.d.
—, Jean Ensminger, Richard McElreath, Abigail Barr, Clark Barrett, Alexander Bolyanatz, Juan Camilo Cardenas, Michael Gurven, Edwins Gwako, Natalie Henrich et al., “Markets, Religion, Community Size, and the Evolution of Fairness and Punishment,” Science, 2010, 327 (5972), 1480–1484.

...

—, —, Will M. Gervais, Aiyana K. Willard, Rita A. McNamara, Edward Slingerland, and Joseph Henrich, “The Cultural Evolution of Prosocial Religions,” Behavioral and Brain Sciences, 2016, 39, e1.

...

Purzycki, Benjamin Grant, Coren Apicella, Quentin D. Atkinson, Emma Cohen, Rita Anne McNamara, Aiyana K. Willard, Dimitris Xygalatas, Ara Norenzayan, and Joseph Henrich, “Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality,” Nature, 2016.

Table 1 summarizes
Figure 1 has map of kinship tightness
Figure 2 has cheating and in-group vs. out-group
Table 2 has regression
Figure 3 has universalism and shame-guilt
Figure 4 has individualism-collectivism/conformity
Table 4 has radius of trust, Table 5 same for within-country variation (ethnic)
Tables 7 and 8 do universalism

Haidt moral foundations:
In line with the research hypothesis discussed in Section 3, the analysis employs two dependent variables, i.e., (i) the measure of in-group loyalty, and (ii) an index of the importance of communal values relative to the more universal (individualizing) ones. That is, the hypothesis is explicitly not about some societies being more or less moral than others, but merely about heterogeneity in the relative importance that people attach to structurally different types of values. To construct the index, I compute the first principal component of fairness / reciprocity, harm / care, in-group / loyalty, and respect / authority. The resulting score endogenously has the appealing property that – in line with the research hypothesis – it loads positively on the first two values and negatively on the latter two, with roughly equal weights; see Appendix F for details.²⁴ I compute country-level scores by averaging responses by country of residence of respondents. Importantly, in Enke (2017) I document that – in a nationally representative sample of Americans – this same index of moral communalism is strongly correlated with individuals’ propensity to favor their local community over society as a whole in issues ranging from taxation and redistribution to donations and volunteering. Thus, there is evidence that the index of communal moral values captures economically meaningful behavioral heterogeneity.
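
A sketch of this index construction on synthetic data (the generated data and variable names are illustrative inventions, not the paper's survey inputs; this only shows the first-principal-component mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic respondent-level scores for the four moral foundations.
# Built so that "individualizing" values (harm/care, fairness) anticorrelate
# with "binding" ones (loyalty, authority), as the hypothesis predicts.
n = 500
latent = rng.normal(size=n)              # one underlying communalism dimension
noise = rng.normal(scale=0.5, size=(n, 4))
fairness  = -latent + noise[:, 0]
harm_care = -latent + noise[:, 1]
loyalty   =  latent + noise[:, 2]
authority =  latent + noise[:, 3]

X = np.column_stack([fairness, harm_care, loyalty, authority])
X = X - X.mean(axis=0)

# First principal component via SVD
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]
if pc1[2] < 0:          # sign convention: load positively on binding values
    pc1 = -pc1
score = X @ pc1         # respondent-level communalism index

print(np.round(pc1, 2))  # opposite-signed, roughly equal loadings
```

With this data-generating process, PC1 recovers the "endogenous" property described above: positive loadings on loyalty/authority, negative on fairness/harm-care, all of roughly equal magnitude.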

The coevolution of kinship systems, cooperation, and culture: http://voxeu.org/article/kinship-cooperation-and-culture
- Benjamin Enke

pretty short

good linguistics reference cited in this paper:
On the biological and cultural evolution of shame: Using internet search tools to weight values in many cultures: https://arxiv.org/abs/1401.1100v2
Here we explore the relative importance between shame and guilt by using Google Translate [>_>...] to produce translations of the words "shame", "guilt", "pain", "embarrassment" and "fear" into the 64 languages covered. We also explore the meanings of these concepts among the Yanomami, a horticulturist hunter-gatherer tribe in the Orinoquia. Results show that societies previously described as “guilt societies” have more words for guilt than for shame, but *the large majority*, including the societies previously described as “shame societies”, *have more words for shame than for guilt*. Results are consistent with evolutionary models of shame which predict a wide scatter in the relative importance between guilt and shame, suggesting that cultural evolution of shame has continued the work of biological evolution, and that neither provides a strong adaptive advantage to either shame or guilt [? did they not just say that most languages favor shame?].

...

The roots of the word "shame" are thought to derive from an older word meaning "to cover". The emotion of shame has clear physiological consequences. Its facial and corporal expression is a human universal, as was recognized already by Darwin (5). Looking away, reddening of the face, sinking the head, obstructing direct view, hiding the face and downing the eyelids, are the unequivocal expressions signaling shame. Shame might be an emotion specific to humans, as no clear description of it is known for animals.
...
Classical Greek philosophers, such as Aristotle, explicitly mention shame as a key element in building society.

Guilt is the emotion of being responsible for the commission of an offense; however, it seems to be distinct from shame. Guilt says “what I did was not good”, whereas shame says “I am no good” (2). For Benedict (1), shame is a violation of cultural or social values, while guilt feelings arise from violations of one's internal values.

...

Unobservable emotions such as guilt may be of value to the receiver but constitute what economics calls “private information”. Thus, in economic and biological terms, adaptive pressures acting upon the evolution of shame differ from those acting on that of guilt.

Shame has evolutionary advantages to both individual and society, but the lack of shame also has evolutionary advantages, as it allows cheating and thus benefiting from public goods without paying the costs of their build-up.

...

Dodds (7) coined the distinction between guilt and shame cultures and postulated that in Greek cultural history, shame as a social value was displaced, at least in part, by guilt in guiding moral behavior.
...
"[...]True guilt cultures rely on an internalized conviction of sin as the enforcer of good behavior, not, as shame cultures do, on external sanctions. Guilt cultures emphasize punishment and forgiveness as ways of restoring the moral order; shame cultures stress self-denial and humility as ways of restoring the social order”.

...

For example, Wikipedia is less error prone than Encyclopedia Britannica (12, 17); and Google Translate is as accurate as more traditional methods (35).

Table 1, Figure 1

...

This regression is close to a proportional line: roughly two words for shame for each word for guilt.

...

For example, in the case of Chinese, no overlap between the five concepts is reported using Google Translate in Figure 1. Yet, linguistic-conceptual studies of guilt and shame revealed an important overlap between several of these concepts in Chinese (29).

...

Our results using Google Translate show no overlap between Guilt and Shame in any of the languages studied.
...
[lol:] Examples of the context when they feel “kili” are: a tiger appears in the forest; you kill somebody from another community; your daughter is going to die; everybody looks at your underwear; you are caught stealing; you soil your pants while among others; a doctor gives you an injection; you hit your wife and others find out; you are unfaithful to your husband and others find out; you are going to be hit with a machete.

...

Linguistic families do not aggregate according to the relationship of the number of synonyms for shame and guilt (Figure 3).

...

The ratios are 0.89 and 2.5 respectively, meaning a historical transition from guilt-culture in Latin to shame-culture in Italian, suggesting a historical development that is inverse to that suggested by Dodds for ancient to classical Greek. [I hope their Latin corpus doesn't include stuff from Catholics...]

Joe Henrich presentation: https://www.youtube.com/watch?v=f-unD4ZzWB4

relevant video:
Johnny Cash - God's Gonna Cut You Down: https://www.youtube.com/watch?v=eJlN9jdQFSc

https://en.wikipedia.org/wiki/Guilt_society
https://en.wikipedia.org/wiki/Shame_society
https://en.wikipedia.org/wiki/Guilt-Shame-Fear_spectrum_of_cultures
this says Dems more guilt-driven but Peter Frost says opposite here (and matches my perception of the contemporary breakdown both including minorities and focusing only on whites): https://pinboard.in/u:nhaliday/b:9b75881f6861
http://honorshame.com/global-map-of-culture-types/

this is an amazing paper:
The Origins of WEIRD Psychology: https://psyarxiv.com/d6qhu/
Recent research not only confirms the existence of substantial psychological variation around the globe but also highlights the peculiarity of populations that are Western, Educated, Industrialized, Rich and Democratic (WEIRD). We propose that much of this variation arose as people psychologically adapted to differing kin-based institutions—the set of social norms governing descent, marriage, residence and related domains. We further propose that part of the variation in these institutions arose historically from the Catholic Church’s marriage and family policies, which contributed to the dissolution of Europe’s traditional kin-based institutions, leading eventually to the predominance of nuclear families and impersonal institutions. By combining data on 20 psychological outcomes with historical measures of both kinship and Church exposure, we find support for these ideas in a comprehensive array of analyses across countries, among European regions and between individuals with … [more]
study  economics  broad-econ  pseudoE  roots  anthropology  sociology  culture  cultural-dynamics  society  civilization  religion  theos  kinship  individualism-collectivism  universalism-particularism  europe  the-great-west-whale  orient  integrity  morality  ethics  trust  institutions  things  pdf  piracy  social-norms  cooperate-defect  patho-altruism  race  world  developing-world  pop-diff  n-factor  ethnography  ethnocentrism  🎩  🌞  s:*  us-them  occident  political-econ  altruism  self-interest  books  todo  multi  old-anglo  big-peeps  poetry  aristos  homo-hetero  north-weingast-like  maps  data  modernity  tumblr  social  ratty  gender  history  iron-age  mediterranean  the-classics  christianity  speculation  law  public-goodish  tribalism  urban  china  asia  sinosphere  decision-making  polanyi-marx  microfoundations  open-closed  alien-character  axelrod  eden  growth-econ  social-capital  values  phalanges  usa  within-group  group-level  regional-scatter-plots  comparison  psychology  social-psych  behavioral-eco 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
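
The 10^30 multiplier follows from the Landauer bound: erasing one bit costs at least k_B·T·ln 2, so computation per joule scales as 1/T. A minimal sketch (the far-future temperature below is an illustrative value chosen to reproduce the paper's headline factor, not a figure from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(T):
    # Landauer limit: erasing one bit dissipates at least k_B * T * ln 2
    return 1.0 / (K_B * T * math.log(2))

T_now = 3.0        # roughly the CMB temperature today, K
T_future = 3e-30   # illustrative far-future background temperature, K

multiplier = bits_per_joule(T_future) / bits_per_joule(T_now)
print(f"{multiplier:.1e}")  # 1.0e+30
```

The ratio is just T_now / T_future: computation-per-joule gains track the temperature drop linearly.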

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each other, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better.

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Estimating the number of unseen variants in the human genome
To find all common variants (frequency at least 1%) the number of individuals that need to be sequenced is small (∼350) and does not differ much among the different populations; our data show that, subject to sequence accuracy, the 1000 Genomes Project is likely to find most of these common variants and a high proportion of the rarer ones (frequency between 0.1 and 1%). The data reveal a rule of diminishing returns: a small number of individuals (∼150) is sufficient to identify 80% of variants with a frequency of at least 0.1%, while a much larger number (> 3,000 individuals) is necessary to find all of those variants.
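
The diminishing-returns pattern already shows up in a simple binomial sampling model (illustrative, not the paper's actual estimator): a variant of population frequency f is observed at least once in n diploid individuals with probability 1 − (1 − f)^(2n).

```python
# Simple binomial sampling model (illustrative, not the paper's method):
# P(variant of frequency f is seen at least once among 2n sampled chromosomes)
def p_seen(f, n):
    return 1 - (1 - f) ** (2 * n)

# Common variants (f >= 1%) are nearly all caught by a few hundred people...
print(round(p_seen(0.01, 350), 3))   # 0.999
# ...while rare variants (f = 0.1%) need far larger samples.
print(round(p_seen(0.001, 150), 3))  # 0.259
print(round(p_seen(0.001, 3000), 3)) # 0.998
```

Capturing the last few percent of rare variants costs an order of magnitude more sequencing than capturing the common ones.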

A map of human genome variation from population-scale sequencing: http://www.internationalgenome.org/sites/1000genomes.org/files/docs/nature09534.pdf

Scientists using data from the 1000 Genomes Project, which sequenced one thousand individuals from 26 human populations, found that "a typical [individual] genome differs from the reference human genome at 4.1 million to 5.0 million sites … affecting 20 million bases of sequence."[11] Nearly all (>99.9%) of these sites are small differences, either single nucleotide polymorphisms or brief insertion-deletions in the genetic sequence, but structural variations account for a greater number of base-pairs than the SNPs and indels.[11]

Human genetic variation: https://en.wikipedia.org/wiki/Human_genetic_variation

Singleton Variants Dominate the Genetic Architecture of Human Gene Expression: https://www.biorxiv.org/content/early/2017/12/15/219238
study  sapiens  genetics  genomics  population-genetics  bioinformatics  data  prediction  cost-benefit  scale  scaling-up  org:nat  QTL  methodology  multi  pdf  curvature  convexity-curvature  nonlinearity  measurement  magnitude  🌞  distribution  missing-heritability  pop-structure  genetic-load  mutation  wiki  reference  article  structure  bio  preprint  biodet  variance-components  nibble  chart 
may 2017 by nhaliday
Backwardness | West Hunter
Back around the time I was born, anthropologists sometimes talked about some cultures being more advanced than others. This was before they decided that all cultures are equal, except that some are more equal than others.

...

I’ve been trying to estimate the gap between Eurasian and Amerindian civilization. The Conquistadors were, in a sense, invaders from the future: but just how far in the future? What point in the history of the Middle East is most similar to the state of the Amerindian civilizations of 1500 AD ?

I would argue that the Amerindian civilizations were less advanced than the Akkadian Empire, circa 2300 BC. The Mayans had writing, but were latecomers in metallurgy. The Inca had tin and arsenical bronze, but didn’t have written records. The Akkadians had both – as well as draft animals and the wheel. You can maybe push the time as far back as 2600 BC, since Sumerian cuneiform was in pretty full swing by then. So the Amerindians were around four thousand years behind.

https://westhunt.wordpress.com/2012/02/10/backwardness/#comment-1520
Excepting the use of iron, sub-Saharan Africa, excepting Ethiopia, was well behind the most advanced Amerindian civilizations circa 1492. I am right now resisting the temptation to get into a hammer-and-tongs discussion of Isandlwana, Rorke’s Drift, Blood River, etc. – and we would all be better off if I continued to do so.

https://en.wikipedia.org/wiki/Battle_of_Blood_River
The Battle of Blood River (Afrikaans: Slag van Bloedrivier; Zulu: iMpi yaseNcome) is the name given for the battle fought between _470 Voortrekkers_ ("Pioneers"), led by Andries Pretorius, and _an estimated 80,000 Zulu attackers_ on the bank of the Ncome River on 16 December 1838, in what is today KwaZulu-Natal, South Africa. Casualties amounted to over 3,000 of king Dingane's soldiers dead, including two Zulu princes competing with Prince Mpande for the Zulu throne. _Three Pioneers commando members were lightly wounded_, including Pretorius himself.

https://en.wikipedia.org/wiki/Battle_of_Rorke%27s_Drift
https://en.wikipedia.org/wiki/Battle_of_Isandlwana

https://twitter.com/tcjfs/status/895719621218541568
In the morning of Tuesday, June 15, while we sat at Dr. Adams's, we talked of a printed letter from the Reverend Herbert Croft, to a young gentleman who had been his pupil, in which he advised him to read to the end of whatever books he should begin to read. JOHNSON. 'This is surely a strange advice; you may as well resolve that whatever men you happen to get acquainted with, you are to keep to them for life. A book may be good for nothing; or there may be only one thing in it worth knowing; are we to read it all through? These Voyages, (pointing to the three large volumes of Voyages to the South Sea, which were just come out) WHO will read them through? A man had better work his way before the mast, than read them through; they will be eaten by rats and mice, before they are read through. There can be little entertainment in such books; one set of Savages is like another.' BOSWELL. 'I do not think the people of Otaheite can be reckoned Savages.' JOHNSON. 'Don't cant in defence of Savages.' BOSWELL. 'They have the art of navigation.' JOHNSON. 'A dog or a cat can swim.' BOSWELL. 'They carve very ingeniously.' JOHNSON. 'A cat can scratch, and a child with a nail can scratch.' I perceived this was none of the mollia tempora fandi; so desisted.

Déjà Vu all over again: America and Europe: https://westhunt.wordpress.com/2014/11/12/deja-vu-all-over-again-america-and-europe/
In terms of social organization and technology, it seems to me that Mesolithic Europeans (around 10,000 years ago) were like archaic Amerindians before agriculture. Many Amerindians on the west coast were still like that when Europeans arrived – foragers with bows and dugout canoes.

On the other hand, the farmers of Old Europe were in important ways a lot like English settlers: the pioneers planted wheat, raised pigs and cows and sheep, hunted deer, expanded and pushed aside the previous peoples, without much intermarriage. Sure, Anglo pioneers were literate, had guns and iron, were part of a state, all of which gave them a much bigger edge over the Amerindians than Old Europe ever had over the Mesolithic hunter-gatherers and made the replacement about ten times faster – but in some ways it was similar. Some of this similarity was the product of historical accidents: the local Amerindians were thin on the ground, like Europe’s Mesolithic hunters – but not so much because farming hadn’t arrived (it had in most of the United States), more because of an ongoing population crash from European diseases.

On the gripping hand, the Indo-Europeans seem to have been something like the Plains Indians: sure, they raised cattle rather than living off abundant wild buffalo, but they too were transformed into troublemakers by the advent of the horse. Both still did a bit of farming. They were also alike in that neither of them really knew what they were doing: neither were the perfected product of thousands of years of horse nomadry. The Indo-Europeans were the first raiders on horseback, and the Plains Indians had only been at it for a century, without any opportunity to learn state-of-the-art tricks from Eurasian horse nomads.

The biggest difference is that the Indo-Europeans won, while the Plains Indians were corralled into crappy reservations.

Quantitative historical analysis uncovers a single dimension of complexity that structures global variation in human social organization: http://www.pnas.org/content/early/2017/12/20/1708800115.full
Do human societies from around the world exhibit similarities in the way that they are structured, and show commonalities in the ways that they have evolved? These are long-standing questions that have proven difficult to answer. To test between competing hypotheses, we constructed a massive repository of historical and archaeological information known as “Seshat: Global History Databank.” We systematically coded data on 414 societies from 30 regions around the world spanning the last 10,000 years. We were able to capture information on 51 variables reflecting nine characteristics of human societies, such as social scale, economy, features of governance, and information systems. Our analyses revealed that these different characteristics show strong relationships with each other and that a single principal component captures around three-quarters of the observed variation. Furthermore, we found that different characteristics of social complexity are highly predictable across different world regions. These results suggest that key aspects of social organization are functionally related and do indeed coevolve in predictable ways. Our findings highlight the power of the sciences and humanities working together to rigorously test hypotheses about general rules that may have shaped human history.

Fig. 2.

The General Social Complexity Factor Is A Thing: https://www.gnxp.com/WordPress/2017/12/21/the-general-social-complexity-factor-is-a-thing/
west-hunter  scitariat  discussion  civilization  westminster  egalitarianism-hierarchy  history  early-modern  age-of-discovery  comparison  europe  usa  latin-america  farmers-and-foragers  technology  the-great-west-whale  divergence  conquest-empire  modernity  ranking  aphorism  rant  ideas  innovation  multi  africa  poast  war  track-record  death  nihil  nietzschean  lmao  wiki  attaq  data  twitter  social  commentary  gnon  unaffiliated  right-wing  inequality  quotes  big-peeps  old-anglo  aristos  literature  expansionism  world  genetics  genomics  gene-flow  gavisti  roots  analogy  absolute-relative  studying  sapiens  anthropology  archaeology  truth  primitivism  evolution  study  org:nat  turchin  broad-econ  deep-materialism  social-structure  sociology  cultural-dynamics  variance-components  exploratory  matrix-factorization  things  🌞  structure  scale  dimensionality  degrees-of-freedom  infrastructure  leviathan  polisci  religion  philosophy  government  institutions  money  monetary-fiscal  population  density  urban-rural  values  phalanges  cultu 
may 2017 by nhaliday
Typos | West Hunter
In a simple model, a given mutant has an equilibrium frequency μ/s, where μ is the mutation rate from good to bad alleles and s is the size of the selective disadvantage. To estimate the total impact of mutation at that locus, you multiply the frequency by the expected harm, s: which means that the fitness decrease (from effects at that locus) is just μ, the mutation rate. If we assume that these fitness effects are multiplicative, the total fitness decrease (also called ‘mutational load’) is approximately 1 – exp(-U), where U = Σ2μ is the total number of new harmful mutations per diploid individual.
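
A minimal numerical sketch of this mutation–selection balance (the per-locus parameters and the value of U are illustrative, not estimates):

```python
import math

# Mutation-selection balance at one locus: equilibrium frequency of the
# bad allele is mu/s, so its expected fitness cost is (mu/s) * s = mu.
mu = 1e-5  # illustrative per-locus mutation rate, good -> bad
s = 0.01   # illustrative selective disadvantage of the bad allele

eq_freq = mu / s              # equilibrium frequency, ~0.001
per_locus_load = eq_freq * s  # = mu, independent of s

# Genome-wide: with U = total new harmful mutations per diploid individual
# per generation, multiplicative effects give load ~ 1 - exp(-U).
U = 2.0  # illustrative value of U = sum of 2*mu over loci
load = 1 - math.exp(-U)

print(eq_freq, per_locus_load, load)
```

Note the punchline of the model: the load contributed by each locus depends only on μ, not on how harmful the allele is.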

https://westhunt.wordpress.com/2012/10/17/more-to-go-wrong/

https://westhunt.wordpress.com/2012/07/13/sanctuary/
interesting, suggestive comment on Africa:
https://westhunt.wordpress.com/2012/07/13/sanctuary/#comment-3671
https://westhunt.wordpress.com/2012/07/14/too-darn-hot/
http://infoproc.blogspot.com/2012/07/rare-variants-and-human-genetic.html
https://westhunt.wordpress.com/2012/07/18/changes-in-attitudes/
https://westhunt.wordpress.com/2012/08/24/men-and-macaques/
I have reason to believe that few people understand genetic load very well, probably for self-referential reasons, but better explanations are possible.

One key point is that the amount of neutral variation is determined by the long-term mutational rate and population history, while the amount of deleterious variation [genetic load] is set by the selective pressures and the prevailing mutation rate over a much shorter time scale. For example, if you consider the class of mutations that reduce fitness by 1%, what matters is the past few thousand years, not the past few tens or hundreds of thousands of years.

...

So, assuming that African populations have more neutral variation than non-African populations (which is well-established), what do we expect to see when we compare the levels of probably-damaging mutations in those two populations? If the Africans and non-Africans had experienced essentially similar mutation rates and selective pressures over the past few thousand years, we would expect to see the same levels of probably-damaging mutations. Bottlenecks that happened at the last glacial maximum or in the expansion out of Africa are irrelevant – too long ago to matter.

But we don’t. The amount of rare synonymous stuff is about 22% higher in Africans. The amount of rare nonsynonymous stuff (usually at least slightly deleterious) is 20.6% higher. The number of rare variants predicted to be more deleterious is ~21.6% higher. The amount of stuff predicted to be even more deleterious is ~27% higher. The number of harmful looking loss-of-function mutations (yet more deleterious) is 25% higher.

It looks as if the excess grows as the severity of the mutations increases. There is a scenario in which this is possible: the mutation rate in Africa has increased recently. Not yesterday, but, say, over the past few thousand years.

...

What is the most likely cause of such variations in the mutation rate? Right now, I’d say differences in average paternal age. We know that modest differences (~5 years) in average paternal age can easily generate ~20% differences in the mutation rate. Such between-population differences in mutation rates seem quite plausible, particularly since the Neolithic.
https://westhunt.wordpress.com/2016/04/10/bugs-versus-drift/
more recent: https://westhunt.wordpress.com/2017/06/06/happy-families-are-all-alike-every-unhappy-family-is-unhappy-in-its-own-way/#comment-92491
Probably not, but the question is complex: depends on the shape of the deleterious mutational spectrum [which we don’t know], ancient and recent demography, paternal age, and the extent of truncation selection in the population.
west-hunter  scitariat  discussion  bio  sapiens  biodet  evolution  mutation  genetics  genetic-load  population-genetics  nibble  stylized-facts  methodology  models  equilibrium  iq  neuro  neuro-nitgrit  epidemiology  selection  malthus  temperature  enhancement  CRISPR  genomics  behavioral-gen  multi  poast  africa  roots  pop-diff  ideas  gedanken  paternal-age  🌞  environment  speculation  gene-drift  longevity  immune  disease  parasites-microbiome  scifi-fantasy  europe  asia  race  migration  hsu  study  summary  commentary  shift  the-great-west-whale  nordic  intelligence  eden  long-short-run  debate  hmm  idk  explanans  comparison  structure  occident  mediterranean  geography  within-group  correlation  direction  volo-avolo  demographics  age-generation  measurement  data  applicability-prereqs  aging 
may 2017 by nhaliday
A Unified Theory of Randomness | Quanta Magazine
Beyond the one-dimensional random walk, there are many other kinds of random shapes. There are varieties of random paths, random two-dimensional surfaces, random growth models that approximate, for example, the way a lichen spreads on a rock. All of these shapes emerge naturally in the physical world, yet until recently they’ve existed beyond the boundaries of rigorous mathematical thought. Given a large collection of random paths or random two-dimensional shapes, mathematicians would have been at a loss to say much about what these random objects shared in common.

Yet in work over the past few years, Sheffield and his frequent collaborator, Jason Miller, a professor at the University of Cambridge, have shown that these random shapes can be categorized into various classes, that these classes have distinct properties of their own, and that some kinds of random objects have surprisingly clear connections with other kinds of random objects. Their work forms the beginning of a unified theory of geometric randomness.
news  org:mag  org:sci  math  research  probability  profile  structure  geometry  random  popsci  nibble  emergent  org:inst 
february 2017 by nhaliday
The language of geometry: Fast comprehension of geometrical primitives and rules in human adults and preschoolers
The child’s acquisition of language has been suggested to rely on the ability to build hierarchically structured representations from sequential inputs. Does a similar mechanism also underlie the acquisition of geometrical rules? Here, we introduce a learning situation in which human participants had to grasp simple spatial sequences and try to predict the next location. Sequences were generated according to a “geometrical language” endowed with simple primitives of symmetries and rotations, and combinatorial rules. Analyses of error rates of various populations—a group of French educated adults, two groups of 5-year-old French children, and a rare group of teenagers and adults from an Amazonian population, the Mundurukus, who have limited access to formal schooling and a reduced geometrical lexicon—revealed that subjects’ learning indeed rests on internal language-like representations. A theoretical model, based on minimum description length, proved to fit well participants’ behavior, suggesting that human subjects “compress” spatial sequences into a minimal internal rule or program.
study  psychology  cog-psych  visuo  spatial  structure  neurons  occam  computation  models  eden  intelligence  neuro  learning  language  psych-architecture  🌞  retrofit 
february 2017 by nhaliday
probability - Variance of maximum of Gaussian random variables - Cross Validated
In full generality it is rather hard to find the right order of magnitude of the variance of a Gaussian supremum since the tools from concentration theory are always suboptimal for the maximum function.

order ~ 1/log n
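
A quick simulation (illustrative; sample sizes chosen for speed) showing the variance of the maximum of n i.i.d. standard Gaussians shrinking like 1/log n:

```python
import math
import random

random.seed(0)

def var_of_max(n, trials=4000):
    # empirical variance of the max of n iid standard normals
    maxima = [max(random.gauss(0.0, 1.0) for _ in range(n))
              for _ in range(trials)]
    mean = sum(maxima) / trials
    return sum((x - mean) ** 2 for x in maxima) / trials

results = {}
for n in (10, 100, 1000):
    results[n] = var_of_max(n)
    print(n, round(results[n], 3), round(results[n] * math.log(n), 3))
# Var(max) shrinks with n, while Var(max) * log n stays roughly constant,
# consistent with the Gumbel-limit heuristic Var ~ (pi^2/6) / (2 log n).
```
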
q-n-a  overflow  stats  probability  acm  orders  tails  bias-variance  moments  concentration-of-measure  magnitude  tidbits  distribution  yoga  structure  extrema  nibble 
february 2017 by nhaliday
Mikhail Leonidovich Gromov - Wikipedia
Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties.

Gromov is also interested in mathematical biology,[11] the structure of the brain and the thinking process, and the way scientific ideas evolve.[8]
math  people  giants  russia  differential  geometry  topology  math.GR  wiki  structure  meta:math  meta:science  interdisciplinary  bio  neuro  magnitude  limits  science  nibble  coarse-fine  wild-ideas  convergence  info-dynamics  ideas 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f : S^n → S^n. Then the question that interests us is whether the x_i's can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
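The decoupling test described here can be brute-forced for toy systems. This is only the partition-existence check, not a computation of Φ itself, and the 4-bit update function below is a made-up example: it is built from two independent halves, so a balanced partition with no cross-dependence exists and the system shows no "global integration of information".

```python
from itertools import combinations, product

n = 4

# Toy update function on n bits: two independent 2-bit subsystems.
# Output bits (0,1) depend only on input bits (0,1); likewise (2,3).
def f(x):
    return ((x[0] + x[1]) % 2, x[0] ^ 1,
            (x[2] + x[3]) % 2, x[2] ^ 1)

def depends_on(i, j):
    # Does output bit i of f ever change when input bit j is flipped?
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[j] ^= 1
        if f(x)[i] != f(tuple(y))[i]:
            return True
    return False

def has_clean_partition():
    # Look for a balanced split A|B with no cross-dependence either way.
    for A in combinations(range(n), n // 2):
        B = tuple(j for j in range(n) if j not in A)
        cross = any(depends_on(i, j) for i in A for j in B) or \
                any(depends_on(i, j) for i in B for j in A)
        if not cross:
            return A, B
    return None

print(has_clean_partition())  # → ((0, 1), (2, 3))
```

The exhaustive search over inputs and partitions is exponential in n, so this only works for tiny systems; it is meant to make the "partition with no cross-dependence" condition concrete, not to be practical.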
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition 
january 2017 by nhaliday
"Design Patterns" Aren't
The "design patterns" movement in software claims to have been inspired by the works of architect Christopher Alexander. But an examination of Alexander's books reveals that he was actually talking about something much more interesting.

patterns in Alexander sense = vocabulary not dogma
thinking  architecture  design  programming  engineering  carcinisation  models  slides  presentation  techtariat  structure  conceptual-vocab  systematic-ad-hoc 
november 2016 by nhaliday
Why Information Grows – Paul Romer
thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/
books  summary  review  economics  growth-econ  interdisciplinary  hmm  physics  thinking  feynman  tradeoffs  paul-romer  econotariat  🎩  🎓  scholar  aphorism  lens  signal-noise  cartoons  skeleton  s:**  giants  electromag  mutation  genetics  genomics  bits  nibble  stories  models  metameta  metabuch  problem-solving  composition-decomposition  structure  abstraction  zooming  examples  knowledge  human-capital  behavioral-econ  network-structure  info-econ  communication  learning  information-theory  applications  volo-avolo  map-territory  externalities  duplication  spreading  property-rights  lattice  multi  government  polisci  policy  counterfactual  insight  paradox  parallax  reduction  empirical  detail-architecture  methodology  crux  visual-understanding  theory-practice  matching  analytical-holistic  branches  complement-substitute  local-global  internet  technology  cost-benefit  investing  micro  signaling  limits  public-goodish  interpretation 
september 2016 by nhaliday
Information Processing: High V, Low M
http://www.unz.com/article/iq-or-the-mathverbal-split/
Commenter Gwen on the blog Infoproc hints at a possible neurological basis for this phenomenon, stating that “one bit of speculation I have: the neuroimaging studies seem to consistently point towards efficiency of global connectivity rather than efficiency or other traits of individual regions; you could interpret this as a general factor across a wide battery of tasks because they are all hindered to a greater or lesser degree simply by difficulties in coordination while performing the task; so perhaps what causes Spearman is global connectivity becoming around as efficient as possible and no longer a bottleneck for most tasks, and instead individual brain regions start dominating additional performance improvements. So up to a certain level of global communication efficiency, there is a general intelligence factor but then specific abilities like spatial vs verbal come apart and cease to have common bottlenecks and brain tilts manifest themselves much more clearly.” [10] This certainly seems plausible enough. Let’s hope that those far smarter than ourselves will slowly get to the bottom of these matters over the coming decades.

...

My main prediction here then is that based on HBD, I don’t expect China or East Asia to rival the Anglosphere in the life sciences and medicine or other verbally loaded scientific fields. Perhaps China can mirror Japan in developing pockets of strengths in various areas of the life sciences. Given its significantly larger population, this might indeed translate into non-trivial high-end output in the fields of biology and biomedicine. The core strengths of East Asian countries though, as science in the region matures, will lie primarily in quantitative areas such as physics or chemistry, and this is where I predict the region will shine in the coming years. China’s recent forays into quantum cryptography provide one such example. [40]

...

In fact, as anyone who’s been paying attention has noticed, modern day tech is essentially a California and East Asian affair, with the former focused on software and the latter more so on hardware. American companies dominate in the realm of internet infrastructure and platforms, while East Asia is predominant in consumer electronics hardware, although as noted, China does have its own versions of general purpose tech giants in companies like Baidu, Alibaba, and Tencent. By contrast, Europe today has relatively few well known tech companies apart from some successful apps such as Spotify or Skype and entities such as Nokia or Ericsson. [24] It used to have more established technology companies back in the day, but the onslaught of competition from the US and East Asia put a huge dent in Europe’s technology industry.

...

Although many will point to institutional factors such as China or the United States enjoying large, unfragmented markets to explain the decline of European tech, I actually want to offer a more HBD oriented explanation not only for why Europe seems to lag in technology and engineering relative to America and East Asia, but also for why tech in the United States is skewed towards software, while tech in East Asia is skewed towards hardware. I believe that the various phenomena described above can all be explained by one common underlying mechanism, namely the math/verbal split. Simply put, if you’re really good at math, you gravitate towards hardware. If your skills are more verbally inclined, you gravitate towards software. In general, your chances of working in engineering and technology are greatly bolstered by being spatially and quantitatively adept.

...

If my assertions here are correct, I predict that over the coming decades, we’ll increasingly see different groups of people specialize in the areas where they’re most proficient. This means that East Asians and East Asian societies will be characterized by a skew towards quantitative STEM fields such as physics, chemistry, and engineering and towards hardware and high-tech manufacturing, while Western societies will be characterized by a skew towards the biological sciences and medicine, social sciences, humanities, and software and services. [41] Likewise, India also appears to be a country whose strengths lie more in software and services as opposed to hardware and manufacturing. My fundamental thesis is that all of this is ultimately a reflection of underlying HBD, in particular the math/verbal split. I believe this is the crucial insight lacking in the analyses others offer.

http://www.unz.com/article/iq-or-the-mathverbal-split/#comment-2230751

Sailer In TakiMag: What Does the Deep History of China and India Tell Us About Their Futures?: http://takimag.com/article/a_pair_of_giants_steve_sailer/print#axzz5BHqRM5nD
In an age of postmodern postnationalism that worships diversity, China is old-fashioned. It’s homogeneous, nationalist, and modernist. China seems to have utilitarian 1950s values.

For example, Chinese higher education isn’t yet competitive on the world stage, but China appears to be doing a decent job of educating the masses in the basics. High Chinese scores on the international PISA test for 15-year-olds shouldn’t be taken at face value, but it’s likely that China is approaching first-world norms in providing equality of opportunity through adequate schooling.

Due to censorship and language barriers, Chinese individuals aren’t well represented in English-language cyberspace. Yet in real life, the Chinese build things, such as bridges that don’t fall down, and they make stuff, employing tens of millions of proletarians in their factories.

The Chinese seem, on average, to be good with their hands, which is something that often makes American intellectuals vaguely uncomfortable. But at least the Chinese proles are over there merely manufacturing things cheaply, so American thinkers don’t resent them as much as they do American tradesmen.

Much of the class hatred in America stems from the suspicions of the intelligentsia that plumbers and mechanics are using their voodoo cognitive ability of staring at 3-D physical objects and somehow understanding why they are broken to overcharge them for repairs. Thus it’s only fair, America’s white-collar managers assume, that they export factory jobs to lower-paid China so that they can afford to throw manufactured junk away when it breaks and buy new junk rather than have to subject themselves to the humiliation of admitting to educationally inferior American repairmen that they don’t understand what is wrong with their own gizmos.

...

This Chinese lack of diversity is out of style, and yet it seems to make it easier for the Chinese to get things done.

In contrast, India appears more congenial to current-year thinkers. India seems postmodern and postnationalist, although it might be more accurately called premodern and prenationalist.

...

Another feature that makes our commentariat comfortable with India is that Indians don’t seem to be all that mechanically facile, perhaps especially not the priestly Brahmin caste, with whom Western intellectuals primarily interact.

And the Indians tend to be more verbally agile than the Chinese and more adept at the kind of high-level abstract thinking required by modern computer science, law, and soft major academia. Thousands of years of Brahmin speculations didn’t do much for India’s prosperity, but somehow have prepared Indians to make fortunes in 21st-century America.

http://www.sciencedirect.com/science/article/pii/S0160289616300757
- Study used two moderately large American community samples.
- Verbal and not nonverbal ability drives relationship between ability and ideology.
- Ideology and ability appear more related when ability assessed professionally.
- Self-administered or nonverbal ability measures will underestimate this relationship.

https://www.unz.com/gnxp/the-universal-law-of-interpersonal-dynamics/
Every once in a while I realize something with my conscious mind that I’ve understood implicitly for a long time. Such a thing happened to me yesterday, while reading a post on Stalin, by Amritas. It is this:

S = P + E

Social Status equals Political Capital plus Economic Capital

...

Here’s an example of its explanatory power: If we assume that a major human drive is to maximize S, we can predict that people with high P will attempt to minimize the value of E (since S-maximization is a zero-sum game). And so we see. Throughout history there has been an attempt to ennoble P while stigmatizing E. Conversely, throughout history, people with high E use it to acquire P. Thus, in today’s society we see that socially adept people, who have inborn P skills, tend to favor socialism or big government – where their skills are most valuable, while economically productive people are often frustrated by the fact that their concrete contribution to society is deplored.

Now, you might ask yourself why the reverse isn’t true, why people with high P don’t use it to acquire E, while people with high E don’t attempt to stigmatize P? Well, I think that is true. But, while the equation is mathematically symmetrical, the nature of P-talent and E-talent is not. P-talent can be used to acquire E from the E-adept, but the E-adept are no match for the P-adept in the attempt to stigmatize P. Furthermore, P is endogenous to the system, while E is exogenous. In other words, the P-adept have the ability to manipulate the system itself to make P-talent more valuable in acquiring E, while the E-adept have no ability to manipulate the external environment to make E-talent more valuable in acquiring P.

...

1. All institutions will tend to be dominated by the P-adept
2. All institutions that have no in-built exogenous criteria for measuring their members’ status will inevitably be dominated by the P-adept
3. Universities will inevitably be dominated by the P-adept
4. Within a university, humanities and social sciences will be more dominated by the P-adept than … [more]
iq  science  culture  critique  lol  hsu  pre-2013  scitariat  rationality  epistemic  error  bounded-cognition  descriptive  crooked  realness  being-right  info-dynamics  truth  language  intelligence  kumbaya-kult  quantitative-qualitative  multi  study  psychology  cog-psych  social-psych  ideology  politics  elite  correlation  roots  signaling  psychometrics  status  capital  human-capital  things  phalanges  chart  metabuch  institutions  higher-ed  academia  class-warfare  symmetry  coalitions  strategy  class  s:*  c:**  communism  inequality  socs-and-mops  twitter  social  commentary  gnon  unaffiliated  zero-positive-sum  rot  gnxp  adversarial  🎩  stylized-facts  gender  gender-diff  cooperate-defect  ratty  yvain  ssc  tech  sv  identity-politics  culture-war  reddit  subculture  internet  🐸  discrimination  trump  systematic-ad-hoc  urban  britain  brexit  populism  diversity  literature  fiction  media  military  anomie  essay  rhetoric  martial  MENA  history  mostly-modern  stories  government  polisci  org:popup  right-wing  propaganda  counter-r 
september 2016 by nhaliday
Overcoming Bias : A Future Of Pipes
The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.

We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.

Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.

Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?

Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.
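The scaling claims here are simple arithmetic, sketched below in arbitrary units: naive surface-limited cooling per unit volume falls like 1/s with city scale s, while the fractal-pipe overhead grows only like the log of total volume, i.e. by a constant increment per doubling of scale.

```python
import math

# Naive cooling is limited by surface area: surface ~ s^2, volume ~ s^3,
# so cooling capacity per unit volume falls like 1/s.
# Fractal pipe overhead ~ log2(volume): a constant step per doubling.
for s in [1, 2, 4, 8, 16]:
    surface, volume = s**2, s**3
    naive = surface / volume                    # ~ 1/s
    fractal_overhead = math.log2(volume)        # ~ 3*log2(s)
    print(f"scale={s:>2}  surface/volume={naive:.3f}  "
          f"log-overhead={fractal_overhead:.1f}")
```

Each doubling of scale halves the surface/volume ratio but adds only a fixed 3 units to the log overhead, which is the contrast Hanson's "actually, no" rests on.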
hanson  futurism  prediction  street-fighting  essay  len:short  ratty  computation  hardware  thermo  structure  composition-decomposition  complex-systems  magnitude  analysis  urban-rural  power-law  phys-energy  detail-architecture  efficiency  economics  supply-demand  labor  planning  long-term  physics  temperature  flux-stasis  fluid  measure  technology  frontier  speedometer  career  cost-benefit  identity  stylized-facts  objektbuch  data  trivia  cocktail 
august 2016 by nhaliday
What is up with carbon dioxide and cognition? An offer - Less Wrong Discussion
study: http://ehp.niehs.nih.gov/1104789/
n=22, p-values < .001 generally, no multiple comparisons or anything, right?
chart: http://ehp.niehs.nih.gov/wp-content/uploads/2012/11/ehp.1104789.g002.png
- note it's CO2 not oxygen that's relevant
- some interesting debate in comments about whether you would find similar effects for similar levels of variation in oxygen, implications for high-altitude living, etc.
- CO2 levels can reach quite high levels indoors (~1500 ppm, and even ~7000 ppm in some of Gwern's experiments); this seems to be enough to impact cognition to a significant degree
- outdoor air quality often better than indoor even in urban areas (see other studies)

the solution: houseplants, http://lesswrong.com/lw/nk0/what_is_up_with_carbon_dioxide_and_cognition_an/d956

https://twitter.com/menangahela/status/965167009083379712
https://archive.is/k0I0U
except that environmental instability tends to be harder on more 'complex' adaptations and co2 ppm directly correlates with decreased effectiveness of cognition-enhancing traits via chronic low-grade acidosis
productivity  study  gotchas  workflow  money-for-time  neuro  gwern  embodied  hypochondria  hmm  lesswrong  🤖  spock  nootropics  embodied-cognition  evidence-based  ratty  clever-rats  atmosphere  rat-pack  psychology  cog-psych  🌞  field-study  multi  c:**  2016  human-study  acmtariat  embodied-street-fighting  biodet  objective-measure  decision-making  s:*  embodied-pack  intervention  iq  environmental-effects  branches  unintended-consequences  twitter  social  discussion  backup  gnon  mena4  land  🐸  environment  climate-change  intelligence  structure 
may 2016 by nhaliday
