nhaliday + abstraction   27

Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production and comprehension of speech, respectively, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators, which requires a more general awareness of its environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. RH paintings, in particular, emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, the LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy and the ability to notice emotional nuance expressed facially, vocally and bodily are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence other people seem robotic and suspicious to them.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world, which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that, in any consistent formal system rich enough to express arithmetic, not everything true can be proven to be true within that system. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof that the halting problem is undecidable shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure (LH), when it comes to … [more]
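
The diagonal argument behind Turing's result can be sketched in a few lines of code. This is a standard textbook sketch, not something from the post; `halts` is a hypothetical oracle that the construction shows cannot exist:

```python
# Standard diagonal argument (textbook sketch, not from the post). Assume a
# hypothetical total function halts(prog, arg) that returns True iff prog(arg)
# eventually halts; the construction below shows no such function can exist.

def halts(prog, arg):
    """Hypothetical halting oracle -- cannot actually be implemented."""
    raise NotImplementedError

def diagonal(prog):
    # Do the opposite of whatever the oracle predicts prog does on itself.
    if halts(prog, prog):
        while True:      # predicted to halt -> loop forever
            pass
    return "halted"      # predicted to loop -> halt immediately

# Contradiction: consider diagonal(diagonal).
# - If halts(diagonal, diagonal) is True, then diagonal(diagonal) loops forever.
# - If it is False, then diagonal(diagonal) halts.
# Either way the oracle is wrong, so no mechanical decision procedure exists.
```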
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Moravec's paradox - Wikipedia
Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".[2]

...

One possible explanation of the paradox, offered by Moravec, is based on evolution. All human skills are implemented biologically, using machinery designed by the process of natural selection. In the course of their evolution, natural selection has tended to preserve design improvements and optimizations. The older a skill is, the more time natural selection has had to improve the design. Abstract thought developed only very recently, and consequently, we should not expect its implementation to be particularly efficient.

As Moravec writes:

Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.[3]

A compact way to express this argument would be:

- We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
- The oldest human skills are largely unconscious and so appear to us to be effortless.
- Therefore, we should expect skills that appear effortless to be difficult to reverse-engineer, but skills that require effort may not necessarily be difficult to engineer at all.
concept  wiki  reference  paradox  ai  intelligence  reason  instinct  neuro  psychology  cog-psych  hardness  logic  deep-learning  time  evopsych  evolution  sapiens  the-self  EEA  embodied  embodied-cognition  abstraction  universalism-particularism  gnosis-logos  robotics 
june 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
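
One standard way to see both aspects at once is the textbook leaky integrate-and-fire model (my illustration, not part of the answer): the membrane potential evolves as an analog quantity, but the output is an all-or-nothing spike. A minimal sketch, with illustrative rather than biologically calibrated parameters:

```python
# Toy leaky integrate-and-fire neuron (standard textbook model). The membrane
# potential v is a continuous, "analog" quantity, but the output is an
# all-or-nothing spike. Parameters are illustrative only.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for t, current in enumerate(input_current):
        # Analog part: leaky integration of the input current.
        v += (dt / tau) * (-(v - v_rest) + current)
        # Binary-looking part: spike or no spike, then reset.
        if v >= v_threshold:
            spike_times.append(t)
            v = v_reset
    return spike_times

# A constant suprathreshold drive yields a regular train of identical spikes;
# the graded input is re-expressed as spike timing rather than exact values.
print(simulate_lif([1.5] * 200))
```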
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
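
(The excerpt of that Science paper is not included above.) Purely as a hypothetical illustration of how citation lumpiness might be quantified, and not the measure used in the paper Hanson cites, one crude statistic is the share of all citations captured by the top 1% of papers:

```python
# Hypothetical illustration only -- not the measure used in the Science paper
# Hanson cites. One crude "lumpiness" statistic: the share of all citations
# captured by the top 1% most-cited papers in a field.
import numpy as np

def top_share(citations, top_frac=0.01):
    c = np.sort(np.asarray(citations, dtype=float))[::-1]
    k = max(1, int(len(c) * top_frac))
    return c[:k].sum() / c.sum()

# Heavy-tailed synthetic citation counts for two made-up "fields" drawn from
# the same distribution give comparable top-1% shares -- the kind of
# cross-field constancy the post describes.
rng = np.random.default_rng(1)
field_a = rng.pareto(1.8, size=50_000)
field_b = rng.pareto(1.8, size=50_000)
print(round(top_share(field_a), 2), round(top_share(field_b), 2))
```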

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
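
The parenthetical claim can be illustrated with a toy simulation (my sketch, not Hanson's model): give each simulated person many independently varying module strengths, let each task score average a random subset of modules, and a dominant common factor appears in the task correlations:

```python
# Toy illustration: independent module variation plus tasks that each draw on
# many modules yields positively correlated task scores and a dominant first
# factor ("g"). All numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks, modules_per_task = 2000, 50, 10, 20

module_strength = rng.normal(size=(n_people, n_modules))   # independent variation
task_modules = [rng.choice(n_modules, size=modules_per_task, replace=False)
                for _ in range(n_tasks)]
# Each task score averages the modules it draws on, plus a little task noise.
scores = np.column_stack([
    module_strength[:, idx].mean(axis=1) + 0.1 * rng.normal(size=n_people)
    for idx in task_modules
])

corr = np.corrcoef(scores, rowvar=False)
off_diag = (corr.sum() - n_tasks) / (n_tasks * (n_tasks - 1))
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("mean pairwise task correlation:", round(off_diag, 2))
print("variance share of first factor:", round(eigvals[0] / eigvals.sum(), 2))
```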

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Why I see academic economics moving left | askblog
http://www.arnoldkling.com/blog/on-the-state-of-economics/
http://www.nationalaffairs.com/publications/detail/how-effective-is-economic-theory
I have a long essay on the scientific status of economics in National Affairs. A few excerpts from the conclusion:

In the end, can we really have effective theory in economics? If by effective theory we mean theory that is verifiable and reliable for prediction and control, the answer is likely no. Instead, economics deals in speculative interpretations and must continue to do so.

Young economists who employ pluralistic methods to study problems are admired rather than marginalized, as they were in 1980. But economists who question the wisdom of interventionist economic policies seem headed toward the fringes of the profession.

This is my essay in which I say that academic economics is on the road to sociology.

example...?:
Property Is Only Another Name for Monopoly: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2818494
Hanson's take more positive: http://www.overcomingbias.com/2017/10/for-stability-rents.html

women:
http://www.arnoldkling.com/blog/college-women-and-the-future-of-economics/
http://www.arnoldkling.com/blog/road-to-sociology-watch-2/
http://www.arnoldkling.com/blog/road-to-sociology-watch-3/
econotariat  cracker-econ  commentary  prediction  trends  economics  social-science  ideology  politics  left-wing  regulation  empirical  measurement  methodology  academia  multi  links  news  org:mag  essay  longform  randy-ayndy  sociology  technocracy  realness  hypocrisy  letters  study  property-rights  taxes  civil-liberty  efficiency  arbitrage  alt-inst  proposal  incentives  westminster  lens  truth  info-foraging  ratty  hanson  summary  review  biases  concrete  abstraction  managerial-state  gender  identity-politics  higher-ed 
may 2017 by nhaliday
general topology - What should be the intuition when working with compactness? - Mathematics Stack Exchange
http://math.stackexchange.com/questions/485822/why-is-compactness-so-important

The situation with compactness is sort of like the above. It turns out that finiteness, which you think of as one concept (in the same way that you think of "Foo" as one concept above), is really two concepts: discreteness and compactness. You've never seen these concepts separated before, though. When people say that compactness is like finiteness, they mean that compactness captures part of what it means to be finite in the same way that shortness captures part of what it means to be Foo.

--

As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors.

--

Compactness does for continuous functions what finiteness does for functions in general.

If a set A is finite then every function f:A→R has a max and a min, and every function f:A→R^n is bounded. If A is compact, then every continuous function from A to R has a max and a min and every continuous function from A to R^n is bounded.

If A is finite then every sequence of members of A has a subsequence that is eventually constant, and "eventually constant" is the only kind of convergence you can talk about without talking about a topology on the set. If A is compact, then every sequence of members of A has a convergent subsequence.
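
The parallel can be written out explicitly. These are the standard statements (the sequence version is sequential compactness, which coincides with compactness for metric spaces):

```latex
% Finite vs. compact: the parallel statements side by side.
\begin{align*}
A \text{ finite}
  &\;\Longrightarrow\; \text{every } f\colon A \to \mathbb{R} \text{ attains a max and a min};\\
A \text{ compact}
  &\;\Longrightarrow\; \text{every \emph{continuous} } f\colon A \to \mathbb{R} \text{ attains a max and a min};\\[4pt]
A \text{ finite}
  &\;\Longrightarrow\; \text{every sequence in } A \text{ has an eventually constant subsequence};\\
A \text{ compact metric}
  &\;\Longrightarrow\; \text{every sequence in } A \text{ has a convergent subsequence}.
\end{align*}
```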
q-n-a  overflow  math  topology  math.GN  concept  finiteness  atoms  intuition  oly  mathtariat  multi  discrete  gowers  motivation  synthesis  hi-order-bits  soft-question  limits  things  nibble  definition  convergence  abstraction 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f: S^n → S^n. Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
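
The partition question can be made concrete with a crude brute-force sketch (my simplification, not Aaronson's or Tononi's definition of Φ): it checks only for exact separability of a boolean update function into two halves, ignoring the "don't depend very much" quantification that the real Φ measures:

```python
# Crude sketch of the partition question: for a boolean update function f on
# n bits, does some roughly balanced split (A, B) exist such that A's next
# state never depends on B and vice versa? The real Phi instead quantifies
# the *degree* of dependence; this toy checks exact separability only.
from itertools import combinations, product

def independent(f, states, part):
    # True if the next state of the coordinates in `part` is determined by
    # the current values of those coordinates alone.
    seen = {}
    for x in states:
        key = tuple(x[i] for i in part)
        out = tuple(f(x)[i] for i in part)
        if seen.setdefault(key, out) != out:
            return False
    return True

def separable(f, n):
    states = list(product([0, 1], repeat=n))
    for A in combinations(range(n), n // 2):        # roughly balanced splits
        B = tuple(i for i in range(n) if i not in A)
        if independent(f, states, A) and independent(f, states, B):
            return A, B
    return None

# Two decoupled 2-bit subsystems: a separating partition exists.
def f(x):
    return (x[1], x[0] ^ 1, x[3], x[2] ^ 1)

print(separable(f, 4))   # -> ((0, 1), (2, 3))
```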
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition 
january 2017 by nhaliday
SteveStewartWilliams on Twitter: "Effect sizes for a selection of sex differences (.2 = small, .5 = medium, .8 = large) https://t.co/5O5rsjxazJ https://t.co/OHduHnVBqD"
https://archive.is/JlOBS
https://link.springer.com/article/10.1007/s11199-016-0622-1
http://sci-hub.tw/10.1007/s11199-016-0622-1
https://twitter.com/StuartJRitchie/status/776092982491709440
https://archive.is/vuuov
https://public.psych.iastate.edu/zkrizan/pdf/Zell%20Krizan%20Teeter.pdf

https://twitter.com/KajaPerina/status/889962891281133569
https://archive.is/HguAu
Sex diffs. in frequency/severity of neuro and psych conditions well-known; diffs in age of onset less so. (paywall: http://go.nature.com/2vGL2Ea)

https://twitter.com/sentientist/status/459624000369729536
https://archive.is/2JaW4
Sex differences that suggest men are designed for combat (Sell et al. 2012) http://t.co/Dxj99XSjgV

https://twitter.com/DegenRolf/status/897142350031486976
https://archive.is/Fbay6
This text on the tragedy of the male sex drive is one of the best the great Roy Baumeister has written.

plot ordered by effect size:
https://twitter.com/SteveStuWill/status/942932641296269313
https://archive.is/9k13b
Sex Differences in Personality
>0: higher average score for men
<0: higher average score for women

https://twitter.com/WiringTheBrain/status/951531827885420549
https://archive.is/LJRHC
Since a couple people have asked my opinion, this is where I think the science stands on sex differences in psychological traits + what the implications are:
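
The .2/.5/.8 thresholds in the entry title are Cohen's conventional cutoffs for the standardized mean difference d; a minimal computation with made-up numbers, just to pin down the formula:

```python
# Cohen's d: standardized difference between two group means, using the
# pooled standard deviation. The .2/.5/.8 labels are Cohen's conventional
# "small/medium/large" cutoffs. The scores below are made up.
import statistics

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

men = [7.1, 6.8, 7.5, 6.9, 7.3]      # hypothetical scores
women = [6.6, 6.9, 6.4, 6.7, 6.5]
print(round(cohens_d(men, women), 2))
```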
twitter  social  pic  objektbuch  evopsych  gender  data  study  survey  links  scitariat  multi  albion  commentary  personality  things  coordination  collaboration  spatial  iq  comparison  effect-size  stylized-facts  correlation  gender-diff  chart  behavioral-gen  pop-diff  piracy  list  meta-analysis  psychiatry  disease  epidemiology  discussion  evolution  sapiens  roots  EEA  🌞  biodet  peace-violence  fighting  embodied  sex  sexuality  visualization  scale  top-n  creative  psych-architecture  open-closed  abstraction  phalanges  backup  visuo 
december 2016 by nhaliday
Why Information Grows – Paul Romer
thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/
books  summary  review  economics  growth-econ  interdisciplinary  hmm  physics  thinking  feynman  tradeoffs  paul-romer  econotariat  🎩  🎓  scholar  aphorism  lens  signal-noise  cartoons  skeleton  s:**  giants  electromag  mutation  genetics  genomics  bits  nibble  stories  models  metameta  metabuch  problem-solving  composition-decomposition  structure  abstraction  zooming  examples  knowledge  human-capital  behavioral-econ  network-structure  info-econ  communication  learning  information-theory  applications  volo-avolo  map-territory  externalities  duplication  spreading  property-rights  lattice  multi  government  polisci  policy  counterfactual  insight  paradox  parallax  reduction  empirical  detail-architecture  methodology  crux  visual-understanding  theory-practice  matching  analytical-holistic  branches  complement-substitute  local-global  internet  technology  cost-benefit  investing  micro  signaling  limits  public-goodish  interpretation 
september 2016 by nhaliday
Answer to What is it like to understand advanced mathematics? - Quora
thinking like a mathematician

some of the points:
- small # of tricks (echoes Rota)
- web of concepts and modularization (zooming out) allow quick reasoning
- comfort w/ ambiguity and lack of understanding, study high-dimensional objects via projections
- above is essential for research (and often what distinguishes research mathematicians from people who were good at math, or majored in math)
math  reflection  thinking  intuition  expert  synthesis  wormholes  insight  q-n-a  🎓  metabuch  tricks  scholar  problem-solving  aphorism  instinct  heuristic  lens  qra  soft-question  curiosity  meta:math  ground-up  cartoons  analytical-holistic  lifts-projections  hi-order-bits  scholar-pack  nibble  giants  the-trenches  innovation  novelty  zooming  tricki  virtu  humility  metameta  wisdom  abstraction  skeleton  s:***  knowledge  expert-experience 
may 2016 by nhaliday

