nhaliday + computation   61

Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with speech production and comprehension, respectively, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while also keeping an eye out for predators, which requires a more general awareness of the environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus, the Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. The LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, and simple geometric shapes with multiple perspectives shoved together, e.g., cubism. RH painting, by contrast, emphasizes vistas with great depth of field, and thus space and time,[1] emotion, figurative work and scenes related to the life world. In music, the LH likes simple, repetitive rhythms; the RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy, and the ability to notice emotional nuance expressed facially, vocally and bodily, are RH functions, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence others seem robotic and suspicious to them.

Both Oswald Spengler’s The Decline of the West and McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed, and where mathematical and other problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and the emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expression is an intuitive RH process rather than an explicit one.

...

The RH is very much the center of lived experience, of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary, the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split-brain patients, where the LH and the RH are surgically divided (this is sometimes done in the case of epileptic patients), one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other for another. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and the light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once something is understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once the piece is mastered, it becomes an LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, at which point it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be simply to reproduce the world, which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context, which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal, and he once remarked of metaphors, an RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how, or even if, consciousness can emerge from matter. We do not know the nature of the roughly 96% of the mass-energy of the universe that is dark matter and dark energy. Clearly all these things exist. They can provide the subject matter of theories, but they continue to exist when theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s incompleteness theorem shows that not everything true can be proven to be true: any consistent formal system rich enough for arithmetic contains true statements it cannot prove. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no single consistent set of axioms from which all other truths can be derived.

Alan Turing’s proof of the undecidability of the halting problem shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure (LH), when it comes to … [more]
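The diagonal argument behind Turing’s result can be sketched in a few lines of Python (a toy illustration, not the formal proof): any candidate halting oracle is refuted by a program built to contradict the oracle on its own input.

```python
def make_contrarian(halts):
    """Given a claimed halting oracle halts(prog, arg), build the
    program that refutes it on its own input."""
    def contrarian(prog):
        if halts(prog, prog):
            while True:      # oracle said "halts" -> contradict by looping
                pass
        return "halted"      # oracle said "loops" -> contradict by halting
    return contrarian

# Any concrete candidate oracle fails on the contrarian built from it.
# E.g. an oracle that predicts "never halts" for every program:
pessimist = lambda prog, arg: False
c = make_contrarian(pessimist)
print(c(c))  # halts immediately, contradicting the pessimist's prediction
```

An oracle answering "halts" instead would make `contrarian(contrarian)` loop forever, so no total `halts` function can be right on every input.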
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: ~5 ms from neuron potential to neuron potential vs ~10^-3 ms switching time for vacuum tubes
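A back-of-the-envelope comparison using von Neumann’s round numbers from the lecture (his mid-1950s estimates, not modern figures):

```python
# Von Neumann's estimates: natural automata win on component count,
# artificial automata win on switching speed.
neurons = 1e10          # components in the human brain
tubes = 1e4             # vacuum tubes in the largest computer of the time
neuron_ms = 5           # ms, neuron potential to neuron potential
tube_ms = 1e-3          # ms, vacuum-tube switching time

size_gap = neurons / tubes        # brain has ~10^6 x more components
speed_gap = neuron_ms / tube_ms   # tubes are ~5000 x faster
print(f"size gap ~{size_gap:.0e}, speed gap ~{speed_gap:.0e}")
```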

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to be correct. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
Ultimate fate of the universe - Wikipedia
The fate of the universe is determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below.[8] However, observations are not conclusive, and alternative models are still possible.[9]

Big Freeze or heat death
Main articles: Future of an expanding universe and Heat death of the universe
The Big Freeze is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature.[10] This scenario, in combination with the Big Rip scenario, is currently gaining ground as the most important hypothesis.[11] It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, which themselves will disappear over time as they emit Hawking radiation.[12] Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations,[13][14] and the fluctuation theorem.[15][16]

A related scenario is heat death, which states that the universe goes to a state of maximum entropy in which everything is evenly distributed and there are no gradients—which are needed to sustain information processing, one form of which is life. The heat death scenario is compatible with any of the three spatial models, but requires that the universe reach an eventual temperature minimum.[17]
physics  big-picture  world  space  long-short-run  futurism  singularity  wiki  reference  article  nibble  thermo  temperature  entropy-like  order-disorder  death  nihil  bio  complex-systems  cybernetics  increase-decrease  trends  computation  local-global  prediction  time  spatial  spreading  density  distribution  manifolds  geometry  janus 
april 2018 by nhaliday
AI-complete - Wikipedia
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.[1] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.[2]

Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property can be useful, for instance to test for the presence of humans as with CAPTCHAs, and for computer security to circumvent brute-force attacks.[3][4]

...

AI-complete problems are hypothesised to include:

Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word sense disambiguation[8])
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.

...

Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.[9]
concept  reduction  cs  computation  complexity  wiki  reference  properties  computer-vision  ai  risk  ai-control  machine-learning  deep-learning  language  nlp  order-disorder  tactics  strategy  intelligence  humanity  speculation  crux 
march 2018 by nhaliday
[1410.0369] The Universe of Minds
kinda dumb, don't think this guy is anywhere close to legit (e.g., he claims set of mind designs is countable, but gives no actual reason to believe that)
papers  preprint  org:mat  ratty  miri-cfar  ai  intelligence  philosophy  logic  software  cs  computation  the-self 
march 2018 by nhaliday
If Quantum Computers are not Possible Why are Classical Computers Possible? | Combinatorics and more
As most of my readers know, I regard quantum computing as unrealistic. You can read more about it in my Notices AMS paper and its extended version (see also this post) and in the discussion of Puzzle 4 from my recent puzzles paper (see also this post). The amazing progress and huge investment in quantum computing (which I present and update routinely in this post) will put my analysis to the test in the next few years.
tcstariat  mathtariat  org:bleg  nibble  tcs  cs  computation  quantum  volo-avolo  no-go  contrarianism  frontier  links  quantum-info  analogy  comparison  synthesis  hi-order-bits  speedometer  questions  signal-noise 
november 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
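The 10^30 figure can be rationalized from Landauer’s principle, on which the paper’s argument leans: erasing a bit costs at least kT ln 2, so the computation a fixed energy budget buys scales as 1/T. A rough sketch; the far-future temperature (~2.4 × 10^-30 K, roughly a de Sitter horizon temperature) is an assumed figure chosen to illustrate the claimed multiplier, not a value taken from the paper.

```python
import math

k_B = 1.380649e-23      # J/K, Boltzmann constant

def bit_erasures_per_joule(T):
    """Landauer limit: max irreversible bit operations per joule at T."""
    return 1.0 / (k_B * T * math.log(2))

T_now = 2.7             # K, current CMB temperature
T_future = 2.4e-30      # K, assumed far-future background temperature

# The ratio reduces to T_now / T_future: cooling buys computation linearly.
multiplier = bit_erasures_per_joule(T_future) / bit_erasures_per_joule(T_now)
print(f"computation multiplier ~10^{math.log10(multiplier):.0f}")
```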

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied together, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
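A minimal illustration of the point-estimates-versus-distributions argument. The log-uniform ranges below are assumed for illustration only, not the paper's fitted distributions; the qualitative point is that once genuine order-of-magnitude uncertainty is propagated, an empty galaxy is an unsurprising draw.

```python
import math
import random

random.seed(0)

# Illustrative (low, high) ranges for Drake-equation factors, spanning
# the orders of magnitude of genuine uncertainty.  Assumed values.
ranges = {
    "R_star": (1, 100),     # star formation rate, per year
    "f_p":    (0.1, 1),     # fraction of stars with planets
    "n_e":    (0.1, 1),     # habitable planets per system
    "f_l":    (1e-30, 1),   # fraction developing life (hugely uncertain)
    "f_i":    (1e-3, 1),    # fraction developing intelligence
    "f_c":    (1e-2, 1),    # fraction developing detectable communication
    "L":      (1e2, 1e8),   # civilization lifetime, years
}

def log_uniform(lo, hi):
    """Sample uniformly in log10 space between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

samples = []
for _ in range(100_000):
    N = 1.0
    for lo, hi in ranges.values():
        N *= log_uniform(lo, hi)
    samples.append(N)

# Probability that N < 1, i.e. fewer than one other civilization expected.
p_alone = sum(N < 1 for N in samples) / len(samples)
print(f"P(effectively alone) ~ {p_alone:.2f}")
```

Multiplying the midpoints of these same ranges would yield a single, misleadingly confident N; the sampled distribution instead puts substantial mass on N < 1.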

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Talks
Quantum Supremacy: Office of Science and Technology Policy QIS Forum, Eisenhower Executive Office Building, White House Complex, Washington DC, October 18, 2016. Another version at UTCS Faculty Lunch, October 26, 2016. Another version at UT Austin Physics Colloquium, Austin, TX, November 9, 2016.

Complexity-Theoretic Foundations of Quantum Supremacy Experiments: Quantum Algorithms Workshop, Aspen Center for Physics, Aspen, CO, March 25, 2016

When Exactly Do Quantum Computers Provide A Speedup?: Yale Quantum Institute Seminar, Yale University, New Haven, CT, October 10, 2014. Another version at UT Austin Physics Colloquium, Austin, TX, November 19, 2014; Applied and Interdisciplinary Mathematics Seminar, Northeastern University, Boston, MA, November 25, 2014; Hebrew University Physics Colloquium, Jerusalem, Israel, January 5, 2015; Computer Science Colloquium, Technion, Haifa, Israel, January 8, 2015; Stanford University Physics Colloquium, January 27, 2015
tcstariat  aaronson  tcs  complexity  quantum  quantum-info  talks  list  slides  accretion  algorithms  applications  physics  nibble  frontier  computation  volo-avolo  speedometer  questions 
may 2017 by nhaliday
The language of geometry: Fast comprehension of geometrical primitives and rules in human adults and preschoolers
The child’s acquisition of language has been suggested to rely on the ability to build hierarchically structured representations from sequential inputs. Does a similar mechanism also underlie the acquisition of geometrical rules? Here, we introduce a learning situation in which human participants had to grasp simple spatial sequences and try to predict the next location. Sequences were generated according to a “geometrical language” endowed with simple primitives of symmetries and rotations, and combinatorial rules. Analyses of error rates of various populations—a group of French educated adults, two groups of 5-year-old French children, and a rare group of teenagers and adults from an Amazonian population, the Mundurukus, who have limited access to formal schooling and a reduced geometrical lexicon—revealed that subjects’ learning indeed rests on internal language-like representations. A theoretical model, based on minimum description length, proved to fit well participants’ behavior, suggesting that human subjects “compress” spatial sequences into a minimal internal rule or program.
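The “compression” idea can be illustrated crudely by using a general-purpose compressor as a stand-in for description length (the paper’s model uses a specific geometrical language, not zlib): a sequence generated by a simple rule has a much shorter description than an irregular one of the same length.

```python
import random
import zlib

def description_length(seq):
    """Compressed size in bytes -- a rough proxy for minimum
    description length (the real MDL model is rule-based)."""
    return len(zlib.compress(bytes(seq), 9))

random.seed(1)
# Locations on an 8-point "octagon", as in the paper's displays.
regular = [i % 8 for i in range(64)]                  # simple rotation rule
irregular = [random.randrange(8) for _ in range(64)]  # no rule to find

print(description_length(regular), "<", description_length(irregular))
```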
study  psychology  cog-psych  visuo  spatial  structure  neurons  occam  computation  models  eden  intelligence  neuro  learning  language  psych-architecture  🌞  retrofit 
february 2017 by nhaliday
Information Processing: Machine Dreams
This is a controversial book because it demolishes not just the conventional history of the discipline, but its foundational assumptions. For example, once you start thinking about the information processing requirements that each agent (or even the entire system) must satisfy to find the optimal neoclassical equilibrium points, you realize the task is impossible. In fact, in some cases it has been rigorously shown to be beyond the capability of any universal Turing machine. Certainly, it seems beyond the plausible capabilities of a primitive species like homo sapiens. Once this bounded rationality (see also here) is taken into account, the whole notion of optimality of market equilibrium becomes far-fetched and speculative. It cannot be justified in any formal sense, and therefore cries out for experimental justification, which is not to be found.

I like this quote: This polymath who prognosticated that "science and technology would shift from a past emphasis on subjects of motion, force and energy to a future emphasis on subjects of communications, organization, programming and control," was spot on the money.
hsu  scitariat  economics  cs  computation  interdisciplinary  map-territory  models  market-failure  von-neumann  giants  history  quotes  links  debate  critique  review  big-picture  turing  heterodox  complex-systems  lens  s:*  books  🎩  thinking  markets  bounded-cognition 
february 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Logicians on safari
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

the sequel: http://www.scottaaronson.com/blog/?p=153
tcstariat  aaronson  tcs  computation  complexity  aphorism  examples  list  reflection  philosophy  multi  summary  synthesis  hi-order-bits  interdisciplinary  lens  big-picture  survey  nibble  org:bleg  applications  big-surf  s:*  p:whenever  ideas  elegance 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f : S^n → S^n. Then the question that interests us is whether the x_i's can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
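The partition test described above can be sketched in a few lines of Python. This is a toy brute-force check, not Tononi's actual Φ computation; the function names and the example update functions are illustrative assumptions:

```python
from itertools import combinations, product

def depends_only_on(f, n, part, S=(0, 1)):
    """Check that the update of each index in `part` is unchanged when
    the variables outside `part` are varied (brute force over S^n)."""
    others = [i for i in range(n) if i not in part]
    for base in product(S, repeat=n):
        for alt in product(S, repeat=len(others)):
            x = list(base)
            for i, v in zip(others, alt):
                x[i] = v
            if any(f(base)[i] != f(tuple(x))[i] for i in part):
                return False
    return True

def decomposable(f, n):
    """Find a roughly-even partition (A, B) such that updates to A ignore B
    and vice versa; return None if no such partition exists."""
    k = n // 2  # only roughly comparable sizes, as in the text
    for A in combinations(range(n), k):
        B = tuple(i for i in range(n) if i not in A)
        if depends_only_on(f, n, A) and depends_only_on(f, n, B):
            return (A, B)
    return None

# Two independent 2-bit swap circuits: trivially decomposable.
f_sep = lambda x: (x[1], x[0], x[3], x[2])
# A cyclic shift couples every variable to another: no even split works.
f_cyc = lambda x: (x[3], x[0], x[1], x[2])
```

On this toy definition, `f_sep` fails to integrate information globally (the partition {0,1} / {2,3} separates it), while `f_cyc` admits no such partition.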
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition  coupling-cohesion 
january 2017 by nhaliday
Edge.org: 2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT?
highlights:
- quantum supremacy [Scott Aaronson]
- gene drive
- gene editing/CRISPR
- carcinogen may be entropy
- differentiable programming
- quantitative biology
soft:
- antisocial punishment of pro-social cooperators
- "strongest prejudice" (politics) [Haidt]
- Europeans' origins [Cochran]
- "Anthropic Capitalism And The New Gimmick Economy" [Eric Weinstein]

https://twitter.com/toad_spotted/status/986253381344907265
https://archive.is/gNGDJ
There's an underdiscussed contradiction between the idea that our society would make almost all knowledge available freely and instantaneously to almost everyone and that almost everyone would find gainful employment as knowledge workers. Value is in scarcity not abundance.
--
You’d need to turn reputation-based systems into an income stream
technology  discussion  trends  gavisti  west-hunter  aaronson  haidt  list  expert  science  biotech  geoengineering  top-n  org:edge  frontier  multi  CRISPR  2016  big-picture  links  the-world-is-just-atoms  quantum  quantum-info  computation  metameta  🔬  scitariat  q-n-a  zeitgeist  speedometer  cancer  random  epidemiology  mutation  GT-101  cooperate-defect  cultural-dynamics  anthropology  expert-experience  tcs  volo-avolo  questions  thiel  capitalism  labor  supply-demand  internet  tech  economics  broad-econ  prediction  automation  realness  gnosis-logos  iteration-recursion  similarity  uniqueness  homo-hetero  education  duplication  creative  software  programming  degrees-of-freedom  futurism  order-disorder  flux-stasis  public-goodish  markets  market-failure  piracy  property-rights  free-riding  twitter  social  backup  ratty  unaffiliated  gnon  contradiction  career  planning  hmm  idk  knowledge  higher-ed  pro-rata  sociality  reinforcement  tribalism  us-them  politics  coalitions  prejudice  altruism  human-capital  engineering  unintended-consequences 
november 2016 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

https://www.jetpress.org/volume7/simulation.htm
In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

https://www.gwern.net/Simulation-inferences
lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances 
september 2016 by nhaliday
Overcoming Bias : A Future Of Pipes
The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.

We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.

Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.

Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?

Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.
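The scaling argument above can be checked numerically. A minimal sketch (illustrative function names; the logarithmic overhead model is the assumption stated in the excerpt, with an arbitrary base):

```python
import math

def cooling_ratio(scale):
    """Volume grows as scale**3 and surface area as scale**2, so the
    volume-to-surface ratio (cooling burden per unit of boundary)
    grows linearly in city scale."""
    volume = scale ** 3
    surface = scale ** 2
    return volume / surface  # equals scale

def fractal_overhead(scale, base=2.0):
    """Assumed fractal-pipe cost overhead: logarithmic in city size,
    so doubling the scale adds only a constant increment."""
    return math.log(scale, base)
```

For example, `cooling_ratio(10)` is 10, and `fractal_overhead(16) - fractal_overhead(8)` equals `fractal_overhead(8) - fractal_overhead(4)`: each doubling adds the same constant, rather than doubling the overhead.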
hanson  futurism  prediction  street-fighting  essay  len:short  ratty  computation  hardware  thermo  structure  composition-decomposition  complex-systems  magnitude  analysis  urban-rural  power-law  phys-energy  detail-architecture  efficiency  economics  supply-demand  labor  planning  long-term  physics  temperature  flux-stasis  fluid  measure  technology  frontier  speedometer  career  cost-benefit  identity  stylized-facts  objektbuch  data  trivia  cocktail  aphorism 
august 2016 by nhaliday
