nhaliday + detail-architecture 39
Etsy’s experiment with immutable documentation | Hacker News
hn commentary techtariat org:com technical-writing collaboration best-practices programming engineering documentation communication flux-stasis interface-compatibility synchrony cost-benefit time sequential ends-means software project yak-shaving detail-architecture map-territory state
25 days ago by nhaliday
Build your own X: project-based programming tutorials | Hacker News
5 weeks ago by nhaliday
https://news.ycombinator.com/item?id=21430321
https://www.reddit.com/r/programming/comments/8j0gz3/build_your_own_x/
hn commentary repo paste programming minimum-viable frontier allodium list links roadmap accretion quixotic 🖥 interview-prep system-design move-fast-(and-break-things) graphics SIGGRAPH vr p2p project blockchain cryptocurrency bitcoin bots terminal dbs virtualization frontend web javascript frameworks libraries facebook pls c(pp) python dotnet jvm ocaml-sml haskell networking systems metal-to-virtual deep-learning os physics mechanics simulation automata-languages compilers search internet huge-data-the-biggest strings computer-vision multi reddit social detail-architecture
How can we develop transformative tools for thought?
michael-nielsen tcstariat techtariat thinking exocortex form-design worrydream frontier metameta neurons design essay rhetoric retention quantum quantum-info communication learning teaching writing technical-writing better-explained education studying composition-decomposition skunkworks detail-architecture mooc lectures games comparison incentives software public-goodish hci ui ux ai neuro interface-compatibility info-dynamics info-foraging books programming pls differential geometry trivia unintended-consequences track-record questions stories examples error math
7 weeks ago by nhaliday
donnemartin/system-design-primer: Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
systems engineering guide recruiting tech career jobs pragmatic system-design 🖥 techtariat minimum-viable working-stiff transitions progression interview-prep move-fast-(and-break-things) repo hn commentary retention puzzles examples client-server detail-architecture cheatsheet accretion
7 weeks ago by nhaliday
Leslie Lamport: Thinking Above the Code - YouTube
heavyweights cs distributed systems system-design formal-methods rigor correctness rhetoric contrarianism presentation video detail-architecture engineering programming thinking writing technical-writing concurrency protocol-metadata
august 2019 by nhaliday
One week of bugs
may 2019 by nhaliday
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.
...
Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.
Given that people aren't going to put any effort into testing, what's the best way to do it?
Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
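The common idea behind these techniques can be sketched in a few lines: generate many random inputs and assert invariants that must hold for every input, rather than checking hand-picked cases against expected outputs. A minimal illustration using Python's built-in `sorted` (no particular framework assumed):

```python
import random
from collections import Counter

def check_sort_properties(trials: int = 1000) -> bool:
    """Random ("property-based") testing sketch: for many random inputs,
    check two invariants of sorting rather than any specific output."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        ys = sorted(xs)
        # Invariant 1: the output is in non-decreasing order.
        if any(a > b for a, b in zip(ys, ys[1:])):
            return False
        # Invariant 2: the output is a permutation of the input.
        if Counter(ys) != Counter(xs):
            return False
    return True
```

Libraries like Hypothesis automate the input generation, shrink failing cases to minimal examples, and remember past failures, but the core loop is this.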
...
There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.
John Regehr has a udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.
For more on my perspective on testing, there's this.
Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549
https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.
From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.
But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.
Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.
Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.
This combination is clearly a recipe for disaster.
The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.
Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.
Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow
NB: DevGAMM is a game industry conference
- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes more valued than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat dan-luu tech software error list debugging linux github robust checking oss troll lol aphorism webapp email google facebook games julia pls compilers communication mooc browser rust programming engineering random jargon formal-methods expert-experience prof c(pp) course correctness hn commentary video presentation carmack pragmatic contrarianism pessimism sv unix rhetoric critique worrydream hardware performance trends multiplicative roots impact comparison history iron-age the-classics mediterranean conquest-empire gibbon technology the-world-is-just-atoms flux-stasis increase-decrease graphics hmm idk systems os abstraction intricacy worse-is-better/the-right-thing build-packaging microsoft osx apple reflection assembly things knowledge detail-architecture thick-thin trivia info-dynamics caching frameworks generalization systematic-ad-hoc universalism-particularism analytical-holistic structure tainter libraries tradeoffs prepping threat-modeling network-structure writing risk local-glob
Why is Software Engineering so difficult? - James Miller
may 2019 by nhaliday
basic message: No silver bullet!
most interesting nuggets:
Scale and Complexity
- Windows 7 > 50 million LOC
Expect a staggering number of bugs.
Bugs?
- Well-written C and C++ code contains some 5 to 10 errors per 100 LOC after a clean compile, but before inspection and testing.
- At a 5% rate any 50 MLOC program will start off with some 2.5 million bugs.
Bug removal
- Testing typically exercises only half the code.
Better bug removal?
- There are better ways to do testing that do produce fantastic programs.
- Are we sure about this fact?
* No, it's only an opinion!
* In general Software Engineering has ....
NO FACTS!
So why not do this?
- The costs are unbelievable.
- It’s not unusual for the qualification process to produce a half page of documentation for each line of code.
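The arithmetic behind the 2.5 million figure is straightforward (a back-of-envelope check, not part of the slides):

```python
def expected_defects(loc: int, defects_per_100_loc: float) -> float:
    """Estimate initial defect count from a per-100-LOC defect density."""
    return loc * defects_per_100_loc / 100.0

# 50 MLOC at 5 defects per 100 LOC (the low end of the quoted 5-10 range):
print(expected_defects(50_000_000, 5))  # 2500000.0
```

At the high end of the range (10 per 100 LOC), the same program would start with some 5 million defects.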
pdf slides engineering nitty-gritty programming best-practices roots comparison cost-benefit software systematic-ad-hoc structure error frontier debugging checking formal-methods context detail-architecture intricacy big-picture system-design correctness scale scaling-tech shipping money data stylized-facts street-fighting objektbuch pro-rata estimate pessimism degrees-of-freedom volo-avolo no-go things thinking summary quality density methodology
What happens when you load a URL?
dan-luu techtariat links list minimum-viable systems interview-prep explanation google networking distributed programming recruiting career init repo synthesis system-design 🖥 paste big-picture working-stiff scaling-tech nibble metal-to-virtual hardware IEEE web internet questions objektbuch client-server nitty-gritty detail-architecture
may 2019 by nhaliday
Lateralization of brain function - Wikipedia
september 2018 by nhaliday
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]
Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69
Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]
...
Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".
Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.
These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.
The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.
The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.
The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.
...
Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.
Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.
The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.
...
RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.
The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.
Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, in which case it becomes part of the familiar LH.
Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.
...
Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.
The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.
...
We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.
If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.
...
Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.
Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon reflection books summary review neuro neuro-nitgrit things thinking metabuch order-disorder apollonian-dionysian bio examples near-far symmetry homo-hetero logic inference intuition problem-solving analytical-holistic n-factor europe the-great-west-whale occident alien-character detail-architecture art theory-practice philosophy being-becoming essence-existence language psychology cog-psych egalitarianism-hierarchy direction reason learning novelty science anglo anglosphere coarse-fine neurons truth contradiction matching empirical volo-avolo curiosity uncertainty theos axioms intricacy computation analogy essay rhetoric deep-materialism new-religion knowledge expert-experience confidence biases optimism pessimism realness whole-partial-many theory-of-mind values competition reduction subjective-objective communication telos-atelos ends-means turing fiction increase-decrease innovation creative thick-thin spengler multi ratty hanson complex-systems structure concrete abstraction network-s
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]
Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69
Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]
...
Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".
Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.
These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.
The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.
The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.
The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.
...
Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.
Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.
The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.
...
RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.
The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.
Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means taking something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed, in which case it becomes part of the familiar LH.
Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world, which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.
...
Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.
The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.
...
We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.
If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.
...
Gödel’s Theorem proves that, in any consistent formal system rich enough to express arithmetic, not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.
Alan Turing’s proof that the halting problem is undecidable shows that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure (LH), when it comes to … [more]
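Turing’s result rests on a standard diagonalization. A minimal runnable sketch (my own illustration, not from the excerpt; all names are hypothetical): given any claimed halting predictor, we can construct the one program it must be wrong about.

```python
def make_contrary(halts):
    """Given a claimed halting predictor, build the program it must get wrong."""
    def contrary():
        if halts(contrary):
            while True:      # predictor said "halts", so loop forever
                pass
        else:
            return "halted"  # predictor said "loops forever", so halt at once
    return contrary

# Any concrete predictor is refuted by its own contrary program.
always_loops = lambda prog: False  # claims: every program runs forever
c = make_contrary(always_loops)
print(c())  # the program halts, so the predictor was wrong about it
```

The same construction defeats every candidate predictor, which is why no general decision procedure can exist.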
september 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
april 2018 by nhaliday
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises and ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and so is highly unlikely to hold. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
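The "nonlinear returns to small intelligence advantages" point can be made concrete with a toy calculation (my own sketch, not from the essay): a bare 51% per-contest edge, held constant, compounds toward near-certain victory as the number of contests grows.

```python
from math import comb

def win_prob(p, n):
    """Probability of winning a best-of-n series (n odd) with per-game edge p."""
    need = (n + 1) // 2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

for n in (1, 101, 1001):
    print(n, round(win_prob(0.51, n), 3))
# the same tiny edge grows monotonically toward certainty as n increases
```

The per-game advantage never changes; only repetition converts it into a decisive aggregate advantage.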
ratty
gwern
analysis
faq
ai
risk
speedometer
intelligence
futurism
cs
computation
complexity
tcs
linear-algebra
nonlinearity
convexity-curvature
average-case
adversarial
article
time-complexity
singularity
iteration-recursion
magnitude
multiplicative
lower-bounds
no-go
performance
hardware
humanity
psychology
cog-psych
psychometrics
iq
distribution
moments
complement-substitute
hanson
ems
enhancement
parable
detail-architecture
universalism-particularism
neuro
ai-control
environment
climate-change
threat-modeling
security
theory-practice
hacker
academia
realness
crypto
rigorous-crypto
usa
government
april 2018 by nhaliday
[0712.3329] Universal Intelligence: A Definition of Machine Intelligence
april 2018 by nhaliday
- Shane Legg, Marcus Hutter
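The paper's central object is the universal intelligence measure, which scores an agent's expected reward across all computable environments, weighted by each environment's simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}
```

where $E$ is the set of computable reward-bounded environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ is the expected cumulative reward of agent $\pi$ interacting with $\mu$. Simpler environments dominate the sum, so an agent must do well broadly, not just on contrived cases.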
nibble
papers
org:mat
preprint
machine-learning
ai
intelligence
intricacy
definition
philosophy
psychology
cog-psych
psychometrics
decision-theory
order-disorder
deepgoog
flux-stasis
learning
list
links
quotes
rigor
lens
skeleton
metameta
search
problem-solving
generalization
complex-systems
cybernetics
volo-avolo
impetus
optimization
abstraction
big-peeps
academia
bio
measurement
universalism-particularism
big-picture
ideas
big-surf
synthesis
structure
large-factor
dimensionality
things
properties
detail-architecture
telos-atelos
values
descriptive
flexibility
occam
parsimony
cs
computation
bits
information-theory
complexity
absolute-relative
humanity
pragmatic
biases
turing
thick-thin
dennett
within-without
creative
theory-of-mind
nitty-gritty
spock
survey
reinforcement
april 2018 by nhaliday
[0706.3639] A Collection of Definitions of Intelligence
april 2018 by nhaliday
- Shane Legg, Marcus Hutter
nibble
papers
org:mat
preprint
machine-learning
ai
intelligence
intricacy
definition
philosophy
psychology
cog-psych
psychometrics
decision-theory
order-disorder
deepgoog
flux-stasis
learning
list
links
quotes
rigor
lens
skeleton
metameta
search
problem-solving
generalization
complex-systems
cybernetics
volo-avolo
impetus
optimization
abstraction
big-peeps
academia
bio
measurement
universalism-particularism
big-picture
ideas
big-surf
synthesis
things
properties
detail-architecture
telos-atelos
values
descriptive
flexibility
humanity
nitty-gritty
spock
survey
the-self
april 2018 by nhaliday
Society of Mind - Wikipedia
april 2018 by nhaliday
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.
This idea is perhaps best summarized by the following quote:
What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308
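The society-of-agents idea can be caricatured in a few lines (my own toy sketch, not Minsky's code): a "mind" assembled from simple, heterogeneous processes, each competent at one narrow thing, with ability emerging from the combination rather than from any single principle.

```python
# Each agent is a simple process with its own specialty; it answers only
# tasks it recognizes and stays silent otherwise.
def adder(task):
    if task["kind"] == "sum":
        return sum(task["data"])

def sorter(task):
    if task["kind"] == "order":
        return sorted(task["data"])

class Society:
    """A 'mind' as nothing but a collection of simple agents."""
    def __init__(self, agents):
        self.agents = agents
    def solve(self, task):
        for agent in self.agents:
            result = agent(task)
            if result is not None:
                return result

mind = Society([adder, sorter])
print(mind.solve({"kind": "sum", "data": [1, 2, 3]}))    # 6
print(mind.solve({"kind": "order", "data": [3, 1, 2]}))  # [1, 2, 3]
```

No agent understands the whole; competence lives in the dispatch across many simple parts, which is the point of the quote above.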
https://en.wikipedia.org/wiki/Modularity_of_mind
The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
books
ideas
speculation
structure
composition-decomposition
complex-systems
neuro
ai
psychology
cog-psych
intelligence
reduction
wiki
giants
philosophy
number
cohesion
diversity
systematic-ad-hoc
detail-architecture
pdf
study
neuro-nitgrit
brain-scan
nitty-gritty
network-structure
graphs
graph-theory
models
whole-partial-many
evopsych
eden
reference
psych-architecture
article
coupling-cohesion
multi
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
april 2018 by nhaliday
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.
The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.
However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.
Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:
...
In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.
“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.
If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
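Hanson's parenthetical — that a common "g" factor would appear even with independent module variation — can be checked with a quick simulation (my own sketch, not his): if every task draws on many shared modules, per-person task scores correlate positively even though the modules themselves vary independently.

```python
import random
from statistics import mean, pstdev

random.seed(0)
MODULES, PEOPLE, TASKS, PER_TASK = 20, 500, 12, 10

# Each person's module abilities are independent random draws.
ability = [[random.gauss(0, 1) for _ in range(MODULES)] for _ in range(PEOPLE)]
# Each task uses a random subset of modules; score = mean ability over them.
tasks = [random.sample(range(MODULES), PER_TASK) for _ in range(TASKS)]
scores = [[mean(ability[p][m] for m in t) for t in tasks] for p in range(PEOPLE)]

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

cols = list(zip(*scores))  # one column of scores per task
pairs = [(i, j) for i in range(TASKS) for j in range(i + 1, TASKS)]
avg_r = mean(corr(cols[i], cols[j]) for i, j in pairs)
print(round(avg_r, 2))  # clearly positive: shared modules induce a g-like factor
```

The positive manifold here comes purely from tasks overlapping in which modules they use, with no underlying "general ability" variable in the model.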
We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.
...
In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?
...
In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.
Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.
Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.
What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.
...
Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.
Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).
I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.
(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty
lesswrong
subculture
miri-cfar
ai
risk
ai-control
futurism
books
debate
hanson
big-yud
prediction
contrarianism
singularity
local-global
speed
speedometer
time
frontier
distribution
smoothness
shift
pdf
economics
track-record
abstraction
analogy
links
wiki
list
evolution
mutation
selection
optimization
search
iteration-recursion
intelligence
metameta
chart
analysis
number
ems
coordination
cooperate-defect
death
values
formal-values
flux-stasis
philosophy
farmers-and-foragers
malthus
scale
studying
innovation
insight
conceptual-vocab
growth-econ
egalitarianism-hierarchy
inequality
authoritarianism
wealth
near-far
rationality
epistemic
biases
cycles
competition
arms
zero-positive-sum
deterrence
war
peace-violence
winner-take-all
technology
moloch
multi
plots
research
science
publishing
humanity
labor
marginal
urban-rural
structure
composition-decomposition
complex-systems
gregory-clark
decentralized
heavy-industry
magnitude
multiplicative
endogenous-exogenous
models
uncertainty
decision-theory
time-prefer
april 2018 by nhaliday
Mind uploading - Wikipedia
concept wiki reference article hanson ratty ems futurism ai technology speedometer frontier simulation death prediction estimate time computation scale magnitude plots neuro neuro-nitgrit complexity coarse-fine brain-scan accuracy skunkworks bostrom enhancement ideas singularity eden-heaven speed risk ai-control paradox competition arms unintended-consequences offense-defense trust duty tribalism us-them volo-avolo strategy hardware software mystic religion theos hmm dennett within-without philosophy deep-materialism complex-systems structure reduction detail-architecture analytical-holistic approximation cs trends threat-modeling
march 2018 by nhaliday
Prisoner's dilemma - Wikipedia
march 2018 by nhaliday
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]
Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
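The intuition in the excerpt — extortionate strategies fare poorly against their own kind while conditional cooperators prosper together — shows up even in a crude iterated-PD simulation (my own sketch; the "extortioner" below is just a defect-leaning memory-one strategy for illustration, not an exact ZD extortioner):

```python
import random

random.seed(1)
R, S, T, P = 3, 0, 5, 1  # standard PD payoffs: reward, sucker, temptation, punishment

def play(p1, p2, rounds=2000):
    """Average payoff per round for two memory-one strategies.
    A strategy maps (my_last_move, their_last_move) to P(cooperate)."""
    m1, m2 = "C", "C"
    total1 = total2 = 0
    for _ in range(rounds):
        n1 = "C" if random.random() < p1[(m1, m2)] else "D"
        n2 = "C" if random.random() < p2[(m2, m1)] else "D"
        pay = {("C", "C"): (R, R), ("C", "D"): (S, T),
               ("D", "C"): (T, S), ("D", "D"): (P, P)}[(n1, n2)]
        total1 += pay[0]; total2 += pay[1]
        m1, m2 = n1, n2
    return total1 / rounds, total2 / rounds

# Tit-for-tat: cooperate iff the opponent cooperated last round.
tft = {("C", "C"): 1, ("C", "D"): 0, ("D", "C"): 1, ("D", "D"): 0}
# Defect-leaning "extortioner" stand-in (hypothetical probabilities).
extort = {("C", "C"): 0.7, ("C", "D"): 0.1, ("D", "C"): 0.3, ("D", "D"): 0.0}

coop_pair, _ = play(tft, tft)
ext_pair, _ = play(extort, extort)
print(coop_pair, ext_pair)  # cooperators earn ~R together; extortioners far less
```

This is the ZD paper's "key intuition" in miniature: a strategy that thrives by exploiting cooperators collapses once the population it faces is mostly copies of itself.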
https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.
Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.
In life, you can either cooperate or defect.
Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.
Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.
Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.
The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.
This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.
With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are thus that contributing to group wellbeing heightens status.
Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.
Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, or if you are silenced the ritual is not allowed to happen.
If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.
They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.
https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.
In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.
Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).
...
For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual selection):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.
implications for fractionalized Europe vis-a-vis unified China?
and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?
Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that
https://en.wikipedia.org/wiki/Ultimatum_game
analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?
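The ultimatum game structure behind the analogy is tiny enough to spell out: a proposer splits a pie, the responder accepts or rejects, and rejection leaves both with nothing. A minimal sketch, with the pie size and thresholds as illustrative assumptions:

```python
# Ultimatum game sketch: a proposer splits a pie of 10 units; the
# responder accepts or rejects, and rejection gives both players 0.
# Against a purely self-interested responder (threshold 0) the
# proposer keeps everything; a responder credibly committed to
# rejecting offers below some "fairness" threshold forces a better
# split. Numbers are illustrative, not from any cited source.

PIE = 10

def best_offer(threshold, step=1):
    """Proposer's payoff-maximizing offer in increments of `step`,
    given the responder rejects any offer below `threshold`.
    Returns None if no acceptable offer exists."""
    offers = [o for o in range(0, PIE + 1, step) if o >= threshold]
    # the proposer keeps PIE - offer, so the smallest acceptable
    # offer maximizes the proposer's share
    return min(offers) if offers else None

assert best_offer(0) == 0   # fully rational responder: lowball wins
assert best_offer(4) == 4   # a rejection threshold extracts more
```

In the analogy, the state is the proposer and the demos the responder; the open question in the note is what "rejection" costs when the outside option is violence rather than zero.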
The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher
Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.
...
Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters [4,5] remains scarce [6–8]. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously [9,10].
However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity [11,12]. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature [4,5], reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.
...
We will show that the interaction between selfish and strongly reciprocal … [more]
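The altruistic-punishment mechanism in the excerpt can be illustrated with a one-shot public goods game: defection dominates on its own, but the presence of a few strong reciprocators who pay to punish flips the incentive. A minimal sketch; all numbers (endowment, multiplier, punishment cost and fine) are illustrative assumptions, not taken from the paper:

```python
# Strong reciprocity sketch: a one-shot public goods game where some
# players are "strong reciprocators" who pay a cost to punish
# defectors even though the game never repeats.

ENDOW, MULT = 10, 1.6     # each contribution is multiplied by 1.6
PUNISH_COST, FINE = 1, 4  # a punisher pays 1 to fine a defector 4

def payoffs(contributes, punishers):
    """contributes: list of bool (True = contribute full endowment).
    punishers: indices of strong reciprocators, who also contribute
    and punish every defector."""
    n = len(contributes)
    pot = MULT * ENDOW * sum(contributes)
    share = pot / n
    out = []
    for i, c in enumerate(contributes):
        pay = share + (0 if c else ENDOW)   # defectors keep endowment
        if not c:                            # fined once per punisher
            pay -= FINE * len(punishers)
        if i in punishers:                   # punishing is costly
            pay -= PUNISH_COST * sum(1 for c2 in contributes if not c2)
        out.append(pay)
    return out

# Without punishers, defection dominates ...
coop = payoffs([True]*4, set())[0]
defect = payoffs([True, True, True, False], set())[3]
assert defect > coop
# ... but two punishers make defection the worse choice.
defect_p = payoffs([True, True, True, False], {0, 1})[3]
assert defect_p < coop
```

Note that the punishers themselves end up worse off than non-punishing cooperators, which is exactly why the paper argues self-interest alone cannot explain the behavior.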
concept
conceptual-vocab
wiki
reference
article
models
GT-101
game-theory
anthropology
cultural-dynamics
trust
cooperate-defect
coordination
iteration-recursion
sequential
axelrod
discrete
smoothness
evolution
evopsych
EGT
economics
behavioral-econ
sociology
new-religion
deep-materialism
volo-avolo
characterization
hsu
scitariat
altruism
justice
group-selection
decision-making
tribalism
organizing
hari-seldon
theory-practice
applicability-prereqs
bio
finiteness
multi
history
science
social-science
decision-theory
commentary
study
summary
giants
the-trenches
zero-positive-sum
🔬
bounded-cognition
info-dynamics
org:edge
explanation
exposition
org:nat
eden
retention
long-short-run
darwinian
markov
equilibrium
linear-algebra
nitty-gritty
competition
war
explanans
n-factor
europe
the-great-west-whale
occident
china
asia
sinosphere
orient
decentralized
markets
market-failure
cohesion
metabuch
stylized-facts
interdisciplinary
physics
pdf
pessimism
time
insight
the-basilisk
noblesse-oblige
the-watchers
ideas
march 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
february 2018 by nhaliday
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html
A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.
By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.
We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).
AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/
https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI
Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES
When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.
If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.
As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.
...
If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”
...
One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.
How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.
...
Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.
...
The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.
The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”
https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"
US AI researchers: "No."
US military: "But... maybe just a computer vision app."
US AI researchers: "NO."
https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu
scitariat
commentary
video
presentation
comparison
usa
china
asia
sinosphere
frontier
technology
science
ai
speedometer
innovation
google
barons
deepgoog
stories
white-paper
strategy
migration
iran
human-capital
corporation
creative
alien-character
military
human-ml
nationalism-globalism
security
investing
government
games
deterrence
defense
nuclear
arms
competition
risk
ai-control
musk
optimism
multi
news
org:mag
europe
EU
80000-hours
effective-altruism
proposal
article
realness
offense-defense
war
biotech
altruism
language
foreign-lang
philosophy
the-great-west-whale
enhancement
foreign-policy
geopolitics
anglo
jobs
career
planning
hmm
travel
charity
tech
intel
media
teaching
tutoring
russia
india
miri-cfar
pdf
automation
class
labor
polisci
society
trust
n-factor
corruption
leviathan
ethics
authoritarianism
individualism-collectivism
revolution
economics
inequality
civic
law
regulation
data
scale
pro-rata
capital
zero-positive-sum
cooperate-defect
distribution
time-series
february 2018 by nhaliday
AlphaGo Zero: Minimal Policy Improvement, Expectation Propagation and other Connections
deepgoog acmtariat org:bleg nibble research summary papers liner-notes machine-learning deep-learning games auto-learning speedometer org:nat state-of-art ai reinforcement fixed-point detail-architecture
november 2017 by nhaliday
Superintelligence Risk Project Update II
july 2017 by nhaliday
https://www.jefftk.com/p/superintelligence-risk-project-update
https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.
The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.
He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]
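The "parameters in just the right range" point shows up even in the simplest possible setting. A toy illustration, not from the interview: plain gradient descent on f(w) = w², which converges only for learning rates in (0, 1) and blows up just outside that range.

```python
# Toy illustration of brittle hyperparameters: gradient descent on
# f(w) = w^2 updates w <- w*(1 - 2*lr), so it converges iff
# |1 - 2*lr| < 1, i.e. lr in (0, 1). Slightly outside that range it
# diverges geometrically. Real deep nets add many more such knobs,
# each with its own fragile working range.

def gradient_descent(lr, steps=100, w0=1.0):
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w   # gradient of w^2 is 2w
    return abs(w)

assert gradient_descent(0.4) < 1e-6   # inside the stable range
assert gradient_descent(1.1) > 1e6    # slightly outside: blow-up
```

The analogy is loose (deep nets are non-convex and stochastic), but it captures why a replication can fail completely rather than degrade gracefully when one knob is off.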
https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:
They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.
Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.
https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think it's possible to get a better handle on these questions, but I think this would require much deeper ML knowledge than I have.
ratty
core-rats
ai
risk
ai-control
prediction
expert
machine-learning
deep-learning
speedometer
links
research
research-program
frontier
multi
interview
deepgoog
games
hardware
performance
roots
impetus
chart
big-picture
state-of-art
reinforcement
futurism
🤖
🖥
expert-experience
singularity
miri-cfar
empirical
evidence-based
speculation
volo-avolo
clever-rats
acmtariat
robust
ideas
crux
atoms
detail-architecture
software
gradient-descent
july 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
june 2017 by nhaliday
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty
hanson
speculation
automation
labor
economics
ems
futurism
prediction
complex-systems
network-structure
intricacy
thinking
engineering
management
law
compensation
psychology
cog-psych
ideas
structure
gray-econ
competition
moloch
coordination
cooperate-defect
risk
ai
ai-control
singularity
number
humanity
complement-substitute
cybernetics
detail-architecture
legacy
threat-modeling
degrees-of-freedom
composition-decomposition
order-disorder
analogy
parsimony
institutions
software
coupling-cohesion
june 2017 by nhaliday
Outline of academic disciplines - Wikipedia
may 2017 by nhaliday
Outline of philosophy: https://en.wikipedia.org/wiki/Outline_of_philosophy
Figurative system of human knowledge: https://en.wikipedia.org/wiki/Figurative_system_of_human_knowledge
Branches of science: https://en.wikipedia.org/wiki/Branches_of_science
Outline of mathematics: https://en.wikipedia.org/wiki/Outline_of_mathematics
Outline of physics: https://en.wikipedia.org/wiki/Outline_of_physics
Branches of physics: https://en.wikipedia.org/wiki/Branches_of_physics
Outline of biology: https://en.wikipedia.org/wiki/Outline_of_biology
nibble
skeleton
accretion
links
wiki
reference
physics
mechanics
electromag
relativity
quantum
trees
synthesis
hi-order-bits
conceptual-vocab
summary
big-picture
lens
🔬
encyclopedic
chart
multi
knowledge
philosophy
theos
ideology
science
academia
religion
christianity
reason
epistemic
bio
nature
engineering
dirty-hands
art
poetry
math
ethics
morality
metameta
objektbuch
law
retention
logic
inference
thinking
technology
social-science
cs
theory-practice
detail-architecture
stats
apollonian-dionysian
letters
quixotic
may 2017 by nhaliday
'Capital in the Twenty-First Century' by Thomas Piketty, reviewed | New Republic
april 2017 by nhaliday
by Robert Solow (positive)
The data then exhibit a clear pattern. In France and Great Britain, national capital stood fairly steadily at about seven times national income from 1700 to 1910, then fell sharply from 1910 to 1950, presumably as a result of wars and depression, reaching a low of 2.5 in Britain and a bit less than 3 in France. The capital-income ratio then began to climb in both countries, and reached slightly more than 5 in Britain and slightly less than 6 in France by 2010. The trajectory in the United States was slightly different: it started at just above 3 in 1770, climbed to 5 in 1910, fell slightly in 1920, recovered to a high between 5 and 5.5 in 1930, fell to below 4 in 1950, and was back to 4.5 in 2010.
The wealth-income ratio in the United States has always been lower than in Europe. The main reason in the early years was that land values bulked less in the wide open spaces of North America. There was of course much more land, but it was very cheap. Into the twentieth century and onward, however, the lower capital-income ratio in the United States probably reflects the higher level of productivity: a given amount of capital could support a larger production of output than in Europe. It is no surprise that the two world wars caused much less destruction and dissipation of capital in the United States than in Britain and France. The important observation for Piketty’s argument is that, in all three countries, and elsewhere as well, the wealth-income ratio has been increasing since 1950, and is almost back to nineteenth-century levels. He projects this increase to continue into the current century, with weighty consequences that will be discussed as we go on.
...
Now if you multiply the rate of return on capital by the capital-income ratio, you get the share of capital in the national income. For example, if the rate of return is 5 percent a year and the stock of capital is six years worth of national income, income from capital will be 30 percent of national income, and so income from work will be the remaining 70 percent. At last, after all this preparation, we are beginning to talk about inequality, and in two distinct senses. First, we have arrived at the functional distribution of income—the split between income from work and income from wealth. Second, it is always the case that wealth is more highly concentrated among the rich than income from labor (although recent American history looks rather odd in this respect); and this being so, the larger the share of income from wealth, the more unequal the distribution of income among persons is likely to be. It is this inequality across persons that matters most for good or ill in a society.
...
The data are complicated and not easily comparable across time and space, but here is the flavor of Piketty’s summary picture. Capital is indeed very unequally distributed. Currently in the United States, the top 10 percent own about 70 percent of all the capital, half of that belonging to the top 1 percent; the next 40 percent—who compose the “middle class”—own about a quarter of the total (much of that in the form of housing), and the remaining half of the population owns next to nothing, about 5 percent of total wealth. Even that amount of middle-class property ownership is a new phenomenon in history. The typical European country is a little more egalitarian: the top 1 percent own 25 percent of the total capital, and the middle class 35 percent. (A century ago the European middle class owned essentially no wealth at all.) If the ownership of wealth in fact becomes even more concentrated during the rest of the twenty-first century, the outlook is pretty bleak unless you have a taste for oligarchy.
Income from wealth is probably even more concentrated than wealth itself because, as Piketty notes, large blocks of wealth tend to earn a higher return than small ones. Some of this advantage comes from economies of scale, but more may come from the fact that very big investors have access to a wider range of investment opportunities than smaller investors. Income from work is naturally less concentrated than income from wealth. In Piketty’s stylized picture of the United States today, the top 1 percent earns about 12 percent of all labor income, the next 9 percent earn 23 percent, the middle class gets about 40 percent, and the bottom half about a quarter of income from work. Europe is not very different: the top 10 percent collect somewhat less and the other two groups a little more.
You get the picture: modern capitalism is an unequal society, and the rich-get-richer dynamic strongly suggests that it will get more so. But there is one more loose end to tie up, already hinted at, and it has to do with the advent of very high wage incomes. First, here are some facts about the composition of top incomes. About 60 percent of the income of the top 1 percent in the United States today is labor income. Only when you get to the top tenth of 1 percent does income from capital start to predominate. The income of the top hundredth of 1 percent is 70 percent from capital. The story for France is not very different, though the proportion of labor income is a bit higher at every level. Evidently there are some very high wage incomes, as if you didn't know.
This is a fairly recent development. In the 1960s, the top 1 percent of wage earners collected a little more than 5 percent of all wage incomes. This fraction has risen pretty steadily until nowadays, when the top 1 percent of wage earners receive 10–12 percent of all wages. This time the story is rather different in France. There the share of total wages going to the top percentile was steady at 6 percent until very recently, when it climbed to 7 percent. The recent surge of extreme inequality at the top of the wage distribution may be primarily an American development. Piketty, who with Emmanuel Saez has made a careful study of high-income tax returns in the United States, attributes this to the rise of what he calls “supermanagers.” The very highest income class consists to a substantial extent of top executives of large corporations, with very rich compensation packages. (A disproportionate number of these, but by no means all of them, come from the financial services industry.) With or without stock options, these large pay packages get converted to wealth and future income from wealth. But the fact remains that much of the increased income (and wealth) inequality in the United States is driven by the rise of these supermanagers.
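The capital-share arithmetic Solow walks through above is just the rate of return times the capital-income ratio. A minimal sketch of that computation (function name and numbers are the review's illustrative ones, not Piketty's estimates):

```python
# Solow's worked example: capital's share of national income is the
# rate of return on capital times the capital-income ratio.

def capital_share(rate_of_return, capital_income_ratio):
    """Share of national income accruing to capital (alpha = r * beta)."""
    return rate_of_return * capital_income_ratio

# r = 5 percent a year, capital stock worth six years of national income:
alpha = capital_share(0.05, 6)
labor_share = 1 - alpha
print(f"capital share: {alpha:.0%}, labor share: {labor_share:.0%}")
```

With the review's numbers this reproduces the 30/70 split between income from wealth and income from work.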
and Deirdre McCloskey (p critical): https://ejpe.org/journal/article/view/170
nice discussion of empirical economics, economic history, market failures and statism, etc., with several bon mots
Piketty’s great splash will undoubtedly bring many young economically interested scholars to devote their lives to the study of the past. That is good, because economic history is one of the few scientifically quantitative branches of economics. In economic history, as in experimental economics and a few other fields, the economists confront the evidence (as they do not for example in most macroeconomics or industrial organization or international trade theory nowadays).
...
Piketty gives a fine example of how to do it. He does not get entangled as so many economists do in the sole empirical tool they are taught, namely, regression analysis on someone else’s “data” (one of the problems is the word data, meaning “things given”: scientists should deal in capta, “things seized”). Therefore he does not commit one of the two sins of modern economics, the use of meaningless “tests” of statistical significance (he occasionally refers to “statistically insignificant” relations between, say, tax rates and growth rates, but I am hoping he does not suppose that a large coefficient is “insignificant” because R. A. Fisher in 1925 said it was). Piketty constructs or uses statistics of aggregate capital and of inequality and then plots them out for inspection, which is what physicists, for example, also do in dealing with their experiments and observations. Nor does he commit the other sin, which is to waste scientific time on existence theorems. Physicists, again, don’t. If we economists are going to persist in physics envy let us at least learn what physicists actually do. Piketty stays close to the facts, and does not, for example, wander into the pointless worlds of non-cooperative game theory, long demolished by experimental economics. He also does not have recourse to non-computable general equilibrium, which never was of use for quantitative economic science, being a branch of philosophy, and a futile one at that. On both points, bravissimo.
...
Since those founding geniuses of classical economics, a market-tested betterment (a locution to be preferred to “capitalism”, with its erroneous implication that capital accumulation, not innovation, is what made us better off) has enormously enriched large parts of a humanity now seven times larger in population than in 1800, and bids fair in the next fifty years or so to enrich everyone on the planet. [Not SSA or MENA...]
...
Then economists, many on the left but some on the right, in quick succession from 1880 to the present—at the same time that market-tested betterment was driving real wages up and up and up—commenced worrying about, to name a few of the pessimisms concerning “capitalism” they discerned: greed, alienation, racial impurity, workers’ lack of bargaining strength, workers’ bad taste in consumption, immigration of lesser breeds, monopoly, unemployment, business cycles, increasing returns, externalities, under-consumption, monopolistic competition, separation of ownership from control, lack of planning, post-War stagnation, investment spillovers, unbalanced growth, dual labor markets, capital insufficiency (William Easterly calls it “capital fundamentalism”), peasant irrationality, capital-market imperfections, public … [more]
news
org:mag
big-peeps
econotariat
economics
books
review
capital
capitalism
inequality
winner-take-all
piketty
wealth
class
labor
mobility
redistribution
growth-econ
rent-seeking
history
mostly-modern
trends
compensation
article
malaise
🎩
the-bones
whiggish-hegelian
cjones-like
multi
mokyr-allen-mccloskey
expert
market-failure
government
broad-econ
cliometrics
aphorism
lens
gallic
clarity
europe
critique
rant
optimism
regularizer
pessimism
ideology
behavioral-econ
authoritarianism
intervention
polanyi-marx
politics
left-wing
absolute-relative
regression-to-mean
legacy
empirical
data-science
econometrics
methodology
hypothesis-testing
physics
iron-age
mediterranean
the-classics
quotes
krugman
world
entrepreneurialism
human-capital
education
supply-demand
plots
manifolds
intersection
markets
evolution
darwinian
giants
old-anglo
egalitarianism-hierarchy
optimate
morality
ethics
envy
stagnation
nl-and-so-can-you
expert-experience
courage
stats
randy-ayndy
reason
intersection-connectedness
detail-architecture
april 2017 by nhaliday
Hidden Games | West Hunter
november 2016 by nhaliday
Since we are arguably a lot smarter than ants or bees, you might think that most adaptive personality variation in humans would be learned (a response to exterior cues) rather than heritable. Maybe some is, but much variation looks heritable. People don't seem to learn to be aggressive or meek – they just are, and in those tendencies resemble their biological parents. I wish I (or anyone else) understood better why this is so, but there are some notions floating around that may explain it. One is that jacks of all trades are masters of none: if you play the same role all the time, you'll be better at it than someone who keeps switching personalities. It could be the case that such switching is physiologically difficult and/or expensive. And in at least some cases, being predictable has social value. Someone who is known to be implacably aggressive will win at 'chicken'. Being known as the sort of guy who would rush into a burning building to save ugly strangers may pay off, even though actually running into that blaze does not.
...
This kind of game-theoretic genetic variation, driving distinct behavioral strategies, can have some really odd properties. For one thing, there can be more than one possible stable mix of behavioral types even in identical ecological situations. It’s a bit like dropping a marble onto a hilly landscape with many unconnected valleys – it will roll to the bottom of some valley, but initial conditions determine which valley. Small perturbations will not knock the marble out of the valley it lands in. In the same way, two human populations could fall into different states, different stable mixes of behavioral traits, for no reason at all other than chance and then stay there indefinitely. Populations are even more likely to fall into qualitatively different stable states when the ecological situations are significantly different.
...
What this means, I think, is that it is entirely possible that human societies fall into fundamentally different patterns because of genetic influences on behavior that are best understood via evolutionary game theory. Sometimes one population might have a psychological type that doesn't exist at all in another society, or the distribution could be substantially different. Sometimes these different social patterns will be predictable results of different ecological situations, sometimes the purest kind of chance. Sometimes the internal dynamics of these genetic systems will produce oscillatory (or chaotic!) changes in gene frequencies over time, which means changes in behavior and personality over time. In some cases, these internal genetic dynamics may be the fundamental reason for the rise and fall of empires. Societies in one stable distribution, in a particular psychological/behavioral/life history ESS, may simply be unable to replicate some of the institutions found in peoples in a different ESS.
Evolutionary forces themselves vary according to what ESS you're in. Which ESS you're in may be the most fundamental ethnic fact, and explain the most profound ethnic behavioral differences.
Look, everyone is always looking for the secret principles that underlie human society and history, some algebra that takes mounds of historical and archaeological data – the stuff that happens – and explains it in some compact way, lets us understand it, just as continental drift made a comprehensible story out of geology. On second thought, ‘everyone’ means that smallish fraction of researchers who are slaves of curiosity…
This approach isn’t going to explain everything – nothing will. But it might explain a lot, which would make it a hell of a lot more valuable than modern sociology or cultural anthropology. I would hope that an analysis of this sort might help explain fundamental long-term flavor differences between different human societies, differences in life-history strategies especially (dads versus cads, etc). If we get particularly lucky, maybe we’ll have some notions of why the Mayans got bored with civilization, why Chinese kids are passive at birth while European and African kids are feisty. We’ll see.
Of course we could be wrong. It’s going to have to be tested and checked: it’s not magic. It is based on the realization that the sort of morphs and game-theoretic balances we see in some nonhuman species are if anything more likely to occur in humans, because our societies are so complex, because the effectiveness of a course of action so often depends on the psychologies of other individuals – that and the obvious fact that people are not the same everywhere.
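The marble-in-a-valley point about multiple stable equilibria can be made concrete with replicator dynamics. The sketch below uses a stag-hunt payoff matrix with made-up numbers (my own illustration, not from the post): the game has two evolutionarily stable states, and two populations playing the identical game end up in different ones purely because of where they start.

```python
import numpy as np

# Hypothetical stag-hunt payoffs: rows = my strategy, cols = opponent's.
# Both pure strategies are stable; the interior equilibrium at x = 0.75
# is unstable, so initial conditions pick the "valley".
A = np.array([[4.0, 0.0],   # Stag vs (Stag, Hare)
              [3.0, 3.0]])  # Hare vs (Stag, Hare)

def replicator(x, steps=2000, dt=0.01):
    """Discrete-time replicator dynamics for the Stag share x."""
    for _ in range(steps):
        p = np.array([x, 1.0 - x])
        fitness = A @ p              # payoff of each strategy vs the mix
        mean = p @ fitness           # population-average payoff
        x += dt * x * (fitness[0] - mean)
    return x

# Two populations, identical ecology, different starting mixes:
print(round(replicator(0.80), 2))  # starts above 0.75 -> converges to 1.0
print(round(replicator(0.70), 2))  # starts below 0.75 -> converges to 0.0
```

Small perturbations of either endpoint decay back, matching the claim that once a population lands in a valley it stays there indefinitely.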
west-hunter
sapiens
game-theory
evolution
personality
thinking
essay
adversarial
GT-101
EGT
scitariat
tradeoffs
equilibrium
strategy
distribution
sociality
variance-components
flexibility
rigidity
diversity
biodet
behavioral-gen
nature
within-without
roots
explanans
psychology
social-psych
evopsych
intricacy
oscillation
pro-rata
iteration-recursion
insight
curiosity
letters
models
theory-practice
civilization
latin-america
farmers-and-foragers
age-of-discovery
china
asia
sinosphere
europe
the-great-west-whale
africa
developmental
empirical
humanity
courage
virtu
theory-of-mind
reputation
cybernetics
random
degrees-of-freedom
manifolds
occam
parsimony
turchin
broad-econ
deep-materialism
cultural-dynamics
anthropology
cliometrics
hari-seldon
learning
ecology
context
leadership
cost-benefit
apollonian-dionysian
detail-architecture
history
antiquity
pop-diff
comparison
plots
being-becoming
number
uniqueness
november 2016 by nhaliday
Overcoming Bias : Ems Give Longer Human Legacy
hanson ems futurism prediction thinking ratty legacy humanity singularity complement-substitute eden-heaven competition software ai flux-stasis technology long-short-run temperance time-preference institutions moloch coordination cooperate-defect number government bio evolution similarity cultural-dynamics interests telos-atelos impetus values formal-values detail-architecture
november 2016 by nhaliday
Overcoming Bias : Why Men Are Bad At “Feelings”
october 2016 by nhaliday
Mating in mammals has a basic asymmetry – females must invest more in each child than males. This can lead to an equilibrium where males focus on impressing and having sex with as many females as possible, while females do most of the child-rearing and choose impressive males.
Since human kids require extra child-rearing, human foragers developed pair-bonding, wherein for a few years a male gave substantial resource support to help raising a kid in trade for credible signs that the kid was his. Farmers strengthened such bonds into “marriage” — while both lived, the man gave resources sufficient to raise kids, and the woman only had sex with him. Such strong pair-bonds were held together not only by threats of social punishment, but also by strong feelings of attachment.
Such bonds can break, however. And because they are asymmetric, their betrayal is also asymmetric. Women betray bonds more by temporarily having fertile sex with other men, while men betray bonds more by directing resources more permanently to other women. So when farmer husbands and wives watch for signs of betrayal, they watch for different things. Husbands watch wives more for signs of a temporary inclination toward short-term mating with other men, while wives watch husbands more for signs of an inclination to shift toward a long-term resource-giving bond with other women. (Of course they both watch for both sorts of inclinations; the issue is emphasis.)
Emotionally, Men Are Far, Women Near: http://www.overcomingbias.com/2011/08/emotional-men-are-far-women-near.html
Now add two more assumptions:
1. Each gender is more emotional about the topic area (short vs. long term mating) where its feelings are more complex, layered, and opaque.
2. Long term mating thoughts tend to be in far mode, while short term mating thoughts tend to be in near mode. (Love is far, sex is near.)
Given these assumptions we should expect emotional men to be more in far mode, and emotional women to be more in near mode. (At least if mating-related emotions are a big part of emotions overall.) And since far modes tend to have a more positive mood, we should expect men to have more positive emotions, and women more negative.
In fact, even though overall men and women are just as emotional, men report more positive and less negative emotions than women. Also, after listening to an emotional story, male hormones help one remember its far-mode-abstract gist, while female hormones help one remember its near-mode-concrete details. (Supporting study quotes below.)
I’ve been wondering for a while why we don’t see a general correlation between near vs. far and emotionality, and I guess this explains it – the correlation is there but it flips between genders. This also helps explain common patterns in when the genders see each other as over- or under-emotional. Women are more emotional about details (e.g., his smell, that song), while men are more emotional about generalities (e.g., patriotism, fairness). Now for those study quotes:
Love Is An Interpretation: http://www.overcomingbias.com/2013/10/love-is-an-interpretation.html
What does it mean to feel loved: http://journals.sagepub.com/doi/abs/10.1177/0265407517724600
Cultural consensus and individual differences in felt love
We examined different romantic and nonromantic scenarios that occur in daily life and asked people if they perceived those scenarios as loving signals and if they aligned with the cultural agreement... More specifically, we found that male participants show less knowledge of the consensus on felt love than female participants... Men are more likely to think about sexual commitment and the pleasure of intercourse when thinking about love, whereas women are more prone to thinking about love as emotional commitment and security... In terms of relationship status, we also found that people in relationships know more about the consensus on felt love than people who are single... Our results also demonstrated personality differences in people’s ability to know the consensus on felt love. Based on our findings, people who were higher in agreeableness and/or higher in neuroticism showed more knowledge about the consensus on felt love... The finding that neuroticism is related to more knowledge of the consensus on felt love is surprising when considering the literature which typically links neuroticism to problematic relationship outcomes, such as divorce, low relationship satisfaction, marital instability, and shorter relationships... Results indicated that in this U.S. sample Black people showed less knowledge about the consensus on felt love than other racial and ethnic groups. This finding is expected because the majority of the U.S. sample recruited is of White racial/ethnic background and thus this majority (White) mostly influences the consensus on the indicators of love.
Lost For Words, On Purpose: https://www.overcomingbias.com/2014/07/lost-for-words-on-purpose.html
But consider the two cases of food and love/sex (which I’m lumping together here). It seems to me that while these topics are of comparable importance, we have a lot more ways to clearly express distinctions on foods than on love/sex. So when people want to express feelings on love/sex, they often retreat to awkward analogies and suggestive poetry.
hanson
thinking
gender
study
summary
near-far
gender-diff
emotion
ratty
sex
sexuality
signum
endocrine
correlation
phalanges
things
multi
psychology
social-psych
wordlessness
demographics
race
language
signaling
X-not-about-Y
dimensionality
degrees-of-freedom
consilience
homo-hetero
farmers-and-foragers
social-structure
number
duty
morality
symmetry
EEA
evopsych
hidden-motives
illusion
within-without
dennett
open-closed
hypocrisy
detail-architecture
time
apollonian-dionysian
long-short-run
cooperate-defect
Overcoming Bias : All Is Simple Parts Interacting Simply
physics thinking synthesis hanson idk len:long essay philosophy neuro dennett new-religion map-territory models occam minimalism big-picture analytical-holistic parsimony metameta ratty structure complex-systems reduction detail-architecture cybernetics lens emergent composition-decomposition elegance coupling-cohesion
september 2016 by nhaliday
Why Information Grows – Paul Romer
september 2016 by nhaliday
thinking like a physicist:
The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.
Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/
books
summary
review
economics
growth-econ
interdisciplinary
hmm
physics
thinking
feynman
tradeoffs
paul-romer
econotariat
🎩
🎓
scholar
aphorism
lens
signal-noise
cartoons
skeleton
s:**
giants
electromag
mutation
genetics
genomics
bits
nibble
stories
models
metameta
metabuch
problem-solving
composition-decomposition
structure
abstraction
zooming
examples
knowledge
human-capital
behavioral-econ
network-structure
info-econ
communication
learning
information-theory
applications
volo-avolo
map-territory
externalities
duplication
spreading
property-rights
lattice
multi
government
polisci
policy
counterfactual
insight
paradox
parallax
reduction
empirical
detail-architecture
methodology
crux
visual-understanding
theory-practice
matching
analytical-holistic
branches
complement-substitute
local-global
internet
technology
cost-benefit
investing
micro
signaling
limits
public-goodish
interpretation
elegance
meta:reading
intellectual-property
writing
Cognitive bias cheat sheet
list rationality thinking checklists cheatsheet metabuch org:med biases neurons bounded-cognition info-dynamics retention learning infographic pic multi rat-pack 🤖 spock 2016 reference illusion hypocrisy within-without dennett psychology cog-psych social-psych top-n heuristic novelty weird error stereotypes analogy inference stats methodology reason theory-of-mind near-far time meta:prediction generalization cost-benefit direction homo-hetero parsimony intricacy tradeoffs knowledge detail-architecture abstraction decision-making quality tribalism us-them explore-exploit judgement
september 2016 by nhaliday
Information Processing: High V, Low M
september 2016 by nhaliday
http://www.unz.com/article/iq-or-the-mathverbal-split/
Commenter Gwen on the blog Infoproc hints at a possible neurological basis for this phenomenon, stating that “one bit of speculation I have: the neuroimaging studies seem to consistently point towards efficiency of global connectivity rather than efficiency or other traits of individual regions; you could interpret this as a general factor across a wide battery of tasks because they are all hindered to a greater or lesser degree by simply difficulties in coordination while performing the task; so perhaps what causes Spearman is global connectivity becoming around as efficient as possible and no longer a bottleneck for most tasks, and instead individual brain regions start dominating additional performance improvements. So up to a certain level of global communication efficiency, there is a general intelligence factor but then specific abilities like spatial vs verbal come apart and cease to have common bottlenecks and brain tilts manifest themselves much more clearly.” [10] This certainly seems plausible enough. Let’s hope that those far smarter than ourselves will slowly get to the bottom of these matters over the coming decades.
...
My main prediction here then is that based on HBD, I don’t expect China or East Asia to rival the Anglosphere in the life sciences and medicine or other verbally loaded scientific fields. Perhaps China can mirror Japan in developing pockets of strengths in various areas of the life sciences. Given its significantly larger population, this might indeed translate into non-trivial high-end output in the fields of biology and biomedicine. The core strengths of East Asian countries though, as science in the region matures, will lie primarily in quantitative areas such as physics or chemistry, and this is where I predict the region will shine in the coming years. China’s recent forays into quantum cryptography provide one such example. [40]
...
In fact, as anyone who’s been paying attention has noticed, modern day tech is essentially a California and East Asian affair, with the former focused on software and the latter more so on hardware. American companies dominate in the realm of internet infrastructure and platforms, while East Asia is predominant in consumer electronics hardware, although as noted, China does have its own versions of general purpose tech giants in companies like Baidu, Alibaba, and Tencent. By contrast, Europe today has relatively few well known tech companies apart from some successful apps such as Spotify or Skype and entities such as Nokia or Ericsson. [24] It used to have more established technology companies back in the day, but the onslaught of competition from the US and East Asia put a huge dent in Europe’s technology industry.
...
Although many will point to institutional factors such as China or the United States enjoying large, unfragmented markets to explain the decline of European tech, I actually want to offer a more HBD oriented explanation not only for why Europe seems to lag in technology and engineering relative to America and East Asia, but also for why tech in the United States is skewed towards software, while tech in East Asia is skewed towards hardware. I believe that the various phenomenon described above can all be explained by one common underlying mechanism, namely the math/verbal split. Simply put, if you’re really good at math, you gravitate towards hardware. If your skills are more verbally inclined, you gravitate towards software. In general, your chances of working in engineering and technology are greatly bolstered by being spatially and quantitatively adept.
...
If my assertions here are correct, I predict that over the coming decades, we’ll increasingly see different groups of people specialize in areas where they’re most proficient at. This means that East Asians and East Asian societies will be characterized by a skew towards quantitative STEM fields such as physics, chemistry, and engineering and towards hardware and high-tech manufacturing, while Western societies will be characterized by a skew towards the biological sciences and medicine, social sciences, humanities, and software and services. [41] Likewise, India also appears to be a country whose strengths lie more in software and services as opposed to hardware and manufacturing. My fundamental thesis is that all of this is ultimately a reflection of underlying HBD, in particular the math/verbal split. I believe this is the crucial insight lacking in the analyses others offer.
http://www.unz.com/article/iq-or-the-mathverbal-split/#comment-2230751
Sailer In TakiMag: What Does the Deep History of China and India Tell Us About Their Futures?: http://takimag.com/article/a_pair_of_giants_steve_sailer/print#axzz5BHqRM5nD
In an age of postmodern postnationalism that worships diversity, China is old-fashioned. It’s homogeneous, nationalist, and modernist. China seems to have utilitarian 1950s values.
For example, Chinese higher education isn’t yet competitive on the world stage, but China appears to be doing a decent job of educating the masses in the basics. High Chinese scores on the international PISA test for 15-year-olds shouldn’t be taken at face value, but it’s likely that China is approaching first-world norms in providing equality of opportunity through adequate schooling.
Due to censorship and language barriers, Chinese individuals aren’t well represented in English-language cyberspace. Yet in real life, the Chinese build things, such as bridges that don’t fall down, and they make stuff, employing tens of millions of proletarians in their factories.
The Chinese seem, on average, to be good with their hands, which is something that often makes American intellectuals vaguely uncomfortable. But at least the Chinese proles are over there merely manufacturing things cheaply, so American thinkers don’t resent them as much as they do American tradesmen.
Much of the class hatred in America stems from the suspicions of the intelligentsia that plumbers and mechanics are using their voodoo cognitive ability of staring at 3-D physical objects and somehow understanding why they are broken to overcharge them for repairs. Thus it’s only fair, America’s white-collar managers assume, that they export factory jobs to lower-paid China so that they can afford to throw manufactured junk away when it breaks and buy new junk rather than have to subject themselves to the humiliation of admitting to educationally inferior American repairmen that they don’t understand what is wrong with their own gizmos.
...
This Chinese lack of diversity is out of style, and yet it seems to make it easier for the Chinese to get things done.
In contrast, India appears more congenial to current-year thinkers. India seems postmodern and postnationalist, although it might be more accurately called premodern and prenationalist.
...
Another feature that makes our commentariat comfortable with India is that Indians don’t seem to be all that mechanically facile, perhaps especially not the priestly Brahmin caste, with whom Western intellectuals primarily interact.
And the Indians tend to be more verbally agile than the Chinese and more adept at the kind of high-level abstract thinking required by modern computer science, law, and soft major academia. Thousands of years of Brahmin speculations didn’t do much for India’s prosperity, but somehow have prepared Indians to make fortunes in 21st-century America.
http://www.sciencedirect.com/science/article/pii/S0160289616300757
- Study used two moderately large American community samples.
- Verbal and not nonverbal ability drives relationship between ability and ideology.
- Ideology and ability appear more related when ability assessed professionally.
- Self-administered or nonverbal ability measures will underestimate this relationship.
https://www.unz.com/gnxp/the-universal-law-of-interpersonal-dynamics/
Every once in a while I realize something with my conscious mind that I’ve understood implicitly for a long time. Such a thing happened to me yesterday, while reading a post on Stalin, by Amritas. It is this:
S = P + E
Social Status equals Political Capital plus Economic Capital
...
Here’s an example of its explanatory power: If we assume that a major human drive is to maximize S, we can predict that people with high P will attempt to minimize the value of E (since S-maximization is a zero-sum game). And so we see. Throughout history there has been an attempt to ennoble P while stigmatizing E. Conversely, throughout history, people with high E use it to acquire P. Thus, in today’s society we see that socially adept people, who have inborn P skills, tend to favor socialism or big government – where their skills are most valuable, while economically productive people are often frustrated by the fact that their concrete contribution to society is deplored.
Now, you might ask yourself why the reverse isn’t true, why people with high P don’t use it to acquire E, while people with high E don’t attempt to stigmatize P? Well, I think that is true. But, while the equation is mathematically symmetrical, the nature of P-talent and E-talent is not. P-talent can be used to acquire E from the E-adept, but the E-adept are no match for the P-adept in the attempt to stigmatize P. Furthermore, P is endogenous to the system, while E is exogenous. In other words, the P-adept have the ability to manipulate the system itself to make P-talent more valuable in acquiring E, while the E-adept have no ability to manipulate the external environment to make E-talent more valuable in acquiring P.
...
1. All institutions will tend to be dominated by the P-adept
2. All institutions that have no in-built exogenous criteria for measuring its members’ status will inevitably be dominated by the P-adept
3. Universities will inevitably be dominated by the P-adept
4. Within a university, humanities and social sciences will be more dominated by the P-adept than … [more]
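The zero-sum S = P + E argument above can be sketched numerically. The toy below is my own construction, not from the post: treat status as each agent's normalized share of weighted esteem, S_i ∝ w·P_i + (1−w)·E_i. Because shares sum to one, shifting the cultural weight w toward P transfers status from the E-adept to the P-adept without anyone producing more, which is exactly the incentive the post ascribes to stigmatizing E.

```python
def status_shares(P, E, w):
    """Normalized status shares under weight w on political capital.

    P, E: lists of political / economic capital per agent (toy numbers).
    Shares always sum to 1, so the game over w is zero-sum.
    """
    raw = [w * p + (1 - w) * e for p, e in zip(P, E)]
    total = sum(raw)
    return [round(r / total, 3) for r in raw]

P = [9, 2]   # agent 0 is P-adept, agent 1 is E-adept
E = [2, 9]

print(status_shares(P, E, 0.5))  # symmetric weighting -> [0.5, 0.5]
print(status_shares(P, E, 0.8))  # P ennobled, E stigmatized -> [0.691, 0.309]
```

The asymmetry the post emphasizes is that w itself is set socially, i.e. by P-talent, so the P-adept can move w while the E-adept cannot move it back.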
iq science culture critique lol hsu pre-2013 scitariat rationality epistemic error bounded-cognition descriptive crooked realness being-right info-dynamics truth language intelligence kumbaya-kult quantitative-qualitative multi study psychology cog-psych social-psych ideology politics elite correlation roots signaling psychometrics status capital human-capital things phalanges chart metabuch institutions higher-ed academia class-warfare symmetry coalitions strategy class s:* c:** communism inequality socs-and-mops twitter social commentary gnon unaffiliated zero-positive-sum rot gnxp adversarial 🎩 stylized-facts gender gender-diff cooperate-defect ratty yvain ssc tech sv identity-politics culture-war reddit subculture internet 🐸 discrimination trump systematic-ad-hoc urban britain brexit populism diversity literature fiction media military anomie essay rhetoric martial MENA history mostly-modern stories government polisci org:popup right-wing propaganda counter-r
Commenter Gwen on the blog Infoproc hints at a possible neurological basis for this phenomenon, stating that “one bit of speculation I have: the neuroimaging studies seem to consistently point towards efficiency of global connectivity rather than efficiency or other traits of individual regions; you could interpret this as a general factor across a wide battery of tasks because they are all hindered to a greater or lesser degree by simply difficulties in coordination while performing the task; so perhaps what causes Spearman is global connectivity becoming around as efficient as possible and no longer a bottleneck for most tasks, and instead individual brain regions start dominating additional performance improvements. So up to a certain level of global communication efficiency, there is a general intelligence factor but then specific abilities like spatial vs verbal come apart and cease to have common bottlenecks and brain tilts manifest themselves much more clearly.” [10] This certainly seems plausible enough. Let’s hope that those far smarter than ourselves will slowly get to the bottom of these matters over the coming decades.
...
My main prediction here then is that based on HBD, I don’t expect China or East Asia to rival the Anglosphere in the life sciences and medicine or other verbally loaded scientific fields. Perhaps China can mirror Japan in developing pockets of strengths in various areas of the life sciences. Given its significantly larger population, this might indeed translate into non-trivial high-end output in the fields of biology and biomedicine. The core strengths of East Asian countries though, as science in the region matures, will lie primarily in quantitative areas such as physics or chemistry, and this is where I predict the region will shine in the coming years. China’s recent forays into quantum cryptography provide one such example. [40]
...
In fact, as anyone who’s been paying attention has noticed, modern day tech is essentially a California and East Asian affair, with the former focused on software and the latter more so on hardware. American companies dominate in the realm of internet infrastructure and platforms, while East Asia is predominant in consumer electronics hardware, although as noted, China does have its own versions of general purpose tech giants in companies like Baidu, Alibaba, and Tencent. By contrast, Europe today has relatively few well known tech companies apart from some successful apps such as Spotify or Skype and entities such as Nokia or Ericsson. [24] It used to have more established technology companies back in the day, but the onslaught of competition from the US and East Asia put a huge dent in Europe’s technology industry.
...
Although many will point to institutional factors such as China or the United States enjoying large, unfragmented markets to explain the decline of European tech, I actually want to offer a more HBD oriented explanation not only for why Europe seems to lag in technology and engineering relative to America and East Asia, but also for why tech in the United States is skewed towards software, while tech in East Asia is skewed towards hardware. I believe that the various phenomena described above can all be explained by one common underlying mechanism, namely the math/verbal split. Simply put, if you’re really good at math, you gravitate towards hardware. If your skills are more verbally inclined, you gravitate towards software. In general, your chances of working in engineering and technology are greatly bolstered by being spatially and quantitatively adept.
...
If my assertions here are correct, I predict that over the coming decades, we’ll increasingly see different groups of people specialize in the areas where they’re most proficient. This means that East Asians and East Asian societies will be characterized by a skew towards quantitative STEM fields such as physics, chemistry, and engineering and towards hardware and high-tech manufacturing, while Western societies will be characterized by a skew towards the biological sciences and medicine, social sciences, humanities, and software and services. [41] Likewise, India also appears to be a country whose strengths lie more in software and services as opposed to hardware and manufacturing. My fundamental thesis is that all of this is ultimately a reflection of underlying HBD, in particular the math/verbal split. I believe this is the crucial insight lacking in the analyses others offer.
http://www.unz.com/article/iq-or-the-mathverbal-split/#comment-2230751
Sailer In TakiMag: What Does the Deep History of China and India Tell Us About Their Futures?: http://takimag.com/article/a_pair_of_giants_steve_sailer/print#axzz5BHqRM5nD
In an age of postmodern postnationalism that worships diversity, China is old-fashioned. It’s homogeneous, nationalist, and modernist. China seems to have utilitarian 1950s values.
For example, Chinese higher education isn’t yet competitive on the world stage, but China appears to be doing a decent job of educating the masses in the basics. High Chinese scores on the international PISA test for 15-year-olds shouldn’t be taken at face value, but it’s likely that China is approaching first-world norms in providing equality of opportunity through adequate schooling.
Due to censorship and language barriers, Chinese individuals aren’t well represented in English-language cyberspace. Yet in real life, the Chinese build things, such as bridges that don’t fall down, and they make stuff, employing tens of millions of proletarians in their factories.
The Chinese seem, on average, to be good with their hands, which is something that often makes American intellectuals vaguely uncomfortable. But at least the Chinese proles are over there merely manufacturing things cheaply, so American thinkers don’t resent them as much as they do American tradesmen.
Much of the class hatred in America stems from the suspicions of the intelligentsia that plumbers and mechanics are using their voodoo cognitive ability of staring at 3-D physical objects and somehow understanding why they are broken to overcharge them for repairs. Thus it’s only fair, America’s white-collar managers assume, that they export factory jobs to lower-paid China so that they can afford to throw manufactured junk away when it breaks and buy new junk rather than have to subject themselves to the humiliation of admitting to educationally inferior American repairmen that they don’t understand what is wrong with their own gizmos.
...
This Chinese lack of diversity is out of style, and yet it seems to make it easier for the Chinese to get things done.
In contrast, India appears more congenial to current-year thinkers. India seems postmodern and postnationalist, although it might be more accurately called premodern and prenationalist.
...
Another feature that makes our commentariat comfortable with India is that Indians don’t seem to be all that mechanically facile, perhaps especially not the priestly Brahmin caste, with whom Western intellectuals primarily interact.
And the Indians tend to be more verbally agile than the Chinese and more adept at the kind of high-level abstract thinking required by modern computer science, law, and soft major academia. Thousands of years of Brahmin speculations didn’t do much for India’s prosperity, but somehow have prepared Indians to make fortunes in 21st-century America.
http://www.sciencedirect.com/science/article/pii/S0160289616300757
- Study used two moderately large American community samples.
- Verbal and not nonverbal ability drives relationship between ability and ideology.
- Ideology and ability appear more related when ability assessed professionally.
- Self-administered or nonverbal ability measures will underestimate this relationship.
https://www.unz.com/gnxp/the-universal-law-of-interpersonal-dynamics/
september 2016 by nhaliday
Overcoming Bias : A Future Of Pipes
august 2016 by nhaliday
The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.
We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.
Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.
Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?
Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.
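Hanson’s scaling argument can be sanity-checked with a little arithmetic. The toy sketch below is not from the post; the function names are invented, and it merely restates the square/cube relationship and the claimed logarithmic pipe overhead:

```python
import math

def surface_to_volume_ratio(scale: float) -> float:
    # Surface area grows as scale**2 and volume as scale**3,
    # so the surface available per unit of volume falls as 1/scale.
    return scale ** 2 / scale ** 3

def fractal_pipe_overhead(city_volume: float) -> float:
    # Fractal pipe layouts pay only a logarithmic overhead in total
    # city size, per the post's claim.
    return math.log2(city_volume)

# Doubling the scale halves the surface-to-volume ratio...
assert math.isclose(surface_to_volume_ratio(2.0),
                    surface_to_volume_ratio(1.0) / 2)

# ...but each doubling of city volume adds only a constant amount of
# pipe overhead, rather than doubling it.
assert math.isclose(fractal_pipe_overhead(16) - fractal_pipe_overhead(8),
                    fractal_pipe_overhead(8) - fractal_pipe_overhead(4))
```

So naive cooling capacity per unit volume shrinks linearly with scale, while the fractal-plumbing cost grows only logarithmically, which is why cooling need not cap city size.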
hanson futurism prediction street-fighting essay len:short ratty computation hardware thermo structure composition-decomposition complex-systems magnitude analysis urban-rural power-law phys-energy detail-architecture efficiency economics supply-demand labor planning long-term physics temperature flux-stasis fluid measure technology frontier speedometer career cost-benefit identity stylized-facts objektbuch data trivia cocktail aphorism
august 2016 by nhaliday
checkcheckzz/system-design-interview: System design interview for IT company
systems engineering tech career guide jobs recruiting cheatsheet repo pragmatic system-design 🖥 paste links working-stiff transitions scaling-tech progression reference interview-prep move-fast-(and-break-things) puzzles examples client-server detail-architecture accretion
july 2016 by nhaliday
Could a neuroscientist understand a microprocessor? | Hacker News
cool bio neuro thinking commentary hn study methodology critique gedanken analogy operational brain-scan complex-systems p:someday neuro-nitgrit nitty-gritty ideas model-organism structure lens research science detail-architecture interdisciplinary
may 2016 by nhaliday
System Design Interview Cheatsheet | Hacker News
systems engineering guide recruiting tech career jobs commentary hn pragmatic system-design 🖥 techtariat paste minimum-viable working-stiff transitions progression interview-prep move-fast-(and-break-things) client-server detail-architecture cheatsheet checklists metabuch accretion
april 2016 by nhaliday
Notes Essays—Peter Thiel’s CS183: Startup—Stanford, Spring 2012
business startups strategy course thiel contrarianism barons definite-planning entrepreneurialism lecture-notes skunkworks innovation competition market-power winner-take-all usa anglosphere duplication education higher-ed law ranking success envy stanford princeton harvard elite zero-positive-sum war truth realness capitalism markets darwinian rent-seeking google facebook apple microsoft amazon capital scale network-structure tech business-models twitter social media games frontier time rhythm space musk mobile ai transportation examples recruiting venture metabuch metameta skeleton crooked wisdom gnosis-logos thinking polarization synchrony allodium antidemos democracy things exploratory dimensionality nationalism-globalism trade technology distribution moments personality phalanges stereotypes tails plots visualization creative nietzschean thick-thin psych-architecture wealth class morality ethics status extra-introversion info-dynamics narrative stories fashun myth the-classics literature big-peeps crime
february 2016 by nhaliday