nhaliday + structure   97

What is your tale of lasagna code? (Code with too many layers) - DEV Community 👩‍💻👨‍💻
“In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers).” - Roberto Waltman
org:com  techtariat  quotes  aphorism  oop  jvm  programming  abstraction  intricacy  direct-indirect  engineering  structure  tip-of-tongue  degrees-of-freedom  coupling-cohesion  scala  error 
8 days ago by nhaliday
Cleaner, more elegant, and harder to recognize | The Old New Thing
Really easy
Writing bad error-code-based code
Writing bad exception-based code

Hard
Writing good error-code-based code

Really hard
Writing good exception-based code

--

Really easy
Recognizing that error-code-based code is badly-written
Recognizing the difference between bad error-code-based code and
not-bad error-code-based code.

Hard
Recognizing that error-code-based code is not badly-written

Really hard
Recognizing that exception-based code is badly-written
Recognizing that exception-based code is not badly-written
Recognizing the difference between bad exception-based code
and not-bad exception-based code
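
Not from Chen's post — a minimal Python sketch of the asymmetry he is describing. The badness of error-code-based code is visible on the page (ignored return values), while badly-written exception-based code looks clean until you reason about every point where a raise can interrupt it:

```python
# Illustrative only: why bad exception-based code is harder to *recognize*.
import json
import os

def save_error_code_style(path, records):
    """Error-code style: failures are return values. Sloppiness is visible,
    because an unchecked return value sits right there on the page."""
    try:
        f = open(path, "w")
    except OSError:
        return -1                      # caller must check this
    for r in records:
        try:
            f.write(json.dumps(r) + "\n")
        except OSError:
            f.close()
            return -2                  # partial write reported explicitly
    f.close()
    return 0

def save_exception_style_bad(path, records):
    """Exception style, written badly: it *looks* clean, but if a write
    raises halfway through, the file is left half-written and never closed.
    Nothing on the page flags the problem."""
    f = open(path, "w")
    for r in records:
        f.write(json.dumps(r) + "\n")
    f.close()

def save_exception_style_good(path, records):
    """Exception style, written carefully: cleanup is guaranteed and a
    partial file is not left behind on failure."""
    try:
        with open(path, "w") as f:
            for r in records:
                f.write(json.dumps(r) + "\n")
    except OSError:
        try:
            os.remove(path)            # roll back partial output
        except OSError:
            pass
        raise
```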

https://ra3s.com/wordpress/dysfunctional-programming/2009/07/15/return-code-vs-exception-handling/
https://nedbatchelder.com/blog/200501/more_exception_handling_debate.html
techtariat  org:com  microsoft  working-stiff  pragmatic  carmack  error  error-handling  programming  rhetoric  debate  critique  pls  search  structure  cost-benefit  comparison  summary  intricacy  certificates-recognition  commentary  multi  contrarianism  correctness  quality  code-dive  cracker-prog 
17 days ago by nhaliday
Interview with Donald Knuth | Interview with Donald Knuth | InformIT
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.
nibble  interview  giants  expert-experience  programming  cs  software  contrarianism  carmack  oss  prediction  trends  linux  concurrency  desktop  comparison  checking  debugging  stories  engineering  hmm  idk  algorithms  books  debate  flux-stasis  duplication  parsimony  best-practices  writing  documentation  latex  intricacy  structure  hardware  caching  workflow  editors  composition-decomposition  coupling-cohesion  exposition  technical-writing  thinking  cracker-prog 
5 weeks ago by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
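
The simplest of these needs no tooling at all. A hand-rolled random-testing sketch (illustrative Python; the buggy `clamp` function is made up for the example):

```python
# Minimal hand-rolled random testing against an oracle.
import random

def clamp(x, lo, hi):
    """Function under test: intended to clamp x into [lo, hi]."""
    if x < lo:
        return lo
    if x > hi:
        return lo          # seeded bug: should return hi
    return x

def random_test_clamp(trials=10_000, seed=0):
    """Throw random inputs at the function and check it against a trusted
    oracle plus a property that must hold for *any* input."""
    rng = random.Random(seed)
    for _ in range(trials):
        lo, hi = sorted(rng.uniform(-1e6, 1e6) for _ in range(2))
        x = rng.uniform(-2e6, 2e6)
        got = clamp(x, lo, hi)
        expected = min(max(x, lo), hi)   # trusted reference (oracle)
        assert lo <= got <= hi, f"out of range: x={x}, lo={lo}, hi={hi}"
        assert got == expected, f"wrong value: x={x}, got {got}, want {expected}"

if __name__ == "__main__":
    random_test_clamp()   # a few hand-picked cases might miss the bug;
                          # random inputs expose it almost immediately
```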

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.
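
For concreteness, roughly what a Hypothesis property-based test looks like (`given` and `strategies` are the real Hypothesis API; the run-length encoder under test is made up for the example):

```python
# pip install hypothesis
from hypothesis import given
from hypothesis import strategies as st

def run_length_encode(s):
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_roundtrip(s):
    # Instead of hand-picking examples, state a property that must hold for
    # every input; Hypothesis searches for counterexamples and shrinks any
    # failure it finds to a minimal reproducing case.
    assert run_length_decode(run_length_encode(s)) == s
```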

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes more prevalent than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
8 weeks ago by nhaliday
Continuous Code Quality | SonarSource
they have a cyclomatic complexity rule
$150/year for dev edition (needed for C++ but not Java/Python)
devtools  software  ruby  saas  programming  python  formal-methods  checking  c(pp)  jvm  structure  intricacy  graphs  golang  scala  metrics  javascript  dotnet  quality 
9 weeks ago by nhaliday
Why is Software Engineering so difficult? - James Miller
basic message: No silver bullet!

most interesting nuggets:
Scale and Complexity
- Windows 7 > 50 million LOC
Expect a staggering number of bugs.

Bugs?
- Well-written C and C++ code contains some 5 to 10 errors per 100 LOC after a clean compile, but before inspection and testing.
- At a 5% rate any 50 MLOC program will start off with some 2.5 million bugs.

Bug removal
- Testing typically exercises only half the code.

Better bug removal?
- There are better ways to do testing that do produce fantastic programs.
- Are we sure about this fact?
* No, it's only an opinion!
* In general Software Engineering has ....
NO FACTS!

So why not do this?
- The costs are unbelievable.
- It’s not unusual for the qualification process to produce a half page of documentation for each line of code.
pdf  slides  engineering  nitty-gritty  programming  best-practices  roots  comparison  cost-benefit  software  systematic-ad-hoc  structure  error  frontier  debugging  checking  formal-methods  context  detail-architecture  intricacy  big-picture  system-design  correctness  scale  scaling-tech  shipping  money  data  stylized-facts  street-fighting  objektbuch  pro-rata  estimate  pessimism  degrees-of-freedom  volo-avolo  no-go  things  thinking  summary 
9 weeks ago by nhaliday
ON THE GEOMETRY OF NASH EQUILIBRIA AND CORRELATED EQUILIBRIA
Abstract: It is well known that the set of correlated equilibrium distributions of an n-player noncooperative game is a convex polytope that includes all the Nash equilibrium distributions. We demonstrate an elementary yet surprising result: the Nash equilibria all lie on the boundary of the polytope.
pdf  nibble  papers  ORFE  game-theory  optimization  geometry  dimensionality  linear-algebra  equilibrium  structure  differential  correlation  iidness  acm  linear-programming  spatial  characterization  levers 
10 weeks ago by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.

...

And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), which output a 0 only when both inputs are 1. If the correct output is 0 but the gate outputs 1, at least one input must be 0 when it should be 1. That erroneous input is, itself, the output of an upstream NAND gate with at least one incorrect input. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.
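
A sketch of that mechanical procedure (illustrative Python; the circuit and the injected fault are made up): compare the faulty run against a known-good one and walk a wrong signal backwards until its cause is found.

```python
# Trace a wrong output backwards through NAND gates to the broken part.

def nand(a, b):
    return 0 if (a == 1 and b == 1) else 1

# gate name -> (input, input); inputs are primary inputs ("x", "y") or gates
CIRCUIT = {
    "g1": ("x", "y"),
    "g2": ("x", "g1"),
    "g3": ("g1", "y"),
    "out": ("g2", "g3"),
}

def evaluate(circuit, primary, broken_gate=None):
    """Compute every signal; if broken_gate is given, its output is flipped
    (simulating the kind of bug we were hunting)."""
    values = dict(primary)
    def value(name):
        if name not in values:
            a, b = circuit[name]
            v = nand(value(a), value(b))
            if name == broken_gate:
                v = 1 - v
            values[name] = v
        return values[name]
    for g in circuit:
        value(g)
    return values

def trace_fault(circuit, good, bad, signal):
    """Recursively follow a wrong signal back to its source."""
    if signal not in circuit:                 # reached a primary input
        return f"primary input {signal} is wrong"
    a, b = circuit[signal]
    for inp in (a, b):
        if good[inp] != bad[inp]:             # a wrong input explains the wrong output
            return trace_fault(circuit, good, bad, inp)
    return f"gate {signal} is broken (inputs are right, output is wrong)"

if __name__ == "__main__":
    primary = {"x": 1, "y": 1}
    good = evaluate(CIRCUIT, primary)
    bad = evaluate(CIRCUIT, primary, broken_gate="g3")
    assert good["out"] != bad["out"]
    print(trace_fault(CIRCUIT, good, bad, "out"))   # -> gate g3 is broken ...
```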
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness 
11 weeks ago by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right-handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages 
april 2018 by nhaliday
Society of Mind - Wikipedia
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

https://en.wikipedia.org/wiki/Modularity_of_mind

The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
books  ideas  speculation  structure  composition-decomposition  complex-systems  neuro  ai  psychology  cog-psych  intelligence  reduction  wiki  giants  philosophy  number  cohesion  diversity  systematic-ad-hoc  detail-architecture  pdf  study  neuro-nitgrit  brain-scan  nitty-gritty  network-structure  graphs  graph-theory  models  whole-partial-many  evopsych  eden  reference  psych-architecture  article  coupling-cohesion 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
What are the Laws of Biology?
The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.

These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
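
A toy numerical illustration of the robustness point (my example, not the article's): negative auto-regulation pins a protein's steady-state level near the repression threshold, so it barely moves when the production rate swings, whereas a constitutively expressed protein tracks the production rate linearly.

```python
# Toy comparison: constitutive expression vs. negative auto-regulation.

def steady_state(production, alpha=1.0, K=1.0, n=4, autoregulated=True,
                 dt=0.01, steps=20_000):
    """Integrate dX/dt = production_term - alpha*X until it settles."""
    x = 0.0
    for _ in range(steps):
        if autoregulated:
            prod = production / (1.0 + (x / K) ** n)   # Hill repression by X itself
        else:
            prod = production                          # constitutive expression
        x += dt * (prod - alpha * x)
    return x

if __name__ == "__main__":
    for beta in (5.0, 10.0, 20.0):                     # 4x swing in production rate
        simple = steady_state(beta, autoregulated=False)
        nar = steady_state(beta, autoregulated=True)
        print(f"beta={beta:5.1f}  constitutive={simple:6.2f}  autoregulated={nar:6.2f}")
    # The constitutive level scales directly with beta; the auto-regulated
    # level stays pinned near K despite the same fluctuation in input.
```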
scitariat  reflection  proposal  ideas  thinking  conceptual-vocab  lens  bio  complex-systems  selection  evolution  flux-stasis  network-structure  structure  composition-decomposition  IEEE  robust  signal-noise  perturbation  interdisciplinary  graphs  circuits  🌞  big-picture  hi-order-bits  nibble  synthesis 
november 2017 by nhaliday
multivariate analysis - Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian? - Cross Validated
The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological", is a bit misguided.

Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

note: there is a multivariate central limit theorem, so such applications have no problem
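
A standard construction that makes this concrete (the textbook counterexample, not necessarily the one the answer had in mind):

```latex
% Normal marginals, non-Gaussian joint.
Let $X \sim N(0,1)$ and let $W$ be independent of $X$ with
$P(W = 1) = P(W = -1) = \tfrac{1}{2}$. Define $Y = WX$.
By symmetry $Y \sim N(0,1)$, so both marginals are standard normal.
But $P(X + Y = 0) = P(W = -1) = \tfrac{1}{2}$, so $X + Y$ has an atom at $0$
and is not normal; hence $(X, Y)$ cannot be bivariate normal, since every
linear combination of a jointly Gaussian vector must be Gaussian.
```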
nibble  q-n-a  overflow  stats  math  acm  probability  distribution  gotchas  intricacy  characterization  structure  composition-decomposition  counterexample  limits  concentration-of-measure 
october 2017 by nhaliday
design patterns - What is MVC, really? - Software Engineering Stack Exchange
The model manages fundamental behaviors and data of the application. It can respond to requests for information, respond to instructions to change the state of its information, and even to notify observers in event-driven systems when information changes. This could be a database, or any number of data structures or storage systems. In short, it is the data and data-management of the application.

The view effectively provides the user interface element of the application. It'll render data from the model into a form that is suitable for the user interface.

The controller receives user input and makes calls to model objects and the view to perform appropriate actions.

...

Though this answer has 21 upvotes, I find the sentence "This could be a database, or any number of data structures or storage systems. (tl;dr : it's the data and data-management of the application)" horrible. The model is the pure business/domain logic. And this can and should be so much more than data management of an application. I also differentiate between domain logic and application logic. A controller should not ever contain business/domain logic or talk to a database directly.
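
A minimal, framework-free sketch of the separation being described (illustrative Python; the counter domain is made up):

```python
class CounterModel:
    """Model: owns the data and the domain rules, knows nothing about UI."""
    def __init__(self):
        self._value = 0
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def increment(self, amount=1):
        if amount <= 0:
            raise ValueError("amount must be positive")   # domain rule lives here
        self._value += amount
        for notify in self._observers:
            notify(self._value)


class ConsoleView:
    """View: renders model data; holds no business logic."""
    def render(self, value):
        print(f"counter = {value}")


class CounterController:
    """Controller: translates user input into calls on the model."""
    def __init__(self, model, view):
        self.model = model
        model.subscribe(view.render)     # view observes the model

    def handle_input(self, command):
        if command == "inc":
            self.model.increment()
        elif command.startswith("inc "):
            self.model.increment(int(command.split()[1]))


if __name__ == "__main__":
    controller = CounterController(CounterModel(), ConsoleView())
    for cmd in ["inc", "inc 5", "inc"]:   # simulated user input
        controller.handle_input(cmd)
```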
q-n-a  stackex  explanation  concept  conceptual-vocab  structure  composition-decomposition  programming  engineering  best-practices  pragmatic  jargon  thinking  metabuch  working-stiff  tech  🖥  checklists 
october 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software  coupling-cohesion 
june 2017 by nhaliday
Kinship Systems, Cooperation and the Evolution of Culture
In the data, societies with loose ancestral kinship ties cooperate and trust broadly, which is apparently sustained through a belief in moralizing gods, universally applicable moral principles, feelings of guilt, and large-scale institutions. Societies with a historically tightly knit kinship structure, on the other hand, exhibit strong in-group favoritism: they cheat on and are distrusting of out-group members, but readily support in-group members in need. This cooperation scheme is enforced by moral values of in-group loyalty, conformity to tight social norms, emotions of shame, and strong local institutions.

Henrich, Joseph, The Secret of Our Success: How Culture is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, Princeton University Press, 2015.
—, W.E.I.R.D People: How Westerners became Individualistic, Self-Obsessed, Guilt-Ridden, Analytic, Patient, Principled and Prosperous, Princeton University Press, n.d.
—, Jean Ensminger, Richard McElreath, Abigail Barr, Clark Barrett, Alexander Bolyanatz, Juan Camilo Cardenas, Michael Gurven, Edwins Gwako, Natalie Henrich et al., “Markets, Religion, Community Size, and the Evolution of Fairness and Punishment,” Science, 2010, 327 (5972), 1480–1484.

...

—, —, Will M. Gervais, Aiyana K. Willard, Rita A. McNamara, Edward Slingerland, and Joseph Henrich, “The Cultural Evolution of Prosocial Religions,” Behavioral and Brain Sciences, 2016, 39, e1.

...

Purzycki, Benjamin Grant, Coren Apicella, Quentin D. Atkinson, Emma Cohen, Rita Anne McNamara, Aiyana K. Willard, Dimitris Xygalatas, Ara Norenzayan, and Joseph Henrich, “Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality,” Nature, 2016.

Table 1 summarizes
Figure 1 has map of kinship tightness
Figure 2 has cheating and in-group vs. out-group
Table 2 has regression
Figure 3 has universalism and shame-guilt
Figure 4 has individualism-collectivism/conformity
Table 4 has radius of trust, Table 5 same for within-country variation (ethnic)
Tables 7 and 8 do universalism

Haidt moral foundations:
In line with the research hypothesis discussed in Section 3, the analysis employs two dependent variables, i.e., (i) the measure of in-group loyalty, and (ii) an index of the importance of communal values relative to the more universal (individualizing) ones. That is, the hypothesis is explicitly not about some societies being more or less moral than others, but merely about heterogeneity in the relative importance that people attach to structurally different types of values. To construct the index, I compute the first principal component of fairness / reciprocity, harm / care, in-group / loyalty, and respect / authority. The resulting score endogenously has the appealing property that – in line with the research hypothesis – it loads positively on the first two values and negatively on the latter two, with roughly equal weights, see Appendix F for details. I compute country-level scores by averaging responses by country of residence of respondents. Importantly, in Enke (2017) I document that – in a nationally representative sample of Americans – this same index of moral communalism is strongly correlated with individuals’ propensity to favor their local community over society as a whole in issues ranging from taxation and redistribution to donations and volunteering. Thus, there is evidence that the index of communal moral values captures economically meaningful behavioral heterogeneity.

The coevolution of kinship systems, cooperation, and culture: http://voxeu.org/article/kinship-cooperation-and-culture
- Benjamin Enke

pretty short

good linguistics reference cited in this paper:
On the biological and cultural evolution of shame: Using internet search tools to weight values in many cultures: https://arxiv.org/abs/1401.1100v2
Here we explore the relative importance between shame and guilt by using Google Translate [>_>...] to produce translation for the words "shame", "guilt", "pain", "embarrassment" and "fear" to the 64 languages covered. We also explore the meanings of these concepts among the Yanomami, a horticulturist hunter-gatherer tribe in the Orinoquia. Results show that societies previously described as “guilt societies” have more words for guilt than for shame, but *the large majority*, including the societies previously described as “shame societies”, *have more words for shame than for guilt*. Results are consistent with evolutionary models of shame which predict a wide scatter in the relative importance between guilt and shame, suggesting that cultural evolution of shame has continued the work of biological evolution, and that neither provides a strong adaptive advantage to either shame or guilt [? did they not just say that most languages favor shame?].

...

The roots of the word "shame" are thought to derive from an older word meaning "to cover". The emotion of shame has clear physiological consequences. Its facial and corporal expression is a human universal, as was recognized already by Darwin (5). Looking away, reddening of the face, sinking the head, obstructing direct view, hiding the face and downing the eyelids, are the unequivocal expressions signaling shame. Shame might be an emotion specific to humans, as no clear description of it is known for animals.
...
Classical Greek philosophers, such as Aristotle, explicitly mention shame as a key element in building society.

Guilt is the emotion of being responsible for the commission of an offense, however, it seems to be distinct from shame. Guilt says “what I did was not good”, whereas shame says “I am no good"(2). For Benedict (1), shame is a violation of cultural or social values, while guilt feelings arise from violations of one's internal values.

...

Unobservable emotions such as guilt may be of value to the receiver but constitute, in economic terms, “private information”. Thus, in economic and biological terms, adaptive pressures acting upon the evolution of shame differ from those acting on that of guilt.

Shame has evolutionary advantages to both individual and society, but the lack of shame also has evolutionary advantages as it allows cheating and thus benefiting from public goods without paying the costs of its build up.

...

Dodds (7) coined the distinction between guilt and shame cultures and postulated that in Greek cultural history, shame as a social value was displaced, at least in part, by guilt in guiding moral behavior.
...
"[...]True guilt cultures rely on an internalized conviction of sin as the enforcer of good behavior, not, as shame cultures do, on external sanctions. Guilt cultures emphasize punishment and forgiveness as ways of restoring the moral order; shame cultures stress self-denial and humility as ways of restoring the social order”.

...

For example, Wikipedia is less error prone than Encyclopedia Britannica (12, 17); and Google Translate is as accurate as more traditional methods (35).

Table 1, Figure 1

...

This regression is close to a proportional line of two words for shame for each word for guilt.

...

For example, in the case of Chinese, no overlap between the five concepts is reported using Google Translate in Figure 1. Yet, linguistic-conceptual studies of guilt and shame revealed an important overlap between several of these concepts in Chinese (29).

...

Our results using Google Translate show no overlap between Guilt and Shame in any of the languages studied.
...
[lol:] Examples of the context when they feel “kili” are: a tiger appears in the forest; you kill somebody from another community; your daughter is going to die; everybody looks at your underwear; you are caught stealing; you soil your pants while among others; a doctor gives you an injection; you hit your wife and others find out; you are unfaithful to your husband and others find out; you are going to be hit with a machete.

...

Linguistic families do not aggregate according to the relationship of the number of synonyms for shame and guilt (Figure 3).

...

The ratios are 0.89 and 2.5 respectively, meaning a historical transition from guilt-culture in Latin to shame-culture in Italian, suggesting a historical development that is inverse to that suggested by Dodds for ancient to classical Greek. [I hope their Latin corpus doesn't include stuff from Catholics...]

Joe Henrich presentation: https://www.youtube.com/watch?v=f-unD4ZzWB4

relevant video:
Johnny Cash - God's Gonna Cut You Down: https://www.youtube.com/watch?v=eJlN9jdQFSc

https://en.wikipedia.org/wiki/Guilt_society
https://en.wikipedia.org/wiki/Shame_society
https://en.wikipedia.org/wiki/Guilt-Shame-Fear_spectrum_of_cultures
this says Dems more guilt-driven but Peter Frost says opposite here (and matches my perception of the contemporary breakdown both including minorities and focusing only on whites): https://pinboard.in/u:nhaliday/b:9b75881f6861
http://honorshame.com/global-map-of-culture-types/

this is an amazing paper:
The Origins of WEIRD Psychology: https://psyarxiv.com/d6qhu/
Recent research not only confirms the existence of substantial psychological variation around the globe but also highlights the peculiarity of populations that are Western, Educated, Industrialized, Rich and Democratic (WEIRD). We propose that much of this variation arose as people psychologically adapted to differing kin-based institutions—the set of social norms governing descent, marriage, residence and related domains. We further propose that part of the variation in these institutions arose historically from the Catholic Church’s marriage and family policies, which contributed to the dissolution of Europe’s traditional kin-based institutions, leading eventually to the predominance of nuclear families and impersonal institutions. By combining data on 20 psychological outcomes with historical measures of both kinship and Church exposure, we find support for these ideas in a comprehensive array of analyses across countries, among European regions and between individuals with … [more]
study  economics  broad-econ  pseudoE  roots  anthropology  sociology  culture  cultural-dynamics  society  civilization  religion  theos  kinship  individualism-collectivism  universalism-particularism  europe  the-great-west-whale  orient  integrity  morality  ethics  trust  institutions  things  pdf  piracy  social-norms  cooperate-defect  patho-altruism  race  world  developing-world  pop-diff  n-factor  ethnography  ethnocentrism  🎩  🌞  us-them  occident  political-econ  altruism  self-interest  books  todo  multi  old-anglo  big-peeps  poetry  aristos  homo-hetero  north-weingast-like  maps  data  modernity  tumblr  social  ratty  gender  history  iron-age  mediterranean  the-classics  christianity  speculation  law  public-goodish  tribalism  urban  china  asia  sinosphere  decision-making  polanyi-marx  microfoundations  open-closed  alien-character  axelrod  eden  growth-econ  social-capital  values  phalanges  usa  within-group  group-level  regional-scatter-plots  comparison  psychology  social-psych  behavioral-econ  ec 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each other, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
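
A toy Monte Carlo version of that argument (my sketch; the factor ranges below are made up for illustration, not the paper's numbers):

```python
# Multiplying one middling value per factor hides how much of the
# probability mass sits at "nobody else out there".
import math
import random

# (low, high) bounds for each Drake-style factor; the huge ranges on the
# biological factors are the whole point.
FACTOR_BOUNDS = [
    (1.0, 100.0),   # star formation rate
    (0.1, 1.0),     # fraction of stars with planets
    (0.1, 10.0),    # habitable planets per such star
    (1e-6, 1.0),    # fraction where life arises
    (1e-3, 1.0),    # fraction developing intelligence
    (1e-2, 1.0),    # fraction becoming detectable
    (1e3, 1e9),     # detectable lifetime (years)
]

def log_uniform(lo, hi, rng):
    """Sample uniformly in log space: order-of-magnitude uncertainty."""
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

def sample_n(rng):
    n = 1.0
    for lo, hi in FACTOR_BOUNDS:
        n *= log_uniform(lo, hi, rng)
    return n

if __name__ == "__main__":
    rng = random.Random(0)
    samples = [sample_n(rng) for _ in range(100_000)]

    point = 1.0
    for lo, hi in FACTOR_BOUNDS:
        point *= math.sqrt(lo * hi)   # plug in one middling value per factor

    mean = sum(samples) / len(samples)
    p_alone = sum(s < 1.0 for s in samples) / len(samples)
    print(f"point-estimate N:         {point:,.1f}")
    print(f"mean of sampled N:        {mean:,.1f}")
    print(f"P(N < 1) ('we're alone'): {p_alone:.0%}")
    # Both the point estimate and the mean suggest plenty of civilizations,
    # yet a substantial share of the probability mass still sits below N = 1.
```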

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Estimating the number of unseen variants in the human genome
To find all common variants (frequency at least 1%) the number of individuals that need to be sequenced is small (∼350) and does not differ much among the different populations; our data show that, subject to sequence accuracy, the 1000 Genomes Project is likely to find most of these common variants and a high proportion of the rarer ones (frequency between 0.1 and 1%). The data reveal a rule of diminishing returns: a small number of individuals (∼150) is sufficient to identify 80% of variants with a frequency of at least 0.1%, while a much larger number (> 3,000 individuals) is necessary to find all of those variants.
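The diminishing-returns arithmetic can be sketched directly: with 2n chromosomes sampled, a variant at population frequency f is seen with probability 1 − (1 − f)^(2n). Averaging that over an assumed neutral ~1/f site frequency spectrum, with perfect sequencing accuracy, gives numbers in the same ballpark as those quoted above; both assumptions are simplifications, not the paper's model.

```python
# Sketch: expected fraction of variants at population frequency >= f_min that are
# seen at least once when sequencing n diploid individuals (2n chromosomes),
# assuming a neutral ~1/f site frequency spectrum and perfect sequencing accuracy.
# Both assumptions are simplifications; this is not the paper's model.
import numpy as np

def detected_fraction(n_individuals, f_min, f_max=0.5):
    # A grid uniform in log-frequency has density ~1/f, i.e. it already weights
    # variants like a neutral SFS, so a plain mean over the grid approximates
    # the SFS-weighted expected discovery fraction.
    freqs = np.logspace(np.log10(f_min), np.log10(f_max), 2000)
    p_seen = 1.0 - (1.0 - freqs) ** (2 * n_individuals)
    return float(p_seen.mean())

for n in (50, 150, 350, 1000, 3000):
    print(f"n={n:<5} detect f>=0.1%: {detected_fraction(n, 1e-3):.2f}   "
          f"detect f>=1%: {detected_fraction(n, 1e-2):.2f}")
```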

A map of human genome variation from population-scale sequencing: http://www.internationalgenome.org/sites/1000genomes.org/files/docs/nature09534.pdf

Scientists using data from the 1000 Genomes Project, which sequenced one thousand individuals from 26 human populations, found that "a typical [individual] genome differs from the reference human genome at 4.1 million to 5.0 million sites … affecting 20 million bases of sequence."[11] Nearly all (>99.9%) of these sites are small differences, either single nucleotide polymorphisms or brief insertion-deletions in the genetic sequence, but structural variations account for a greater number of base-pairs than the SNPs and indels.[11]

Human genetic variation: https://en.wikipedia.org/wiki/Human_genetic_variation

Singleton Variants Dominate the Genetic Architecture of Human Gene Expression: https://www.biorxiv.org/content/early/2017/12/15/219238
study  sapiens  genetics  genomics  population-genetics  bioinformatics  data  prediction  cost-benefit  scale  scaling-up  org:nat  QTL  methodology  multi  pdf  curvature  convexity-curvature  nonlinearity  measurement  magnitude  🌞  distribution  missing-heritability  pop-structure  genetic-load  mutation  wiki  reference  article  structure  bio  preprint  biodet  variance-components  nibble  chart 
may 2017 by nhaliday
Backwardness | West Hunter
Back around the time I was born, anthropologists sometimes talked about some cultures being more advanced than others. This was before they decided that all cultures are equal, except that some are more equal than others.

...

I’ve been trying to estimate the gap between Eurasian and Amerindian civilization. The Conquistadors were, in a sense, invaders from the future: but just how far in the future? What point in the history of the Middle East is most similar to the state of the Amerindian civilizations of 1500 AD?

I would argue that the Amerindian civilizations were less advanced than the Akkadian Empire, circa 2300 BC. The Mayans had writing, but were latecomers in metallurgy. The Inca had tin and arsenical bronze, but didn’t have written records. The Akkadians had both – as well as draft animals and the wheel. You can maybe push the time as far back as 2600 BC, since Sumerian cuneiform was in pretty full swing by then. So the Amerindians were around four thousand years behind.

https://westhunt.wordpress.com/2012/02/10/backwardness/#comment-1520
Excepting the use of iron, sub-Saharan Africa, excepting Ethiopia, was well behind the most advanced Amerindian civilizations circa 1492. I am right now resisting the temptation to get into a hammer-and-tongs discussion of Isandlwana, Rorke’s Drift, Blood River, etc. – and we would all be better off if I continued to do so.

https://en.wikipedia.org/wiki/Battle_of_Blood_River
The Battle of Blood River (Afrikaans: Slag van Bloedrivier; Zulu: iMpi yaseNcome) is the name given for the battle fought between _470 Voortrekkers_ ("Pioneers"), led by Andries Pretorius, and _an estimated 80,000 Zulu attackers_ on the bank of the Ncome River on 16 December 1838, in what is today KwaZulu-Natal, South Africa. Casualties amounted to over 3,000 of king Dingane's soldiers dead, including two Zulu princes competing with Prince Mpande for the Zulu throne. _Three Pioneers commando members were lightly wounded_, including Pretorius himself.

https://en.wikipedia.org/wiki/Battle_of_Rorke%27s_Drift
https://en.wikipedia.org/wiki/Battle_of_Isandlwana

https://twitter.com/tcjfs/status/895719621218541568
In the morning of Tuesday, June 15, while we sat at Dr. Adams's, we talked of a printed letter from the Reverend Herbert Croft, to a young gentleman who had been his pupil, in which he advised him to read to the end of whatever books he should begin to read. JOHNSON. 'This is surely a strange advice; you may as well resolve that whatever men you happen to get acquainted with, you are to keep to them for life. A book may be good for nothing; or there may be only one thing in it worth knowing; are we to read it all through? These Voyages, (pointing to the three large volumes of Voyages to the South Sea, which were just come out) WHO will read them through? A man had better work his way before the mast, than read them through; they will be eaten by rats and mice, before they are read through. There can be little entertainment in such books; one set of Savages is like another.' BOSWELL. 'I do not think the people of Otaheite can be reckoned Savages.' JOHNSON. 'Don't cant in defence of Savages.' BOSWELL. 'They have the art of navigation.' JOHNSON. 'A dog or a cat can swim.' BOSWELL. 'They carve very ingeniously.' JOHNSON. 'A cat can scratch, and a child with a nail can scratch.' I perceived this was none of the mollia tempora fandi; so desisted.

Déjà Vu all over again: America and Europe: https://westhunt.wordpress.com/2014/11/12/deja-vu-all-over-again-america-and-europe/
In terms of social organization and technology, it seems to me that Mesolithic Europeans (around 10,000 years ago) were like archaic Amerindians before agriculture. Many Amerindians on the west coast were still like that when Europeans arrived – foragers with bows and dugout canoes.

On the other hand, the farmers of Old Europe were in important ways a lot like English settlers: the pioneers planted wheat, raised pigs and cows and sheep, hunted deer, expanded and pushed aside the previous peoples, without much intermarriage. Sure, Anglo pioneers were literate, had guns and iron, were part of a state, all of which gave them a much bigger edge over the Amerindians than Old Europe ever had over the Mesolithic hunter-gatherers and made the replacement about ten times faster – but in some ways it was similar. Some of this similarity was the product of historical accidents: the local Amerindians were thin on the ground, like Europe’s Mesolithic hunters – but not so much because farming hadn’t arrived (it had in most of the United States), more because of an ongoing population crash from European diseases.

On the gripping hand, the Indo-Europeans seem to have been something like the Plains Indians: sure, they raised cattle rather than living off abundant wild buffalo, but they too were transformed into troublemakers by the advent of the horse. Both still did a bit of farming. They were also alike in that neither of them really knew what they were doing: neither were the perfected product of thousands of years of horse nomadry. The Indo-Europeans were the first raiders on horseback, and the Plains Indians had only been at it for a century, without any opportunity to learn state-of-the-art tricks from Eurasian horse nomads.

The biggest difference is that the Indo-Europeans won, while the Plains Indians were corralled into crappy reservations.

Quantitative historical analysis uncovers a single dimension of complexity that structures global variation in human social organization: http://www.pnas.org/content/early/2017/12/20/1708800115.full
Do human societies from around the world exhibit similarities in the way that they are structured, and show commonalities in the ways that they have evolved? These are long-standing questions that have proven difficult to answer. To test between competing hypotheses, we constructed a massive repository of historical and archaeological information known as “Seshat: Global History Databank.” We systematically coded data on 414 societies from 30 regions around the world spanning the last 10,000 years. We were able to capture information on 51 variables reflecting nine characteristics of human societies, such as social scale, economy, features of governance, and information systems. Our analyses revealed that these different characteristics show strong relationships with each other and that a single principal component captures around three-quarters of the observed variation. Furthermore, we found that different characteristics of social complexity are highly predictable across different world regions. These results suggest that key aspects of social organization are functionally related and do indeed coevolve in predictable ways. Our findings highlight the power of the sciences and humanities working together to rigorously test hypotheses about general rules that may have shaped human history.
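A minimal sketch of that style of analysis: PCA on standardized complexity variables, reporting the share of variance the first component captures. The data below is synthetic (one latent "complexity" factor plus noise) standing in for the Seshat variables, purely for illustration.

```python
# Sketch of the analysis style: PCA on standardized complexity variables,
# reporting the share of variance captured by the first principal component.
# The data is synthetic (one latent "complexity" factor plus noise) standing in
# for the Seshat variables -- purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_societies, n_vars = 414, 9
latent = rng.normal(size=(n_societies, 1))            # one latent complexity score per society
loadings = rng.uniform(0.7, 1.0, size=(1, n_vars))    # how strongly each variable reflects it
X = latent @ loadings + 0.5 * rng.normal(size=(n_societies, n_vars))

X_std = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals = np.sort(np.linalg.eigvalsh(np.cov(X_std, rowvar=False)))[::-1]
print("variance explained by PC1:", round(float(eigvals[0] / eigvals.sum()), 2))
# with one strong latent factor, PC1 lands near three-quarters of the variance
```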

The General Social Complexity Factor Is A Thing: https://www.gnxp.com/WordPress/2017/12/21/the-general-social-complexity-factor-is-a-thing/
west-hunter  scitariat  discussion  civilization  westminster  egalitarianism-hierarchy  history  early-modern  age-of-discovery  comparison  europe  usa  latin-america  farmers-and-foragers  technology  the-great-west-whale  divergence  conquest-empire  modernity  ranking  aphorism  rant  ideas  innovation  multi  africa  poast  war  track-record  death  nihil  nietzschean  lmao  wiki  attaq  data  twitter  social  commentary  gnon  unaffiliated  right-wing  inequality  quotes  big-peeps  old-anglo  aristos  literature  expansionism  world  genetics  genomics  gene-flow  gavisti  roots  analogy  absolute-relative  studying  sapiens  anthropology  archaeology  truth  primitivism  evolution  study  org:nat  turchin  broad-econ  deep-materialism  social-structure  sociology  cultural-dynamics  variance-components  exploratory  matrix-factorization  things  🌞  structure  scale  dimensionality  degrees-of-freedom  infrastructure  leviathan  polisci  religion  philosophy  government  institutions  money  monetary-fiscal  population  density  urban-rural  values  phalanges  cultu 
may 2017 by nhaliday
Typos | West Hunter
In a simple model, a given mutant has an equilibrium frequency of μ/s, where μ is the mutation rate from good to bad alleles and s is the size of the selective disadvantage. To estimate the total impact of mutation at that locus, you multiply the frequency by the expected harm, s: which means that the fitness decrease (from effects at that locus) is just μ, the mutation rate. If we assume that these fitness effects are multiplicative, the total fitness decrease (also called ‘mutational load’) is approximately 1 − exp(−U), where U = Σ2μ is the total number of new harmful mutations per diploid individual.
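Plugging illustrative numbers into those formulas (a minimal sketch; the values of μ, s, and U are assumptions, not figures from the post):

```python
# Illustrative numbers for the mutation-selection balance formulas quoted above;
# mu, s, and U are assumptions for the sketch, not estimates from the post.
import math

mu = 1e-6   # per-locus rate of good-to-bad mutation
s = 0.01    # selective disadvantage of the bad allele

eq_freq = mu / s              # equilibrium frequency of the bad allele
per_locus_load = eq_freq * s  # fitness lost at this locus = mu

U = 1.0                                # new harmful mutations per diploid individual
total_load = 1 - math.exp(-U)          # multiplicative-effects approximation

print(f"equilibrium frequency: {eq_freq:.1e}")
print(f"per-locus fitness decrease: {per_locus_load:.1e} (equals mu)")
print(f"total mutational load for U = {U}: {total_load:.2f}")
```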

https://westhunt.wordpress.com/2012/10/17/more-to-go-wrong/

https://westhunt.wordpress.com/2012/07/13/sanctuary/
interesting, suggestive comment on Africa:
https://westhunt.wordpress.com/2012/07/13/sanctuary/#comment-3671
https://westhunt.wordpress.com/2012/07/14/too-darn-hot/
http://infoproc.blogspot.com/2012/07/rare-variants-and-human-genetic.html
https://westhunt.wordpress.com/2012/07/18/changes-in-attitudes/
https://westhunt.wordpress.com/2012/08/24/men-and-macaques/
I have reason to believe that few people understand genetic load very well, probably for self-referential reasons, but better explanations are possible.

One key point is that the amount of neutral variation is determined by the long-term mutational rate and population history, while the amount of deleterious variation [genetic load] is set by the selective pressures and the prevailing mutation rate over a much shorter time scale. For example, if you consider the class of mutations that reduce fitness by 1%, what matters is the past few thousand years, not the past few tens or hundreds of thousands of years.

...

So, assuming that African populations have more neutral variation than non-African populations (which is well-established), what do we expect to see when we compare the levels of probably-damaging mutations in those two populations? If the Africans and non-Africans had experienced essentially similar mutation rates and selective pressures over the past few thousand years, we would expect to see the same levels of probably-damaging mutations. Bottlenecks that happened at the last glacial maximum or in the expansion out of Africa are irrelevant – too long ago to matter.

But we don’t. The amount of rare synonymous stuff is about 22% higher in Africans. The amount of rare nonsynonymous stuff (usually at least slightly deleterious) is 20.6% higher. The number of rare variants predicted to be more deleterious is ~21.6% higher. The amount of stuff predicted to be even more deleterious is ~27% higher. The number of harmful looking loss-of-function mutations (yet more deleterious) is 25% higher.

It looks as if the excess grows as the severity of the mutations increases. There is a scenario in which this is possible: the mutation rate in Africa has increased recently. Not yesterday, but, say, over the past few thousand years.
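A minimal numeric sketch of why a recent rise in the mutation rate produces exactly that severity-dependent excess, under deterministic mutation–selection balance with illustrative parameters:

```python
# Sketch of the argument: after a recent rise in the mutation rate, variant classes
# under stronger selection re-equilibrate faster (timescale ~1/s generations), so
# they already show the full excess while weakly selected classes lag behind.
# Parameters are illustrative assumptions, not estimates from the post.
def excess_after(generations, s, mu_old, mu_new):
    # Deterministic mutation-selection balance, x' ~= x*(1 - s) + mu.
    x = mu_old / s                       # old equilibrium frequency
    for _ in range(generations):
        x = x * (1 - s) + mu_new
    return x / (mu_old / s) - 1          # fractional excess over the old equilibrium

mu_old, mu_new = 1e-6, 1.2e-6            # a 20% recent increase in the mutation rate
t = 200                                  # generations, roughly the past few thousand years
for s in (0.1, 0.01, 0.001, 0.0001):
    print(f"s={s:<7} excess after {t} generations: {excess_after(t, s, mu_old, mu_new):.1%}")
# the excess approaches the full 20% for strongly selected classes and shrinks for weak ones
```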

...

What is the most likely cause of such variations in the mutation rate? Right now, I’d say differences in average paternal age. We know that modest differences (~5 years) in average paternal age can easily generate ~20% differences in the mutation rate. Such between-population differences in mutation rates seem quite plausible, particularly since the Neolithic.
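Back-of-envelope for the paternal-age figure; the inputs below are commonly cited rough values assumed for illustration, not numbers from the post.

```python
# Back-of-envelope for the paternal-age claim: assume ~2 extra de novo mutations per
# additional year of father's age against ~50 paternal-origin mutations at the
# reference age (both rough, commonly cited values; assumptions for illustration).
extra_per_year = 2.0
baseline_paternal_mutations = 50.0
delta_years = 5

relative_increase = extra_per_year * delta_years / baseline_paternal_mutations
print(f"~{relative_increase:.0%} higher mutation rate from {delta_years} extra years of paternal age")
# -> ~20%, the magnitude quoted above
```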
https://westhunt.wordpress.com/2016/04/10/bugs-versus-drift/
more recent: https://westhunt.wordpress.com/2017/06/06/happy-families-are-all-alike-every-unhappy-family-is-unhappy-in-its-own-way/#comment-92491
Probably not, but the question is complex: depends on the shape of the deleterious mutational spectrum [which we don’t know], ancient and recent demography, paternal age, and the extent of truncation selection in the population.
west-hunter  scitariat  discussion  bio  sapiens  biodet  evolution  mutation  genetics  genetic-load  population-genetics  nibble  stylized-facts  methodology  models  equilibrium  iq  neuro  neuro-nitgrit  epidemiology  selection  malthus  temperature  enhancement  CRISPR  genomics  behavioral-gen  multi  poast  africa  roots  pop-diff  ideas  gedanken  paternal-age  🌞  environment  speculation  gene-drift  longevity  immune  disease  parasites-microbiome  scifi-fantasy  europe  asia  race  migration  hsu  study  summary  commentary  shift  the-great-west-whale  nordic  intelligence  eden  long-short-run  debate  hmm  idk  explanans  comparison  structure  occident  mediterranean  geography  within-group  correlation  direction  volo-avolo  demographics  age-generation  measurement  data  applicability-prereqs  aging 
may 2017 by nhaliday
A Unified Theory of Randomness | Quanta Magazine
Beyond the one-dimensional random walk, there are many other kinds of random shapes. There are varieties of random paths, random two-dimensional surfaces, random growth models that approximate, for example, the way a lichen spreads on a rock. All of these shapes emerge naturally in the physical world, yet until recently they’ve existed beyond the boundaries of rigorous mathematical thought. Given a large collection of random paths or random two-dimensional shapes, mathematicians would have been at a loss to say much about what these random objects shared in common.

Yet in work over the past few years, Sheffield and his frequent collaborator, Jason Miller, a professor at the University of Cambridge, have shown that these random shapes can be categorized into various classes, that these classes have distinct properties of their own, and that some kinds of random objects have surprisingly clear connections with other kinds of random objects. Their work forms the beginning of a unified theory of geometric randomness.
news  org:mag  org:sci  math  research  probability  profile  structure  geometry  random  popsci  nibble  emergent  org:inst 
february 2017 by nhaliday
The language of geometry: Fast comprehension of geometrical primitives and rules in human adults and preschoolers
The child’s acquisition of language has been suggested to rely on the ability to build hierarchically structured representations from sequential inputs. Does a similar mechanism also underlie the acquisition of geometrical rules? Here, we introduce a learning situation in which human participants had to grasp simple spatial sequences and try to predict the next location. Sequences were generated according to a “geometrical language” endowed with simple primitives of symmetries and rotations, and combinatorial rules. Analyses of error rates of various populations—a group of French educated adults, two groups of 5-year-old French children, and a rare group of teenagers and adults from an Amazonian population, the Mundurukus, who have limited access to formal schooling and a reduced geometrical lexicon—revealed that subjects’ learning indeed rests on internal language-like representations. A theoretical model, based on minimum description length, proved to fit participants’ behavior well, suggesting that human subjects “compress” spatial sequences into a minimal internal rule or program.
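A toy sketch of the MDL idea: spatial sequences on an 8-point "octagon", a tiny language of repeated rotational steps, and prediction of the next location from the shortest program consistent with the prefix seen so far. The primitive set and the description-length measure are simplified assumptions, not the paper's actual geometrical language (which also includes symmetries).

```python
# Toy MDL sketch: find the shortest cyclic step pattern that reproduces an
# observed sequence of octagon locations, then predict the next location.
# Simplified assumptions, not the paper's actual geometrical language.
from itertools import product

N_POINTS = 8  # locations arranged on an octagon

def run_program(start, steps, length):
    """Generate a sequence by cycling through a fixed pattern of rotational steps."""
    seq = [start]
    for i in range(length - 1):
        seq.append((seq[-1] + steps[i % len(steps)]) % N_POINTS)
    return seq

def shortest_program(observed):
    """Shortest cyclic step pattern (description length = pattern length) fitting the prefix."""
    for pattern_len in range(1, 5):                      # prefer shorter descriptions
        for steps in product(range(-3, 5), repeat=pattern_len):
            if run_program(observed[0], steps, len(observed)) == observed:
                return steps
    return None

observed = [0, 2, 4, 6]                 # e.g. repeated +2 rotations around the octagon
steps = shortest_program(observed)
prediction = run_program(observed[0], steps, len(observed) + 1)[-1]
print(f"compressed rule: repeat steps {steps}; predicted next location: {prediction}")
# -> compressed rule: repeat steps (2,); predicted next location: 0
```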
study  psychology  cog-psych  visuo  spatial  structure  neurons  occam  computation  models  eden  intelligence  neuro  learning  language  psych-architecture  🌞  retrofit 
february 2017 by nhaliday
probability - Variance of maximum of Gaussian random variables - Cross Validated
In full generality it is rather hard to find the right order of magnitude of the variance of a Gaussian supremum, since the tools from concentration theory are always suboptimal for the maximum function.

order ~ 1/log n
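A quick Monte Carlo check of that scaling (constants and lower-order terms ignored; this only confirms the order of magnitude):

```python
# Monte Carlo check of the ~1/log(n) order for Var(max of n iid standard Gaussians).
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1_000, 10_000):
    maxima = np.array([rng.standard_normal(n).max() for _ in range(2_000)])
    print(f"n={n:<6} var(max) = {maxima.var():.3f}   1/log(n) = {1/np.log(n):.3f}")
```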
q-n-a  overflow  stats  probability  acm  orders  tails  bias-variance  moments  concentration-of-measure  magnitude  tidbits  distribution  yoga  structure  extrema  nibble 
february 2017 by nhaliday
Mikhail Leonidovich Gromov - Wikipedia
Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties.

Gromov is also interested in mathematical biology,[11] the structure of the brain and the thinking process, and the way scientific ideas evolve.[8]
math  people  giants  russia  differential  geometry  topology  math.GR  wiki  structure  meta:math  meta:science  interdisciplinary  bio  neuro  magnitude  limits  science  nibble  coarse-fine  wild-ideas  convergence  info-dynamics  ideas 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including ΦDM (discrete memoryless), ΦE (empirical), and ΦAR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f: S^n → S^n. Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
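A toy version of that partition test for a tiny Boolean system. This is an all-or-nothing dependence check rather than Tononi's actual Φ, and the two update functions are made up for illustration.

```python
# Toy partition test: for a small Boolean system with update function f, look for
# a split into two equal halves A and B such that A's updates never depend on B
# and vice versa ("no global integration"). All-or-nothing dependence check, not
# Tononi's Phi; brute force over all state pairs, so tiny n only.
from itertools import combinations, product

def depends_on(f, n, target_bits, source_bits):
    """True if some bit in target_bits can change under f when only source_bits vary."""
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            if all(x[i] == y[i] for i in range(n) if i not in source_bits):
                fx, fy = f(x), f(y)
                if any(fx[i] != fy[i] for i in target_bits):
                    return True
    return False

def decoupling_partition(f, n):
    """Return a partition (A, B) with no cross-dependence, or None if f is 'integrated'."""
    for a in combinations(range(n), n // 2):
        b = tuple(i for i in range(n) if i not in a)
        if not depends_on(f, n, a, b) and not depends_on(f, n, b, a):
            return a, b
    return None

def decoupled(x):
    # two independent pairs of bits that just swap with each other
    return (x[1], x[0], x[3], x[2])

def integrated(x):
    # each bit updates to the parity of the other three bits (global integration)
    return tuple(sum(x[j] for j in range(4) if j != i) % 2 for i in range(4))

print("decoupled :", decoupling_partition(decoupled, 4))    # -> ((0, 1), (2, 3))
print("integrated:", decoupling_partition(integrated, 4))   # -> None
```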
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition  coupling-cohesion 
january 2017 by nhaliday