nhaliday + wire-guided   89

"Performance Matters" by Emery Berger - YouTube
Stabilizer is a tool that enables statistically sound performance evaluation, making it possible to measure the impact of optimizations and to conclude, for example, that the -O2 and -O3 optimization levels are statistically indistinguishable from noise (sadly true).

Since compiler optimizations have run out of steam, we need better profiling support, especially for modern concurrent, multi-threaded applications. Coz is a new "causal profiler" that lets programmers optimize for throughput or latency, and which pinpoints and accurately predicts the impact of optimizations.

- randomize extraneous factors like code layout and stack size to avoid spurious speedups
- simulate speeding up one component of a concurrent system (to assess an optimization's effect before attempting it) by slowing down the complement (everything but that component)
- latency vs. throughput, Little's law (see the sketch below)
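A minimal sketch of the application side of a causal-profiling run, assuming Coz's documented C/C++ macro API (coz.h and COZ_PROGRESS per the project README; treat the details as illustrative):

```cpp
// Mark one unit of useful work as a throughput progress point for Coz.
// Typically run as: coz run --- ./app
#include <coz.h>

void handle_request() {
    // ... process one request ...
    COZ_PROGRESS;  // Coz "virtually speeds up" a chosen line by delaying all
                   // other threads, then measures the effect on this counter
}
```

For latency rather than throughput, the talk pairs begin/end progress points and applies Little's law (L = λW: items in flight = arrival rate × average latency), so latency can be inferred from throughput counts.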
video  presentation  programming  engineering  nitty-gritty  performance  devtools  compilers  latency-throughput  concurrency  legacy  causation  wire-guided  let-me-see  manifolds  pro-rata  tricks  endogenous-exogenous  control  random  signal-noise  comparison  marginal  llvm  systems  hashing  computer-memory  build-packaging  composition-decomposition  coupling-cohesion  local-global  dbs  direct-indirect  symmetry  research  models  metal-to-virtual  linux  measurement  simulation  magnitude  realness  hypothesis-testing  techtariat 
5 weeks ago by nhaliday
Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom | PNAS
This article addresses the long-standing question of why students and faculty remain resistant to active learning. Comparing passive lectures with active learning using a randomized experimental approach and identical course materials, we find that students in the active classroom learn more, but they feel like they learn less. We show that this negative correlation is caused in part by the increased cognitive effort required during active learning.

https://news.ycombinator.com/item?id=21164005
study  org:nat  psychology  cog-psych  education  learning  studying  teaching  productivity  higher-ed  cost-benefit  aversion  🦉  growth  stamina  multi  hn  commentary  sentiment  thinking  neurons  wire-guided  emotion  subjective-objective  self-report  objective-measure 
5 weeks ago by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
1400-1900:
To reach rating 1900, you need the following skills:
- You know and can use the major algorithms: brute force, DP, DFS, BFS, Dijkstra, Binary Indexed Tree, nCr/nPr, mod inverse, bitmasks, binary search (a minimal BIT sketch follows this list)
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)
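As a concrete instance of the algorithms named above, here is a minimal Binary Indexed Tree (Fenwick tree) sketch (my own illustration, not code from the guide): point update and prefix-sum query, both O(log n).

```cpp
// Minimal Fenwick tree: point update and prefix-sum query in O(log n).
#include <vector>

struct BIT {
    int n;
    std::vector<long long> t;
    explicit BIT(int n) : n(n), t(n + 1, 0) {}
    void add(int i, long long v) {            // a[i] += v (1-indexed)
        for (; i <= n; i += i & -i) t[i] += v;
    }
    long long sum(int i) const {              // returns a[1] + ... + a[i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};
```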

If you are not good at fast coding and fast debugging, you should solve AtCoder problems. Statistically, many Japanese competitors are relatively good at fast coding while not so good at solving difficult problems. I think that's because of AtCoder.

I recommend solving problems C and D of the AtCoder Beginner Contest. On average, if you can solve problem C within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)

...

Interestingly, typical problems are concentrated in Div2-only rounds. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the ten listed above.

If you can handle typical problems but are not good at solving problems above R1500 on Codeforces, you should start TopCoder. This type of practice is effective for people who are good at Div2-only rounds but not at Div1+Div2 combined or separated rounds.

Sometimes, especially in Div1+Div2 rounds, problems require mathematical concepts or mathematical thinking. Since TopCoder has a lot of problems that use them (and are light on implementation!), you should solve TopCoder problems.

I recommend solving the Div1 Easy problems of the 100 most recent SRMs. But some problems are really difficult (e.g., even red-ranked coders could not solve them), so before attempting one you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ for such statistics.

1900-2200:
To reach rating 2200, you need the following skills:
- You know and can use the ten algorithms listed on p. 11, plus segment trees (including lazy propagation)
- You can solve problems very fast: for example, 5 min for R1100, 10 min for R1500, 15 min for R1800, 40 min for R2000
- You have decent skills in mathematical thinking and analyzing problems
- You have a strong mentality: you can keep thinking about a solution for more than an hour, and you don't give up even if you are below average in Div1 in the middle of a contest

This is only my way of practicing, but I did many virtual contests when my rating was 2000. Here, a virtual contest does not mean "Virtual Participation" on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rated 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/. On this website, you can assemble virtual contests from problems on many online judges (e.g., AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...).

If you cannot solve a problem within the virtual contest and cannot find the solution on your own afterward, you should read the editorial. Google it (e.g., for the editorial of Codeforces Round #556 (Div. 1), search "Codeforces Round #556 editorial").

There is one more important thing for gaining rating on Codeforces: to solve problems fast, you should equip yourself with a coding library (template code). For example, I think that having segment tree, lazy segment tree, modint, FFT, and geometry libraries, etc., is very effective (a minimal modint sketch follows).
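For illustration, here is a minimal modint of the kind recommended above, my own sketch rather than the guide's code; it assumes a prime modulus so that inverses come from Fermat's little theorem.

```cpp
// Arithmetic modulo a prime, as a drop-in value type for contest code.
#include <cstdint>

template <std::int64_t MOD>
struct Mint {
    std::int64_t v;
    Mint(std::int64_t x = 0) : v(((x % MOD) + MOD) % MOD) {}
    Mint operator+(Mint o) const { return Mint(v + o.v); }
    Mint operator-(Mint o) const { return Mint(v - o.v); }
    Mint operator*(Mint o) const { return Mint(v * o.v % MOD); }
    Mint pow(std::int64_t e) const {           // binary exponentiation
        Mint r = 1, b = *this;
        for (; e > 0; e >>= 1, b = b * b)
            if (e & 1) r = r * b;
        return r;
    }
    Mint inv() const { return pow(MOD - 2); }  // valid because MOD is prime
    Mint operator/(Mint o) const { return *this * o.inv(); }
};
using M = Mint<998244353>;  // a common contest modulus
```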

2200-2400:
Ratings 2200 and 2400 are actually very different ...

To reach rating 2400, you need the following skills:
- the skills stated in the previous section (rating 2200)
- the ability to solve difficult problems: ones solved by fewer than 100 people in Div1 contests

...

First, there are a lot of educational problems on AtCoder. I recommend solving problems E and F of the AtCoder Regular Contest (especially the 700-900 point problems), particularly ARC058-ARC090. Old AtCoder Regular Contests are balanced between "thinking" and "typical" problems; sadly, AtCoder Grand Contest and recent AtCoder Regular Contest problems are, I think, too biased toward thinking, so I don't recommend them if your goal is gaining rating on Codeforces. (Though if you want a rating above 2600, you should solve AtCoder Grand Contest problems.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because the start was early).

If you cannot solve a problem, I recommend giving up and reading the editorial on the following schedule:

Point value:        600     700     800     900     1000+
CF rating:          R2000   R2200   R2400   R2600   R2800
Time to editorial:  40 min  50 min  60 min  70 min  80 min

If you solve AtCoder's educational problems, your competitive programming skills will increase. But there is one more problem: without practical skills, your rating won't increase. So you should do 50+ virtual participations (especially Div1) on Codeforces. In virtual participation you can learn how to compete as a purple/orange-ranked coder (e.g., strategy) and how to apply the skills you learned on AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems except the too-difficult ones (e.g., solved by fewer than 30 people in-contest) after the virtual contest. I also recommend writing reflections on strategy, lessons, and improvements in a notebook after reading the editorials.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g., R2800 on Codeforces) for a couple of hours. If you cannot reach the solution after thinking for a couple of hours, read the editorial; you can learn a lot from it. Solving high-level problems may give you the chance to gain over 100 rating points in a single contest, and can also help you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
The 'science' of training in competitive programming - Codeforces
"Hard problems" is subjective. A good rule of thumb for learning problem solving (at least according to me) is that your problem selection is good if you fail to solve roughly 50% of problems you attempt. Anything in [20%,80%] should still be fine, although many people have problems staying motivated if they fail too often. Read solutions for problems you fail to solve.

(There is some actual math behind this. Hopefully one day I'll have the time to write it down.)
- misof in a comment
--
I don't believe in things like "either you solve it in 30 mins to a few hours, or you never solve it at all". There are some algorithms that look like magic at first glance, like polynomial hashing, interval trees, or FFT (which is magic even at tenth glance :P), but there are not many of them, and the vast majority of algorithms can be invented on one's own, for example DP. In high school I used to solve many problems from the IMO and PMO, and when I didn't solve a problem I tried it again later. I have solved some problems on the third or so attempt. For beginners, I think this still holds true, but it's better to read solutions after some time, because there are so many other things to learn; better not to get stuck on one particular problem when there are hundreds of other important concepts to be learnt.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  marginal  wire-guided  stylized-facts  hmm  advice  tactics  time  time-use  cost-benefit  growth  studying  🖥  👳 
august 2019 by nhaliday
Panel: Systems Programming in 2014 and Beyond | Lang.NEXT 2014 | Channel 9
- Bjarne Stroustrup, Niko Matsakis, Andrei Alexandrescu, Rob Pike
- 2014 so pretty outdated but rare to find a discussion with people like this together
- pretty sure Jonathan Blow asked a couple questions
- Rob Pike compliments Rust at one point, and kinda softly rags on dynamic typing ("unit testing is what they have instead of static types").
video  presentation  debate  programming  pls  c(pp)  systems  os  rust  d-lang  golang  computer-memory  legacy  devtools  formal-methods  concurrency  compilers  syntax  parsimony  google  intricacy  thinking  cost-benefit  degrees-of-freedom  facebook  performance  people  rsc  cracker-prog  critique  types  checking  api  flux-stasis  engineering  time  wire-guided  worse-is-better/the-right-thing  static-dynamic  latency-throughput  techtariat 
july 2019 by nhaliday
Should I go for TensorFlow or PyTorch?
Honestly, most experts that I know love PyTorch and detest TensorFlow. Karpathy and Justin from Stanford, for example. You can see Karpathy's thoughts, and I've asked Justin personally; the answer was sharp: PYTORCH!!! TF has lots of PR, but its API and graph model are horrible and will waste lots of your research time.

--

...

Updated Mar 12
Update after 2019 TF summit:

TL/DR: previously I was in the PyTorch camp, but with TF 2.0 it's clear that Google is really going to try to reach parity with, or be better than, PyTorch in all the areas where people voiced concerns (ease of use, debugging, dynamic graphs). They seem to be allocating more resources to development than Facebook, so the longer term currently looks promising for Google. Prior to TF 2.0 I thought the PyTorch team had more momentum. One area where FB/PyTorch is still stronger: Google is a bit more closed and doesn't seem to release reproducible cutting-edge models such as AlphaGo, whereas FAIR released OpenGo, for instance. Generally you will end up running into models that are implemented in only one framework or the other, so chances are you will end up learning both.
q-n-a  qra  comparison  software  recommendations  cost-benefit  tradeoffs  python  libraries  machine-learning  deep-learning  data-science  sci-comp  tools  google  facebook  tech  competition  best-practices  trends  debugging  expert-experience  ecosystem  theory-practice  pragmatic  wire-guided  static-dynamic  state  academia  frameworks  open-closed 
may 2019 by nhaliday
c++ - Debugging template instantiations - Stack Overflow
Yes, there is a template metaprogramming debugger. Templight

https://github.com/mikael-s-persson/templight
--
Seems to be dead now, though :( [ed.: Partially true. They've merged pull requests recently tho.]
--
Metashell is still in active development though: github.com/metashell/metashell
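To make the use case concrete, here is a toy template of mine (not from the thread) whose instantiation chain, fib<10> pulling in fib<9>, fib<8>, and so on, is exactly what Templight or Metashell let you step through like a debugger:

```cpp
// Recursive compile-time Fibonacci; each fib<N> instantiation is a "step"
// a template-metaprogramming debugger can break on.
template <int N>
struct fib {
    static constexpr long long value = fib<N - 1>::value + fib<N - 2>::value;
};
template <> struct fib<0> { static constexpr long long value = 0; };
template <> struct fib<1> { static constexpr long long value = 1; };

static_assert(fib<10>::value == 55, "forces the whole instantiation chain");
```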
q-n-a  stackex  nitty-gritty  pls  types  c(pp)  debugging  devtools  tools  programming  howto  advice  checklists  multi  repo  wire-guided  static-dynamic  compilers  performance  measurement  time  latency-throughput 
may 2019 by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.

...

And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), each of which outputs a 0 only when both inputs are 1. If the correct output is 0, both inputs should be 1. The input that isn't is in error, an error that is, itself, the output of a NAND gate where at least one input is 0 when it should be 1. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.
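A sketch of that mechanical technique in code, under assumptions of mine (the netlist representation is hypothetical; the post describes the method only in prose):

```cpp
// Recursive fault localization over a NAND netlist. Wires 0..nInputs-1 are
// primary inputs; gate k drives wire nInputs+k. NAND outputs 0 iff both inputs are 1.
#include <vector>

struct Gate { int a, b; };  // indices of the gate's two input wires

// Given a wire w that should carry 'want' but doesn't (per observed 'values'),
// walk backward to the primary input or gate where things first went wrong.
int traceFault(int w, int want, int nInputs,
               const std::vector<Gate>& gates, const std::vector<int>& values) {
    if (w < nInputs) return w;  // a primary input is wrong: found the source
    const Gate& g = gates[w - nInputs];
    if (want == 0) {
        // Output should be 0, so both inputs should be 1; recurse into one that isn't.
        if (values[g.a] == 0) return traceFault(g.a, 1, nInputs, gates, values);
        if (values[g.b] == 0) return traceFault(g.b, 1, nInputs, gates, values);
    } else {
        // Output should be 1, so at least one input should be 0; if neither is,
        // follow the first input (a simplification of the real bookkeeping).
        if (values[g.a] == 1 && values[g.b] == 1)
            return traceFault(g.a, 0, nInputs, gates, values);
    }
    return w;  // inputs are as they should be, yet the output is wrong:
               // this gate itself (or our expectation of it) is at fault
}
```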

How To Debug Any Program: https://www.blinddata.com/blog/how-to-debug-any-program-9
May 8th 2019 by Saketh Are

Start by Questioning Everything

...

When a program is behaving unexpectedly, our attention tends to be drawn first to the most complex portions of the code. However, mistakes can come in all forms. I've personally been guilty of rushing to debug sophisticated portions of my code when the real bug was that I forgot to read in the input file. In the following section, we'll discuss how to reliably focus our attention on the portions of the program that need correction.

Then Question as Little as Possible

Suppose that we have a program and some input on which its behavior doesn’t match our expectations. The goal of debugging is to narrow our focus to as small a section of the program as possible. Once our area of interest is small enough, the value of the incorrect output that is being produced will typically tell us exactly what the bug is.

In order to catch the point at which our program diverges from expected behavior, we must inspect the intermediate state of the program. Suppose that we select some point during execution of the program and print out all values in memory. We can inspect the results manually and decide whether they match our expectations. If they don't, we know for a fact that we can focus on the first half of the program. It either contains a bug, or our expectations of what it should produce were misguided. If the intermediate state does match our expectations, we can focus on the second half of the program. It either contains a bug, or our understanding of what input it expects was incorrect.
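That divide-and-conquer step is literally a binary search over execution; a hedged sketch (the checkpoint/predicate scaffolding is mine, not the article's):

```cpp
// Binary search for the first execution checkpoint whose state violates
// expectations. Checkpoint 0 (program start) is assumed good; the final
// checkpoint is assumed bad.
#include <functional>

int firstBadCheckpoint(int nCheckpoints,
                       const std::function<bool(int)>& stateLooksRight) {
    int lo = 0, hi = nCheckpoints;  // invariant: lo looks right, hi looks wrong
    while (hi - lo > 1) {
        int mid = lo + (hi - lo) / 2;
        if (stateLooksRight(mid)) lo = mid;  // divergence is after mid
        else                      hi = mid;  // divergence is at or before mid
    }
    return hi;  // first checkpoint where state diverges: focus attention here
}
```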

Question Things Efficiently

For practical purposes, inspecting intermediate state usually doesn't involve a complete memory dump. We'll typically print a small number of variables and check whether they have the properties we expect of them. Verifying the behavior of a section of code involves:

1. Before it runs, inspecting all values in memory that may influence its behavior.
2. Reasoning about the expected behavior of the code.
3. After it runs, inspecting all values in memory that may be modified by the code.

Reasoning about expected behavior is typically the easiest step to perform even in the case of highly complex programs. Practically speaking, it's time-consuming and mentally strenuous to write debug output into your program and to read and decipher the resulting values. It is therefore advantageous to structure your code into functions and sections that pass a relatively small amount of information between themselves, minimizing the number of values you need to inspect.

...

Finding the Right Question to Ask

We’ve assumed so far that we have available a test case on which our program behaves unexpectedly. Sometimes, getting to that point can be half the battle. There are a few different approaches to finding a test case on which our program fails. It is reasonable to attempt them in the following order:

1. Verify correctness on the sample inputs.
2. Test additional small cases generated by hand.
3. Adversarially construct corner cases by hand.
4. Re-read the problem to verify understanding of input constraints.
5. Design large cases by hand and write a program to construct them.
6. Write a generator to construct large random cases and a brute force oracle to verify outputs (a minimal stress-test loop is sketched below).
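A minimal stress-testing loop for step 6 (my own sketch; gen, brute, and fast are hypothetical stand-ins for your case generator, brute-force oracle, and real solution):

```cpp
// Run generator -> both solutions -> diff, until a mismatching case is found.
#include <cstdio>
#include <cstdlib>

int main() {
    for (int seed = 1; ; ++seed) {
        char cmd[64];
        std::snprintf(cmd, sizeof cmd, "./gen %d > case.txt", seed);
        std::system(cmd);                              // make a small random case
        std::system("./fast  < case.txt > fast.out");  // candidate solution
        std::system("./brute < case.txt > brute.out"); // slow but trusted oracle
        // diff exits nonzero when the outputs differ: a failing case to debug
        if (std::system("diff -q fast.out brute.out > /dev/null") != 0) {
            std::printf("mismatch on seed %d (see case.txt)\n", seed);
            return 1;
        }
    }
}
```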
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness  multi  oly  oly-programming  metabuch  neurons  problem-solving  wire-guided  marginal  strategy  tactics  methodology  simplification-normalization 
may 2019 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Commentary: Predictions and the brain: how musical sounds become rewarding
https://twitter.com/AOEUPL_PHE/status/1004807377076604928
https://archive.is/FgNHG
did i just learn something big?

Prerecorded music has ABSOLUTELY NO SURVIVAL reward. Zero. It does not help with procreation (well, unless you're the one making the music, then you get endless sex) and it does not help with individual survival.

As such, one must seriously self test (n=1) whether prerecorded music actually holds you back.

If you're reading this and you try no music for 2 weeks and fail, hit me up. I have some mind blowing stuff to show you in how you can control others with music.
study  psychology  cog-psych  yvain  ssc  models  speculation  music  art  aesthetics  evolution  evopsych  accuracy  meta:prediction  neuro  neuro-nitgrit  neurons  error  roots  intricacy  hmm  wire-guided  machiavelli  dark-arts  predictive-processing  reinforcement  multi  science-anxiety 
june 2018 by nhaliday
Theories of humor - Wikipedia
There are many theories of humor which attempt to explain what humor is, what social functions it serves, and what would be considered humorous. Among the prevailing types of theories that attempt to account for the existence of humor, there are psychological theories, the vast majority of which consider humor to be very healthy behavior; there are spiritual theories, which consider humor to be an inexplicable mystery, very much like a mystical experience.[1] Although various classical theories of humor and laughter may be found, in contemporary academic literature, three theories of humor appear repeatedly: relief theory, superiority theory, and incongruity theory.[2] Among current humor researchers, there is no consensus about which of these three theories of humor is most viable.[2] Proponents of each one originally claimed their theory to be capable of explaining all cases of humor.[2][3] However, they now acknowledge that although each theory generally covers its own area of focus, many instances of humor can be explained by more than one theory.[2][3][4][5] Incongruity and superiority theories, for instance, seem to describe complementary mechanisms which together create humor.[6]

...

Relief theory
Relief theory maintains that laughter is a homeostatic mechanism by which psychological tension is reduced.[2][3][7] Humor may thus for example serve to facilitate relief of the tension caused by one's fears.[8] Laughter and mirth, according to relief theory, result from this release of nervous energy.[2] Humor, according to relief theory, is used mainly to overcome sociocultural inhibitions and reveal suppressed desires. It is believed that this is the reason we laugh whilst being tickled, due to a buildup of tension as the tickler "strikes".[2][9] According to Herbert Spencer, laughter is an "economical phenomenon" whose function is to release "psychic energy" that had been wrongly mobilized by incorrect or false expectations. The latter point of view was supported also by Sigmund Freud.

Superiority theory
The superiority theory of humor traces back to Plato and Aristotle, and Thomas Hobbes' Leviathan. The general idea is that a person laughs about misfortunes of others (so called schadenfreude), because these misfortunes assert the person's superiority on the background of shortcomings of others.[10] Socrates was reported by Plato as saying that the ridiculous was characterized by a display of self-ignorance.[11] For Aristotle, we laugh at inferior or ugly individuals, because we feel a joy at feeling superior to them.[12]

Incongruous juxtaposition theory
The incongruity theory states that humor is perceived at the moment of realization of incongruity between a concept involved in a certain situation and the real objects thought to be in some relation to the concept.[10]

Since the main point of the theory is not the incongruity per se, but its realization and resolution (i.e., putting the objects in question into the real relation), it is often called the incongruity-resolution theory.[10]

...

Detection of mistaken reasoning
In 2011, three researchers, Hurley, Dennett and Adams, published a book that reviews previous theories of humor and many specific jokes. They propose the theory that humor evolved because it strengthens the ability of the brain to find mistakes in active belief structures, that is, to detect mistaken reasoning.[46] This is somewhat consistent with the sexual selection theory, because, as stated above, humor would be a reliable indicator of an important survival trait: the ability to detect mistaken reasoning. However, the three researchers argue that humor is fundamentally important because it is the very mechanism that allows the human brain to excel at practical problem solving. Thus, according to them, humor did have survival value even for early humans, because it enhanced the neural circuitry needed to survive.

Misattribution theory
Misattribution is one theory of humor that describes an audience's inability to identify exactly why they find a joke to be funny. The formal theory is attributed to Zillmann & Bryant (1980) in their article, "Misattribution Theory of Tendentious Humor", published in Journal of Experimental Social Psychology. They derived the critical concepts of the theory from Sigmund Freud's Wit and Its Relation to the Unconscious (note: from a Freudian perspective, wit is separate from humor), originally published in 1905.

Benign violation theory
The benign violation theory (BVT) is developed by researchers A. Peter McGraw and Caleb Warren.[47] The BVT integrates seemingly disparate theories of humor to predict that humor occurs when three conditions are satisfied: 1) something threatens one's sense of how the world "ought to be", 2) the threatening situation seems benign, and 3) a person sees both interpretations at the same time.

From an evolutionary perspective, humorous violations likely originated as apparent physical threats, like those present in play fighting and tickling. As humans evolved, the situations that elicit humor likely expanded from physical threats to other violations, including violations of personal dignity (e.g., slapstick, teasing), linguistic norms (e.g., puns, malapropisms), social norms (e.g., strange behaviors, risqué jokes), and even moral norms (e.g., disrespectful behaviors). The BVT suggests that anything that threatens one's sense of how the world "ought to be" will be humorous, so long as the threatening situation also seems benign.

...

Sense of humor, sense of seriousness
One must have a sense of humor and a sense of seriousness to distinguish what is supposed to be taken literally or not. An even more keen sense is needed when humor is used to make a serious point.[48][49] Psychologists have studied how humor is intended to be taken as having seriousness, as when court jesters used humor to convey serious information. Conversely, when humor is not intended to be taken seriously, bad taste in humor may cross a line after which it is taken seriously, though not intended.[50]

Philosophy of humor bleg: http://marginalrevolution.com/marginalrevolution/2017/03/philosophy-humor-bleg.html

Inside Jokes: https://mitpress.mit.edu/books/inside-jokes
humor as reward for discovering inconsistency in inferential chain

https://twitter.com/search?q=comedy%20OR%20humor%20OR%20humour%20from%3Asarahdoingthing&src=typd
https://twitter.com/sarahdoingthing/status/500000435529195520

https://twitter.com/sarahdoingthing/status/568346955811663872
https://twitter.com/sarahdoingthing/status/600792582453465088
https://twitter.com/sarahdoingthing/status/603215362033778688
https://twitter.com/sarahdoingthing/status/605051508472713216
https://twitter.com/sarahdoingthing/status/606197597699604481
https://twitter.com/sarahdoingthing/status/753514548787683328

https://en.wikipedia.org/wiki/Humour
People of all ages and cultures respond to humour. Most people are able to experience humour—be amused, smile or laugh at something funny—and thus are considered to have a sense of humour. The hypothetical person lacking a sense of humour would likely find the behaviour inducing it to be inexplicable, strange, or even irrational.

...

Ancient Greece
Western humour theory begins with Plato, who attributed to Socrates (as a semi-historical dialogue character) in the Philebus (p. 49b) the view that the essence of the ridiculous is an ignorance in the weak, who are thus unable to retaliate when ridiculed. Later, in Greek philosophy, Aristotle, in the Poetics (1449a, pp. 34–35), suggested that an ugliness that does not disgust is fundamental to humour.

...

China
Confucianist Neo-Confucian orthodoxy, with its emphasis on ritual and propriety, has traditionally looked down upon humour as subversive or unseemly. The Confucian "Analects" itself, however, depicts the Master as fond of humorous self-deprecation, once comparing his wanderings to the existence of a homeless dog.[10] Early Daoist philosophical texts such as "Zhuangzi" pointedly make fun of Confucian seriousness and make Confucius himself a slow-witted figure of fun.[11] Joke books containing a mix of wordplay, puns, situational humor, and play with taboo subjects like sex and scatology, remained popular over the centuries. Local performing arts, storytelling, vernacular fiction, and poetry offer a wide variety of humorous styles and sensibilities.

...

Physical attractiveness
90% of men and 81% of women, all college students, report having a sense of humour is a crucial characteristic looked for in a romantic partner.[21] Humour and honesty were ranked as the two most important attributes in a significant other.[22] It has since been recorded that humour becomes more evident and significantly more important as the level of commitment in a romantic relationship increases.[23] Recent research suggests expressions of humour in relation to physical attractiveness are two major factors in the desire for future interaction.[19] Women regard physical attractiveness less highly compared to men when it came to dating, a serious relationship, and sexual intercourse.[19] However, women rate humorous men more desirable than nonhumorous individuals for a serious relationship or marriage, but only when these men were physically attractive.[19]

Furthermore, humorous people are perceived by others to be more cheerful but less intellectual than nonhumorous people. Self-deprecating humour has been found to increase the desirability of physically attractive others for committed relationships.[19] The results of a study conducted by McMaster University suggest humour can positively affect one’s desirability for a specific relationship partner, but this effect is only most likely to occur when men use humour and are evaluated by women.[24] No evidence was found to suggest men prefer women with a sense of humour as partners, nor women preferring other women with a sense of humour as potential partners.[24] When women were given the forced-choice design in the study, they chose funny men as potential … [more]
article  list  wiki  reference  psychology  cog-psych  social-psych  emotion  things  phalanges  concept  neurons  instinct  👽  comedy  models  theory-of-mind  explanans  roots  evopsych  signaling  humanity  logic  sex  sexuality  cost-benefit  iq  intelligence  contradiction  homo-hetero  egalitarianism-hierarchy  humility  reinforcement  EEA  eden  play  telos-atelos  impetus  theos  mystic  philosophy  big-peeps  the-classics  literature  inequality  illusion  within-without  dennett  dignity  social-norms  paradox  parallax  analytical-holistic  multi  econotariat  marginal-rev  discussion  speculation  books  impro  carcinisation  postrat  cool  twitter  social  quotes  commentary  search  farmers-and-foragers  🦀  evolution  sapiens  metameta  insight  novelty  wire-guided  realness  chart  beauty  nietzschean  class  pop-diff  culture  alien-character  confucian  order-disorder  sociality  🐝  integrity  properties  gender  gender-diff  china  asia  sinosphere  long-short-run  trust  religion  ideology  elegance  psycho-atoms 
april 2018 by nhaliday
What Peter Thiel thinks about AI risk - Less Wrong
TL;DR: he thinks it's an issue but also feels AGI is very distant, and hence is less worried about it than Musk.

I recommend the rest of the lecture as well; it's a good summary of "Zero to One", with a good Q&A afterwards.

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.

other stuff:
nice interview question: "thing you know is true that not everyone agrees on?"
"learning from failure overrated"
cleantech a huge market, hard to compete
software makes for easy monopolies (zero marginal costs, network effects, etc.)
for most of history inventors did not benefit much (continuous competition)
ethical behavior is a luxury of monopoly
ratty  lesswrong  commentary  ai  ai-control  risk  futurism  technology  speedometer  audio  presentation  musk  thiel  barons  frontier  miri-cfar  charity  people  track-record  venture  startups  entrepreneurialism  contrarianism  competition  market-power  business  google  truth  management  leadership  socs-and-mops  dark-arts  skunkworks  hard-tech  energy-resources  wire-guided  learning  software  sv  tech  network-structure  scale  marginal  cost-benefit  innovation  industrial-revolution  economics  growth-econ  capitalism  comparison  nationalism-globalism  china  asia  trade  stagnation  things  dimensionality  exploratory  world  developing-world  thinking  definite-planning  optimism  pessimism  intricacy  politics  war  career  planning  supply-demand  labor  science  engineering  dirty-hands  biophysical-econ  migration  human-capital  policy  canada  anglo  winner-take-all  polarization  amazon  business-models  allodium  civilization  the-classics  microsoft  analogy  gibbon  conquest-empire  realness  cynicism-idealism  org:edu  open-closed  ethics  incentives  m 
february 2018 by nhaliday
Are Sunk Costs Fallacies? - Gwern.net
But to what extent is the sunk cost fallacy a real fallacy?
Below, I argue the following:
1. sunk costs are probably issues in big organizations
- but maybe not ones that can be helped
2. sunk costs are not issues in animals
3. sunk costs appear to exist in children & adults
- but many apparent instances of the fallacy are better explained as part of a learning strategy
- and there’s little evidence sunk cost-like behavior leads to actual problems in individuals
4. much of what we call sunk cost looks like simple carelessness & thoughtlessness
ratty  gwern  analysis  meta-analysis  faq  biases  rationality  decision-making  decision-theory  economics  behavioral-econ  realness  cost-benefit  learning  wire-guided  marginal  age-generation  aging  industrial-org  organizing  coordination  nature  retention  knowledge  iq  education  tainter  management  government  competition  equilibrium  models  roots  chart 
december 2017 by nhaliday
Charity Cost-Effectiveness in an Uncertain World – Foundational Research Institute
Evaluating the effectiveness of our actions, or even just whether they're positive or negative by our values, is very difficult. One approach is to focus on clear, quantifiable metrics and assume that the larger, indirect considerations just kind of work out. Another way to deal with uncertainty is to focus on actions that seem likely to have generally positive effects across many scenarios, and often this approach amounts to meta-level activities like encouraging positive-sum institutions, philosophical inquiry, and effective altruism in general. When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.
ratty  effective-altruism  subculture  article  decision-making  miri-cfar  charity  uncertainty  moments  reflection  regularizer  wire-guided  robust  outcome-risk  flexibility  🤖  spock  info-dynamics  efficiency  arbitrage 
august 2017 by nhaliday
GALILEO'S STUDIES OF PROJECTILE MOTION
During the Renaissance, the focus, especially in the arts, was on representing as accurately as possible the real world whether on a 2 dimensional surface or a solid such as marble or granite. This required two things. The first was new methods for drawing or painting, e.g., perspective. The second, relevant to this topic, was careful observation.

With the spread of cannon in warfare, the study of projectile motion had taken on greater importance, and now, with more careful observation and more accurate representation, came the realization that projectiles did not move the way Aristotle and his followers had said they did: the path of a projectile did not consist of two consecutive straight line components but was instead a smooth curve. [1]

Now someone needed to come up with a method to determine if there was a special curve a projectile followed. But measuring the path of a projectile was not easy.

Using an inclined plane, Galileo had performed experiments on uniformly accelerated motion, and he now used the same apparatus to study projectile motion. He placed an inclined plane on a table and provided it with a curved piece at the bottom which deflected an inked bronze ball into a horizontal direction. The ball thus accelerated rolled over the table-top with uniform motion and then fell off the edge of the table. Where it hit the floor, it left a small mark. The mark allowed the horizontal and vertical distances traveled by the ball to be measured. [2]

By varying the ball's horizontal velocity and vertical drop, Galileo was able to determine that the path of a projectile is parabolic.
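In modern notation the result is immediate (a standard reconstruction, not Galileo's own formalism): uniform horizontal motion composed with uniformly accelerated fall gives

$$ x = vt, \qquad y = \tfrac{1}{2} g t^2 \quad\Longrightarrow\quad y = \frac{g}{2v^2}\,x^2, $$

a parabola in x.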

https://www.scientificamerican.com/author/stillman-drake/

Galileo's Discovery of the Parabolic Trajectory: http://www.jstor.org/stable/24949756

Galileo's Experimental Confirmation of Horizontal Inertia: Unpublished Manuscripts (Galileo Gleanings XXII): https://sci-hub.tw/https://www.jstor.org/stable/229718
- Stillman Drake

MORE THAN A DECADE HAS ELAPSED since Thomas Settle published a classic paper in which Galileo's well-known statements about his experiments on inclined planes were completely vindicated.' Settle's paper replied to an earlier attempt by Alexandre Koyre to show that Galileo could not have obtained the results he claimed in his Two New Sciences by actual observations using the equipment there described. The practical ineffectiveness of Settle's painstaking repetition of the experiments in altering the opinion of historians of science is only too evident. Koyre's paper was reprinted years later in book form without so much as a note by the editors concerning Settle's refutation of its thesis.2 And the general literature continues to belittle the role of experiment in Galileo's physics.

More recently James MacLachlan has repeated and confirmed a different experiment reported by Galileo-one which has always seemed highly exaggerated and which was also rejected by Koyre with withering sarcasm.3 In this case, however, it was accuracy of observation rather than precision of experimental data that was in question. Until now, nothing has been produced to demonstrate Galileo's skill in the design and the accurate execution of physical experiment in the modern sense.

Part of a page of Galileo's unpublished manuscript notes, written late in 1608, corroborating his inertial assumption and leading directly to his discovery of the parabolic trajectory. (Folio 116v, Vol. 72, MSS Galileiani; courtesy of the Biblioteca Nazionale di Firenze.)

...

(The same skeptical historians, however, believe that to show that Galileo could have used the medieval mean-speed theorem suffices to prove that he did use it, though it is found nowhere in his published or unpublished writings.)

...

Now, it happens that among Galileo's manuscript notes on motion there are many pages that were not published by Favaro, since they contained only calculations or diagrams without attendant propositions or explanations. Some pages that were published had first undergone considerable editing, making it difficult if not impossible to discern their full significance from their printed form. This unpublished material includes at least one group of notes which cannot satisfactorily be accounted for except as representing a series of experiments designed to test a fundamental assumption, which led to a new, important discovery. In these documents precise empirical data are given numerically, comparisons are made with calculated values derived from theory, a source of discrepancy from still another expected result is noted, a new experiment is designed to eliminate this, and further empirical data are recorded. The last-named data, although proving to be beyond Galileo's powers of mathematical analysis at the time, when subjected to modern analysis turn out to be remarkably precise. If this does not represent the experimental process in its fully modern sense, it is hard to imagine what standards historians require to be met.

The discovery of these notes confirms the opinion of earlier historians. They read only Galileo's published works, but did so without a preconceived notion of continuity in the history of ideas. The opinion of our more sophisticated colleagues has its sole support in philosophical interpretations that fit with preconceived views of orderly long-term scientific development. To find manuscript evidence that Galileo was at home in the physics laboratory hardly surprises me. I should find it much more astonishing if, by reasoning alone, working only from fourteenth-century theories and conclusions, he had continued along lines so different from those followed by profound philosophers in earlier centuries. It is to be hoped that, warned by these examples, historians will begin to restore the old cautionary clauses in analogous instances in which scholarly opinions are revised without new evidence, simply to fit historical theories.

In what follows, the newly discovered documents are presented in the context of a hypothetical reconstruction of Galileo's thought.

...

As early as 1590, if we are correct in ascribing Galileo's juvenile De motu to that date, it was his belief that an ideal body resting on an ideal horizontal plane could be set in motion by a force smaller than any previously assigned force, however small. By "horizontal plane" he meant a surface concentric with the earth but which for reasonable distances would be indistinguishable from a level plane. Galileo noted at the time that experiment did not confirm this belief that the body could be set in motion by a vanishingly small force, and he attributed the failure to friction, pressure, the imperfection of material surfaces and spheres, and the departure of level planes from concentricity with the earth.5

It followed from this belief that under ideal conditions the motion so induced would also be perpetual and uniform. Galileo did not mention these consequences until much later, and it is impossible to say just when he perceived them. They are, however, so evident that it is safe to assume that he saw them almost from the start. They constitute a trivial case of the proposition he seems to have been teaching before 1607-that a mover is required to start motion, but that absence of resistance is then sufficient to account for its continuation.6

In mid-1604, following some investigations of motions along circular arcs and motions of pendulums, Galileo hit upon the law that in free fall the times elapsed from rest are as the smaller distance is to the mean proportional between two distances fallen.7 This gave him the times-squared law as well as the rule of odd numbers for successive distances and speeds in free fall. During the next few years he worked out a large number of theorems relating to motion along inclined planes, later published in the Two New Sciences. He also arrived at the rule that the speed terminating free fall from rest was double the speed of the fall itself. These theorems survive in manuscript notes of the period 1604-1609. (Work during these years can be identified with virtual certainty by the watermarks in the paper used, as I have explained elsewhere.8)
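The quoted mean-proportional rule is the times-squared law in ratio form (again a modern reconstruction):

$$ s = \tfrac{1}{2} g t^2 \quad\Longrightarrow\quad \frac{t_1}{t_2} = \sqrt{\frac{s_1}{s_2}} = \frac{s_1}{\sqrt{s_1 s_2}}, $$

i.e., the times from rest stand as the first distance to the mean proportional of the two distances fallen.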

In the autumn of 1608, after a summer at Florence, Galileo seems to have interested himself in the question whether the actual slowing of a body moving horizontally followed any particular rule. On folio 117i of the manuscripts just mentioned, the numbers 196, 155, 121, 100 are noted along the horizontal line near the middle of the page (see Fig. 1). I believe that this was the first entry on this leaf, for reasons that will appear later, and that Galileo placed his grooved plane in the level position and recorded distances traversed in equal times along it. Using a metronome, and rolling a light wooden ball about 4 3/4 inches in diameter along a plane with a groove 1 3/4 inches wide, I obtained similar relations over a distance of 6 feet. The figures obtained vary greatly for balls of different materials and weights and for greatly different initial speeds.9 But it suffices for my present purposes that Galileo could have obtained the figures noted by observing the actual deceleration of a ball along a level plane. It should be noted that the watermark on this leaf is like that on folio 116, to which we shall come presently, and it will be seen later that the two sheets are closely connected in time in other ways as well.

The relatively rapid deceleration is obviously related to the contact of ball and groove. Were the ball to roll right off the end of the plane, all resistance to horizontal motion would be virtually removed. If, then, there were any way to have a given ball leave the plane at different speeds of which the ratios were known, Galileo's old idea that horizontal motion would continue uniformly in the absence of resistance could be put to test. His law of free fall made this possible. The ratios of speeds could be controlled by allowing the ball to fall vertically through known heights, at the ends of which it would be deflected horizontally. Falls through given heights … [more]
nibble  org:junk  org:edu  physics  mechanics  gravity  giants  the-trenches  discovery  history  early-modern  europe  mediterranean  the-great-west-whale  frontier  science  empirical  experiment  arms  technology  lived-experience  time  measurement  dirty-hands  iron-age  the-classics  medieval  sequential  wire-guided  error  wiki  reference  people  quantitative-qualitative  multi  pdf  piracy  study  essay  letters  discrete  news  org:mag  org:sci  popsci 
august 2017 by nhaliday
All models are wrong - Wikipedia
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
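The "wrong but useful" point is easy to make numerical (standard textbook constants, nothing from Box's paper): the per-mole law PV = RT predicts pressures for ordinary gases to within a fraction of a percent at everyday conditions, and only degrades near condensation or at high pressure.

```python
# Box's example in the per-mole form PV = RT (here with explicit n)
R = 8.314                      # J/(mol*K), gas constant
n, T, V = 1.0, 300.0, 0.025    # 1 mol, 300 K, 25 litres

P = n * R * T / V              # ideal-gas prediction, in pascals
print(P)                       # ~99768 Pa, roughly one atmosphere
# For a real gas like N2 at these conditions the error is well under a
# percent; near condensation it grows. The model is false but useful.
```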
thinking  metabuch  metameta  map-territory  models  accuracy  wire-guided  truth  philosophy  stats  data-science  methodology  lens  wiki  reference  complex-systems  occam  parsimony  science  nibble  hi-order-bits  info-dynamics  the-trenches  meta:science  physics  fluid  thermo  stat-mech  applicability-prereqs  theory-practice  elegance  simplification-normalization 
august 2017 by nhaliday
Whole Health Source: Palatability, Satiety and Calorie Intake
The more palatable the food, the less filling per calorie, and the relationship was quite strong for a study of this nature. This is consistent with the evidence that highly palatable foods shut down the mechanisms in the brain that constrain food intake. Croissants had the lowest satiety index (SI) of the foods tested (47), while potatoes had the highest (323). Overall, baked goods and candy had the lowest SI. They didn't test sweet potatoes, but I suspect they would have been similar to potatoes. Other foods with a high SI include meat/fish, whole grain foods, fruit and porridge.
taubes-guyenet  org:health  fitsci  health  embodied  food  diet  nutrition  metabolic  constraint-satisfaction  wire-guided  correlation  emotion 
july 2017 by nhaliday
How accurate are population forecasts?
2 The Accuracy of Past Projections: https://www.nap.edu/read/9828/chapter/4
good ebook:
Beyond Six Billion: Forecasting the World's Population (2000)
https://www.nap.edu/read/9828/chapter/2
Appendix A: Computer Software Packages for Projecting Population
https://www.nap.edu/read/9828/chapter/12
PDE Population Projections looks most relevant for my interests but it's also *ancient*
https://applieddemogtoolbox.github.io/Toolbox/
This Applied Demography Toolbox is a collection of applied demography computer programs, scripts, spreadsheets, databases and texts.
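For flavor: the core of most such projection packages is the same cohort-component bookkeeping, which a Leslie matrix captures in a few lines (a minimal sketch; three age groups and all rates invented):

```python
import numpy as np

fertility = [0.0, 1.2, 0.3]   # births per person in each age group
survival = [0.95, 0.90]       # probability of surviving to next group

L = np.zeros((3, 3))
L[0, :] = fertility           # first row: births into youngest group
L[1, 0], L[2, 1] = survival   # sub-diagonal: aging with survival

pop = np.array([100.0, 80.0, 60.0])
for step in range(3):         # project three periods ahead
    pop = L @ pop
    print(step + 1, pop.round(1))
```

Real packages layer migration, mortality improvement, and fertility scenarios on top of this recursion, which is where the forecast error discussed above comes from.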

How Accurate Are the United Nations World Population Projections?: http://pages.stern.nyu.edu/~dbackus/BCH/demography/Keilman_JDR_98.pdf

cf. Razib on this: https://pinboard.in/u:nhaliday/b:d63e6df859e8
news  org:lite  prediction  meta:prediction  tetlock  demographics  population  demographic-transition  fertility  islam  world  developing-world  africa  europe  multi  track-record  accuracy  org:ngo  pdf  study  sociology  measurement  volo-avolo  methodology  estimate  data-science  error  wire-guided  priors-posteriors  books  guide  howto  software  tools  recommendations  libraries  gnxp  scitariat 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it simply has to do with the fact that academia is a peer-monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern-day institutions or development.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. Instead I find myself much more persuaded by simple natural experiments that use OLS, diff-in-diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?∗: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
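The placebo-law point is easy to reproduce in miniature (my simplification: synthetic AR(1) state panels instead of CPS wages, and a rough degrees-of-freedom correction; exact rejection rates vary with the serial correlation and the seed):

```python
import numpy as np

rng = np.random.default_rng(0)
S, T, rho, reps = 50, 20, 0.8, 500   # states, years, serial correlation
rejections = 0

def demean(x):
    """Two-way (state and year) within transformation."""
    return x - x.mean(1, keepdims=True) - x.mean(0, keepdims=True) + x.mean()

for _ in range(reps):
    # within-state AR(1) errors with stationary variance 1; no true effect
    e = np.zeros((S, T))
    e[:, 0] = rng.normal(size=S)
    for t in range(1, T):
        e[:, t] = rho * e[:, t - 1] + np.sqrt(1 - rho**2) * rng.normal(size=S)
    y = e

    treated = rng.permutation(S) < S // 2                        # half the states
    D = np.outer(treated, np.arange(T) >= T // 2).astype(float)  # placebo law

    yt, Dt = demean(y).ravel(), demean(D).ravel()
    b = (Dt @ yt) / (Dt @ Dt)                          # DD point estimate
    resid = yt - b * Dt
    dof = S * T - S - T                                # rough df after FEs
    se = np.sqrt(resid @ resid / dof / (Dt @ Dt))      # conventional iid SE
    rejections += abs(b / se) > 1.96

print(rejections / reps)   # far above the nominal 0.05 when rho is high
```

With rho this high the placebo "law" comes out significant several times more often than the nominal 5%, which is the paper's headline problem; clustering or collapsing to pre/post, as they recommend, repairs it.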

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing simply one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse (fewer than 5 percent) if you add in the requirement that the 2SLS CI exclude the OLS estimate. (A toy simulation of the 2SLS-vs-OLS contrast follows below.)
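A toy version of that contrast (my own data-generating process, not Young's bootstrap analysis): with a confounder and a weak-ish instrument, OLS is biased but tight, while just-identified 2SLS is centered nearer the truth at the cost of much greater dispersion.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 500, 2000, 1.0
ols, tsls = [], []

for _ in range(reps):
    z = rng.normal(size=n)                   # instrument
    u = rng.normal(size=n)                   # unobserved confounder
    x = 0.2 * z + u + rng.normal(size=n)     # weak-ish first stage
    y = beta * x + u + rng.normal(size=n)    # u biases OLS upward
    ols.append((x @ y) / (x @ x))            # no-intercept OLS (means ~0)
    tsls.append((z @ y) / (z @ x))           # just-identified IV / 2SLS

for name, est in (("OLS", ols), ("2SLS", tsls)):
    lo, med, hi = np.percentile(est, [25, 50, 75])
    print(f"{name:5s} median {med:.2f}  IQR [{lo:.2f}, {hi:.2f}]")

# Typical output: OLS piles up tightly around its biased value (~1.5);
# 2SLS is centered nearer beta = 1 but is many times more dispersed.
# Shrink the first-stage coefficient toward 0 and single draws start to
# produce the wild, outlier-driven estimates catalogued above.
```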

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking are a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.
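Their machinery is more elaborate, but the core intuition can be sketched as a caliper test (my simplified version; the z-statistics below are invented): absent selective publication, test statistics just above and just below 1.96 should be about equally common.

```python
import math

def caliper_test(z_stats, z0=1.96, width=0.2):
    """Count |z|-statistics just below vs just above the threshold z0.
    With no selective publication the split should be roughly 50/50."""
    below = sum(z0 - width <= abs(z) < z0 for z in z_stats)
    above = sum(z0 <= abs(z) < z0 + width for z in z_stats)
    n = below + above
    if n == 0:
        return below, above, float("nan")
    # normal approximation to a two-sided binomial(n, 1/2) test
    zval = abs(above - n / 2) / math.sqrt(n / 4)
    p = 2 * (1 - 0.5 * (1 + math.erf(zval / math.sqrt(2))))
    return below, above, p

# invented z-statistics bunched just over 1.96
zs = [1.80, 1.85, 1.90, 1.97, 1.99, 2.00, 2.02, 2.05, 2.10, 2.13, 1.50, 2.60]
print(caliper_test(zs))   # (3, 7, p~0.2): suggestive bunching, tiny sample
```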

https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.

https://twitter.com/wwwojtekk/status/1190731344336293889
https://archive.is/EZu0h
Great (not completely new but still good to have it in one place) discussion of RCTs and inference in economics by Deaton, my favorite sentences (more general than just about RCT) below
Randomization in the tropics revisited: a theme and eleven variations: https://scholar.princeton.edu/sites/default/files/deaton/files/deaton_randomization_revisited_v3_2019.pdf
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated  garett-jones 
june 2017 by nhaliday
To err is human; so is the failure to admit it
Lowering the cost of admitting error could help defuse these crises. A new issue of Econ Journal Watch, an online journal, includes a symposium in which prominent economic thinkers are asked to provide their “most regretted statements”. Held regularly, such exercises might take the shame out of changing your mind. Yet the symposium also shows how hard it is for scholars to grapple with intellectual regret. Some contributions are candid; Tyler Cowen’s analysis of how and why he underestimated the risk of financial crisis in 2007 is enlightening. But some disappoint, picking out regrets that cast the writer in a flattering light or using the opportunity to shift blame.
news  org:rec  org:anglo  org:biz  economics  error  wire-guided  priors-posteriors  publishing  econotariat  marginal-rev  cycles  journos-pundits  responsibility  failure 
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts in in-group beliefs are both natural and no big deal. Humans are built to readily do this, and to forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
In the first place | West Hunter
We hear a lot about innovative educational approaches, and since these silly people have been at this for a long time now, we hear just as often about the innovative approaches that some idiot started up a few years ago and are now crashing in flames.  We’re in steady-state.

I’m wondering if it isn’t time to try something archaic.  In particular, mnemonic techniques, such as the method of loci.  As far as I know, nobody has actually tried integrating the more sophisticated mnemonic techniques into a curriculum.  Sure, we all know useful acronyms, like the one for resistor color codes, but I’ve not heard of anyone teaching kids how to build a memory palace.

https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20106
I have never used formal mnemonic techniques, but life has recently tested me on how well I remember material from my college days. Turns out that I can still do the sorts of math and physics problems that I could then, in subjects like classical mechanics, real analysis, combinatorics, complex variables, quantum mechanics, statistical mechanics, etc. I usually have to crack the book though. Some of that material I have used from time to time, or even fairly often (especially linear algebra), most not. I’m sure I’m slower than I was then, at least on the stuff I haven’t used.

https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20109
Long-term memory capacity must be finite, but I know of no evidence that anyone has ever run out of it. As for the idea that you don’t really need a lot of facts in your head to come up with new ideas: pretty much the opposite of the truth, in a lot of fields.

https://en.wikipedia.org/wiki/Method_of_loci

Mental Imagery > Ancient Imagery Mnemonics: https://plato.stanford.edu/entries/mental-imagery/ancient-imagery-mnemonics.html
In the Middle Ages and the Renaissance, very elaborate versions of the method evolved, using specially learned imaginary spaces (Memory Theaters or Palaces), and complex systems of predetermined symbolic images, often imbued with occult or spiritual significances. However, modern experimental research has shown that even a simple and easily learned form of the method of loci can be highly effective (Ross & Lawrence, 1968; Maguire et al., 2003), as are several other imagery based mnemonic techniques (see section 4.2 of the main entry).

The advantages of organizing knowledge in terms of country and place: http://marginalrevolution.com/marginalrevolution/2018/02/advantages-organizing-knowledge-terms-country-place.html

https://www.quora.com/What-are-the-best-books-on-Memory-Palace

fascinating aside:
US vs Nazi army, Vietnam, the draft: https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20136
You think I know more about this than a retired major general and former head of the War College? I do, of course, but that fact itself should worry you.

He’s not all wrong, but a lot of what he says is wrong. For example, the German Army was a conscript army, so conscription itself can’t explain why the Krauts were about 25% more effective than the average American unit. Nor is it true that the draft in WWII was corrupt.

The US had a different mix of armed forces – more air forces and a much larger Navy than Germany. Those services have higher technical requirements and sucked up a lot of the smarter guys. That was just a product of the strategic situation.

The Germans had better officers, partly because of better training and doctrine, partly the fruit of a different attitude towards the army. The US, much of the time, thought of the Army as a career for losers, but Germans did not.

The Germans had an enormous amount of relevant combat experience, much more than anyone in the US. Spend a year or two on the Eastern Front and you learn.

And the Germans had better infantry weapons.

The US tooth-to-tail ratio was, I think, worse than that of the Germans: some of that was a natural consequence of being an expeditionary force, but some was just a mistake. You want supply sergeants to be literate, but it is probably true that we put too many of the smarter guys into non-combat positions. That changed some when we ran into manpower shortages in late 1944 and combed out the support positions.

This guy is back-projecting Vietnam problems into WWII – he’s mostly wrong.

more (more of a focus on US Marines than Army): https://www.quora.com/Were-US-Marines-tougher-than-elite-German-troops-in-WW2/answer/Joseph-Scott-13
west-hunter  scitariat  speculation  ideas  proposal  education  learning  retention  neurons  the-classics  nitty-gritty  visuo  spatial  psych-architecture  multi  poast  history  mostly-modern  world-war  war  military  strategy  usa  europe  germanic  cold-war  visual-understanding  cartoons  narrative  wordlessness  comparison  asia  developing-world  knowledge  metabuch  econotariat  marginal-rev  discussion  world  thinking  government  local-global  humility  wire-guided  policy  iron-age  mediterranean  wiki  reference  checklists  exocortex  early-modern  org:edu  philosophy  enlightenment-renaissance-restoration-reformation  qra  q-n-a  books  recommendations  list  links  ability-competence  leadership  elite  higher-ed  math  physics  linear-algebra  cost-benefit  prioritizing  defense  martial  war-nerd 
may 2017 by nhaliday
Positively wrong | West Hunter
Wanting something to be true doesn’t make it true – but sometimes, desperately wanting something to be true pays off. Sometimes because you’re actually right (by luck), and that passion helps you put in the work required to establish it, sometimes because your deluded quest ends up finding something else of actual value – sometimes far more valuable than what you were looking for.
west-hunter  scitariat  discussion  rant  history  early-modern  age-of-discovery  usa  europe  the-great-west-whale  mediterranean  space  big-peeps  innovation  discovery  error  social-science  realness  info-dynamics  truth  wire-guided  is-ought  the-trenches  alt-inst  creative 
may 2017 by nhaliday
Faces in the Clouds | West Hunter
This was a typical Iraq story: somehow, we had developed an approach to intelligence that reliably produced fantastically wrong answers, at vast expense. What was so special about Iraq? Nothing, probably – except that we acquired ground truth.

https://westhunt.wordpress.com/2013/06/19/faces-in-the-clouds/#comment-15397
Those weren’t leads, any more than there are really faces in the clouds. They were excuses to sell articles, raise money, and finally one extra argument in favor of a pointless war. Without a hard fact or two, it’s all vapor, useless.

Our tactical intelligence was fine in the Gulf War, but that doesn’t mean that the military, or worse yet the people who make and influence decisions had any sense, then or now.

For example, I have long had an amateur interest in these things, and I got the impression, in the summer of 1990, that Saddam Hussein was about to invade Kuwait. I was telling everyone at work that Saddam was about to invade, till they got bored with it. This was about two weeks before it actually happened. I remember thinking about making a few investments based on that possible event, but never got around to it, partly because I was really sleepy, since we had a month-old baby girl at home.

As I recall, the “threat officer” at the CIA warned about this, but since the higher-ups ignored him, his being correct embarrassed them, so he was demoted.

The tactical situation was as favorable as it ever gets, and most of it was known. We had near-perfect intelligence: satellite recon, JSTARS, etc. Complete air domination, everything from Warthogs to F-15s. Months to get ready. A huge qualitative weapons superiority. For example, our tanks outranged theirs by about a factor of two, had computer-controlled aiming, better armor, infrared sights, etc etc etc etc. I counted something like 13 separate war-winning advantages at the time, and that count was obviously incomplete. And one more: Arabs make terrible soldiers, generally, and Iraqis were among the worst.

But I think that most of the decisionmakers didn’t realize how easy it would be – at all – and I’ve never seen any sign that Colin Powell did either. He’s a “C” student type – not smart. Schwarzkopf may have understood what was going on: for all I know he was another Manstein, but you can’t show how good you are when you beat a patzer.

https://westhunt.wordpress.com/2013/06/19/faces-in-the-clouds/#comment-15420
For me it was a hobby – I was doing adaptive optics at the time in Colorado Springs. All I knew about particular military moves was from the newspapers, but my reasoning went like this:

A. Kuwait had a lot of oil. Worth stealing, if you could get away with it.

B. Kuwait was militarily impotent and had no defense treaty with anyone. Most people found Kuwaitis annoying.

C. Iraq owed Kuwait something like 30 billion dollars, and was generally deep in debt due to the long conflict with Iran.

D. I figured that there was a fair chance that the Iraqi accusations of Kuwaiti slant drilling were true.

E. There were widely reported Iraqi troop movements towards Kuwait.

F. Most important was my evaluation of Saddam, from watching the long war with Iran. I thought that Saddam was a particular combination of cocky and stupid, the sort of guy to do something like this. At the time I did not know about April Glaspie’s, shall we say, poorly chosen comments.
west-hunter  scitariat  discussion  MENA  iraq-syria  stories  intel  track-record  generalization  truth  error  wire-guided  priors-posteriors  info-dynamics  multi  poast  being-right  people  statesmen  usa  management  incentives  impetus  energy-resources  military  arms  analysis  roots  alien-character  ability-competence  cynicism-idealism 
april 2017 by nhaliday
Was the Wealth of Nations Determined in 1000 BC?
Our most interesting, strong, and robust results are for the association of 1500 AD technology with per capita income and technology adoption today. We also find robust and significant technological persistence from 1000 BC to 0 AD, and from 0 AD to 1500 AD.

migration-adjusted ancestry predicts current economic growth and technology adoption today

https://economix.blogs.nytimes.com/2010/08/02/was-todays-poverty-determined-in-1000-b-c/

Putterman-Weil:
Post-1500 Population Flows and the Long Run Determinants of Economic Growth and Inequality: http://www.nber.org/papers/w14448
Persistence of Fortune: Accounting for Population Movements, There Was No Post-Columbian Reversal: http://sci-hub.tw/10.1257/mac.6.3.1
Extended State History Index: https://sites.google.com/site/econolaols/extended-state-history-index
Description:
The data set extends and replaces previous versions of the State Antiquity Index (originally created by Bockstette, Chanda and Putterman, 2002). The updated data extends the previous Statehist data into the years before 1 CE, to the first states in Mesopotamia (in the fourth millennium BCE), along with filling in the years 1951 – 2000 CE that were left out of past versions of the Statehist data.
The construction of the index follows the principles developed by Bockstette et al (2002). First, the duration of state existence is established for each territory defined by modern-day country borders. Second, this duration is divided into 50-year periods. For each half-century from the first period (state emergence) onwards, the authors assign scores to reflect three dimensions of state presence, based on the following questions: 1) Is there a government above the tribal level? 2) Is this government foreign or locally based? 3) How much of the territory of the modern country was ruled by this government?
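Mechanically the index is a discounted sum of period scores; a sketch of the bookkeeping follows (the 50-points-per-period scale and the 5%-per-period discount are my recollection of the Statehist convention, and the example country is invented — treat all of it as assumptions):

```python
# Sketch of a Statehist-style score, following the three questions above.
def period_score(has_state, locally_based, territory_share):
    """Score one 50-year period on the three dimensions."""
    if not has_state:
        return 0.0
    rule = 1.0 if locally_based else 0.5   # foreign rule gets half weight
    return 50 * rule * territory_share     # assumed 50-point period scale

def statehist(periods, discount=0.05):
    """Discounted sum over periods (most recent last), normalized to [0, 1]."""
    w = lambda i: (1 - discount) ** (len(periods) - 1 - i)
    raw = sum(period_score(*p) * w(i) for i, p in enumerate(periods))
    max_raw = sum(50 * w(i) for i in range(len(periods)))
    return raw / max_raw

# Hypothetical territory: no state; foreign rule over half the land;
# then a full local state for two periods.
print(statehist([(False, False, 0.0), (True, False, 0.5),
                 (True, True, 1.0), (True, True, 1.0)]))  # ~0.59
```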

Creators: Oana Borcan, Ola Olsson & Louis Putterman

State History and Economic Development: Evidence from Six Millennia∗: https://drive.google.com/file/d/1cifUljlPpoURL7VPOQRGF5q9H6zgVFXe/view
The presence of a state is one of the most reliable historical predictors of social and economic development. In this article, we complete the coding of an extant indicator of state presence from 3500 BCE forward for almost all but the smallest countries of the world today. We outline a theoretical framework where accumulated state experience increases aggregate productivity in individual countries but where newer or relatively inexperienced states can reach a higher productivity maximum by learning from the experience of older states. The predicted pattern of comparative development is tested in an empirical analysis where we introduce our extended state history variable. Our key finding is that the current level of economic development across countries has a hump-shaped relationship with accumulated state history.

nonlinearity confirmed in this other paper:
State and Development: A Historical Study of Europe from 0 AD to 2000 AD: https://ideas.repec.org/p/hic/wpaper/219.html
After addressing conceptual and practical concerns on its construction, we present a measure of the mean duration of state rule that is aimed at resolving some of these issues. We then present our findings on the relationship between our measure and local development, drawing from observations in Europe spanning from 0 AD to 2000 AD. We find that during this period, the mean duration of state rule and the local income level have a nonlinear, inverse U-shaped relationship, controlling for a set of historical, geographic and socioeconomic factors. Regions that have historically experienced short or long duration of state rule on average lag behind in their local wealth today, while those that have experienced medium-duration state rule on average fare better.

Figure 1 shows all borders that existed during this period
Figure 4 shows quadratic fit

I wonder if the U-shape is due to an Ibn Khaldun-Turchin style effect on asabiya? They suggest sunk costs and ossified institutions.
study  economics  growth-econ  history  antiquity  medieval  cliometrics  macro  path-dependence  hive-mind  garett-jones  spearhead  biodet  🎩  🌞  human-capital  divergence  multi  roots  demographics  the-great-west-whale  europe  china  asia  technology  easterly  definite-planning  big-picture  big-peeps  early-modern  stylized-facts  s:*  broad-econ  track-record  migration  assimilation  chart  frontier  prepping  discovery  biophysical-econ  cultural-dynamics  wealth-of-nations  ideas  occident  microfoundations  news  org:rec  popsci  age-of-discovery  expansionism  conquest-empire  pdf  piracy  world  developing-world  deep-materialism  dataset  time  data  database  time-series  leviathan  political-econ  polisci  iron-age  mostly-modern  government  institutions  correlation  curvature  econ-metrics  wealth  geography  walls  within-group  nonlinearity  convexity-curvature  models  marginal  wire-guided  branches  cohesion  organizing  hari-seldon 
march 2017 by nhaliday
Overcoming Bias : Surprising Popularity
This week Nature published some empirical data on a surprising-popularity consensus mechanism (a previously published mechanism, e.g., Science in 2004, with variations going by the name “Bayesian Truth Serum”). The idea is to ask people to pick from several options, and also to have each person forecast the distribution of opinion among others. The options that are picked surprisingly often, compared to what participants expected, are suggested as more likely true, and those who pick such options as better informed.
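The selection rule itself fits in a few lines (a sketch of the mechanism as described above; the vote counts and forecasts are invented):

```python
# Surprisingly-popular answer selection: pick the option whose actual
# vote share most exceeds the share respondents predicted it would get.
def surprisingly_popular(votes, predicted_shares):
    """votes: {option: count}; predicted_shares: {option: mean forecast}."""
    total = sum(votes.values())
    surprise = {o: votes[o] / total - predicted_shares[o] for o in votes}
    return max(surprise, key=surprise.get)

# Shape of the classic example: "Is Philadelphia the capital of
# Pennsylvania?" A majority says yes, and even yes-voters expect "yes"
# to dominate -- but "no" is picked more often than forecast, so it wins.
votes = {"yes": 60, "no": 40}
predicted = {"yes": 0.75, "no": 0.25}   # average forecast of others' answers
print(surprisingly_popular(votes, predicted))   # -> "no" (0.40 > 0.25)
```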

http://www.nature.com/nature/journal/v541/n7638/full/nature21054.html
http://science.sciencemag.org/content/306/5695/462

https://www.reddit.com/r/slatestarcodex/comments/5qhvf0/a_solution_to_the_singlequestion_crowd_wisdom/
http://lesswrong.com/r/discussion/lw/okv/why_is_the_surprisingly_popular_answer_correct/

different one: http://www.pnas.org/content/114/20/5077.full.pdf
We show that market-based incentive systems produce herding effects, reduce information available to the group, and restrain collective intelligence. Therefore, we propose an incentive scheme that rewards accurate minority predictions and show that this produces optimal diversity and collective predictive accuracy. We conclude that real world systems should reward those who have shown accuracy when the majority opinion has been in error.
ratty  hanson  commentary  study  summary  org:nat  psychology  social-psych  social-choice  coordination  🤖  decision-making  multi  reddit  social  ssc  contrarianism  prediction-markets  alt-inst  lesswrong  ensembles  wire-guided  meta:prediction  info-econ  info-dynamics  pdf  novelty  diversity 
january 2017 by nhaliday
Ars longa, vita brevis - Wikipedia
pronounced arrz long-uh, vite-uh brev-is

Vita brevis,
ars longa,
occasio praeceps,
experimentum periculosum,
iudicium difficile.

Life is short,
and art long,
opportunity fleeting,
experiment perilous,
and judgment difficult.
language  aphorism  meaningness  europe  mediterranean  history  wiki  reference  death  foreign-lang  time  iron-age  medieval  the-classics  wisdom  nihil  short-circuit  wire-guided  s:*  poetry 
january 2017 by nhaliday
natural language processing blog: Whence your reward function?
I think the most substantial issue is the fact that game playing is a simulated environment and the reward function is generally crafted to make humans find the games fun, which usually means frequent small rewards that point you in the right direction. This is exactly where RL works well, and something that I'm not sure is a reasonable assumption in the real world.
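A toy illustration of the "frequent small rewards" point (a hypothetical 1-D chain world of my own construction, not the blog's example): with a dense shaping reward, tabular Q-learning is steered to the goal almost immediately; with the sparse goal-only reward it must first stumble there by random walk.

```python
import random

def steps_until_goal(dense, N=30, alpha=0.5, gamma=0.95, eps=0.1, cap=50000):
    """Tabular Q-learning on a chain (start 0, goal N-1); returns the
    number of steps taken before the goal is reached for the first time."""
    Q = [[0.0, 0.0] for _ in range(N)]   # per-state values for [left, right]
    s, steps = 0, 0
    while s != N - 1 and steps < cap:
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)      # explore, or break ties randomly
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        # dense: small shaping reward for progress; sparse: goal reward only
        r = 1.0 if s2 == N - 1 else (0.01 * (s2 - s) if dense else 0.0)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s, steps = s2, steps + 1
    return steps

random.seed(0)
print("dense :", steps_until_goal(True))    # tens of steps: reward points the way
print("sparse:", steps_until_goal(False))   # hundreds: pure random walk first
```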
acmtariat  critique  reinforcement  deep-learning  machine-learning  research  research-program  org:bleg  nibble  wire-guided  cost-benefit 
december 2016 by nhaliday
Thought Patterns: Marginal · Alex Guzey
Problem: you have a certain action you want to be doing but when the moment comes you forget about it or the trigger just never fully comes to your attention.

Example: Instead of postponing small tasks (e.g. taking out the trash) I want to do them immediately, but when they actually come up, I forget about this intention and continue with whatever I was doing before, i.e. telling myself I’ll do them later.

How to solve? Make these if-else action plans stay somewhere at the back of the mind, preferably not far from working memory, always on the edge of awareness.

Solution: an Anki deck with a maximum card interval of 1 day and a long initial learning curve.
ratty  advice  lifehack  rationality  akrasia  hmm  discipline  neurons  habit  workflow  🦉  wire-guided  skeleton  gtd  time-use  s:*  metabuch 
december 2016 by nhaliday
How to do career planning properly - 80,000 Hours
- A/B/Z plans
- make a list of red flags and commit to reviewing at some point (and at some interval)
advice  strategy  career  planning  80000-hours  long-term  thinking  guide  summary  checklists  rat-pack  tactics  success  working-stiff  flexibility  wire-guided  progression  ratty 
november 2016 by nhaliday
COS597C: How to solve it
- Familiarity with tools. You have to know the basic mathematical and conceptual tools, and over the semester we will encounter quite a few of them.
- Background reading on your topic. What is already known and how was it proven? Research involves figuring out how to stand on the shoulders of others (could be giants, midgets, or normal-sized people).
- Ability to generate new ideas and spot the ones that don't work. I cannot stress the second part enough. The only way you generate new ideas is by shooting down the ones you already have.
- Flashes of genius. Somewhat overrated; the other three points are more important. Insights come to the well-prepared.
course  tcs  princeton  yoga  👳  unit  toolkit  metabuch  problem-solving  sanjeev-arora  wire-guided  s:*  p:** 
october 2016 by nhaliday
Why Constant Learners All Embrace the 5-Hour Rule – The Mission – Medium
better than the title suggests; e.g., Ben Franklin's personal routine looks a lot like what I arrived at independently
growth  akrasia  advice  vulgar  habit  org:med  productivity  learning  creative  wire-guided  practice  time-use  studying  time  investing 
august 2016 by nhaliday
orthonormal comments on Where to Intervene in a Human? - Less Wrong
The highest-level hack I've found useful is to make a habit of noticing and recording the details of any part of my life that gives me trouble. It's amazing how quickly patterns start to jump out when you've assembled actual data about something that's vaguely frustrated you for a while.
lifehack  productivity  workflow  rationality  advice  akrasia  quantified-self  growth  habit  discipline  lesswrong  ratty  rat-pack  biases  decision-making  🦉  wire-guided  time-use  s:null 
july 2016 by nhaliday