nhaliday + pdf   617

The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs, but it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one had noticed before, the theorems were still true, and the gaps were due more to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable. [5]) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted [6], and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications); doubtless, as modern math evolves, other fields have sometimes needed to go back and clean up the foundations and will in the future. [7]

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota [13]

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that while editing Mathematical Reviews that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

math and proof assistants:
Case Studies in Proof Checking: http://www.cs.sjsu.edu/faculty/beeson/Masters/KamThesis.pdf
http://www.cs.ru.nl/~freek/comparison/index.html
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk
13 days ago by nhaliday
CPC Elite Perception of the US since the Early 1990s: Wang Huning and Zheng Bijian as Test Cases
What makes this paper distinct from previous research is that it juxtaposes two of the most influential yet under-studied America watchers within the top echelon of the CPC, Wang Huning and Zheng Bijian. To be sure, the two have indelibly shaped CPC attitudes, yet surprisingly enough, although Zheng has been written about extensively in the English language, Wang has hitherto largely remained outside academics’ purview. This paper also aims, in passing, to explore linkages between Wang’s and Zheng’s ideas and those of other well-known America watchers like Liu Mingfu and Yan Xuetong. It is hoped that this comparison will offer clues as to the extent to which the current advisory shaping CPC thinking on the US differs from the previous generation, and as to whether CPC thinking is un-American or anti-American in essence. The conclusions will tie the study together by speculating based on Wang and Zheng’s views about the degree to which New Confucianism, as opposed to Neo-Liberalism, might shape Chinese society in the future.

https://archive.is/Fu4sG
I want someone to translate Wang Huning’s book “America Against America”
For the record, in Chinese that's《美国反对美国》。Wang traveled across USA in the '80s, visiting big cities and small towns. Book lambasted democracy, contrasting the 'ideal' of American rhetoric with the 'reality' of American life. Wang is now one of Xi's closest advisors.
pdf  white-paper  politics  polisci  government  leviathan  elite  china  asia  sinosphere  usa  comparison  democracy  antidemos  social-choice  culture  confucian  civil-liberty  civic  trends  multi  twitter  social  backup  unaffiliated  foreign-lang  map-territory  cynicism-idealism  ideology  essay  summary  thucydides  philosophy  wonkish  broad-econ
21 days ago by nhaliday
Skim / Feature Requests / #138 iphone/ebook support
Skim notes could never work on the iPhone, because Skim notes data depend on AppKit, which is not available in iOS. So any app for iOS would just be some completely separate PDF app that has nothing to do with Skim in particular.
tracker  app  pdf  software  tools  ios  mobile  osx  desktop  workflow  scholar  meta:reading  todo
23 days ago by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around 100x difference? what does that work out to in terms of total workload?

hmm:
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: Actually, a 100x factor on the 7% spent in allocation is pretty high; for allocation-heavy code, heap allocation could be inflating the constant factor by almost an order of magnitude.
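The back-of-envelope above can be checked directly (numbers taken from the quotes; the 100x stack-vs-heap ratio is the rough figure from the benchmark, not a universal constant):

```python
# If dynamic allocation consumes ~7% of all cycles (the Google datacenter
# figure quoted above) and stack allocation were ~100x cheaper, replacing
# every heap allocation with a stack allocation would shrink that 7% to
# 7%/100, saving about 6.93% of total cycles.
heap_fraction = 0.07   # share of all cycles spent in heap allocation
speedup = 100.0        # assumed stack-vs-heap cost ratio (from the benchmark)

saved = heap_fraction * (1 - 1 / speedup)
print(f"cycles saved: {saved:.2%}")  # → cycles saved: 6.93%
```

So the whole-datacenter ceiling is ~7%; the "order of magnitude" worry only applies to code whose own profile is dominated by allocation.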
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory
27 days ago by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
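A minimal illustration of the point (not from the linked answer; `approx_equal` is a hypothetical helper, and Python's stdlib `math.isclose` does essentially the same thing):

```python
def approx_equal(a, b, rel=1e-9, abs_tol=0.0):
    # a relative tolerance scales with the magnitude of the operands;
    # a fixed absolute epsilon does not
    return abs(a - b) <= max(rel * max(abs(a), abs(b)), abs_tol)

# the classic case: 0.1 + 0.2 is not exactly 0.3 in binary floating point
assert 0.1 + 0.2 != 0.3
assert approx_equal(0.1 + 0.2, 0.3)

# a fixed epsilon like 1e-9 is meaningless at large magnitudes:
a, b = 1e16, 1e16 + 2.0    # adjacent representable doubles
assert abs(a - b) > 1e-9   # "different" by the naive epsilon test...
assert approx_equal(a, b)  # ...but equal to within relative tolerance
```

Even this only addresses the comparison itself; as the answer stresses, it says nothing about how much rounding error your computation accumulated before the comparison.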

...

Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
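A short sketch of compensated summation as described above (standard algorithm; `math.fsum` is used only as a correctly-rounded reference):

```python
import math

def kahan_sum(xs):
    """Compensated (Kahan) summation."""
    s, c = 0.0, 0.0        # running sum and compensation
    for x in xs:
        y = x - c          # re-inject the low-order bits lost last time
        t = s + y          # big + small: low-order bits of y may be lost
        c = (t - s) - y    # algebraically 0; in floats, exactly what was lost
        s = t
    return s

xs = [0.1] * 1_000_000
exact = math.fsum(xs)                    # correctly rounded reference
naive_err = abs(sum(xs) - exact)         # grows roughly with n
kahan_err = abs(kahan_sum(xs) - exact)   # stays at a few ulps
print(naive_err, kahan_err)
```

Note that a compiler allowed to reassociate floating-point arithmetic (e.g. `-ffast-math` in C) would simplify `(t - s) - y` to zero and destroy the compensation; Python's floats don't have that problem.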

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_1, …, x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]
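The recursive scheme is only a few lines (a sketch; real implementations, e.g. NumPy's sum, use a larger base case and avoid the slicing copies):

```python
import math

def pairwise_sum(xs, base=128):
    # divide and conquer: split in half, sum each half, add the two sums;
    # below `base` elements, fall back to naive in-order summation
    if len(xs) <= base:
        s = 0.0
        for x in xs:
            s += x
        return s
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid], base) + pairwise_sum(xs[mid:], base)

xs = [0.1] * 1_000_000
exact = math.fsum(xs)                          # correctly rounded reference
naive_err = abs(sum(xs) - exact)               # ~ O(εn)
pairwise_err = abs(pairwise_sum(xs) - exact)   # ~ O(ε log n)
print(naive_err, pairwise_err)
```

The base case is where "nearly the same cost as naive summation" comes from: the recursion itself only touches a small fraction of the additions.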

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n) or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually <3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia
8 weeks ago by nhaliday
[1803.00085] Chinese Text in the Wild
We introduce Chinese Text in the Wild, a very large dataset of Chinese text in street view images.

...

We give baseline results using several state-of-the-art networks, including AlexNet, OverFeat, Google Inception and ResNet for character recognition, and YOLOv2 for character detection in images. Overall Google Inception has the best performance on recognition with 80.5% top-1 accuracy, while YOLOv2 achieves an mAP of 71.0% on detection. Dataset, source code and trained models will all be publicly available on the website.
nibble  pdf  papers  preprint  machine-learning  deep-learning  deepgoog  state-of-art  china  asia  writing  language  dataset  error  accuracy  computer-vision  pic  ocr
8 weeks ago by nhaliday
Why is Software Engineering so difficult? - James Miller
basic message: No silver bullet!

most interesting nuggets:
Scale and Complexity
- Windows 7 > 50 million LOC
Expect a staggering number of bugs.

Bugs?
- Well-written C and C++ code contains some 5 to 10 errors per 100 LOC after a clean compile, but before inspection and testing.
- At a 5% rate any 50 MLOC program will start off with some 2.5 million bugs.

Bug removal
- Testing typically exercises only half the code.

Better bug removal?
- “There are better ways to do testing that do produce fantastic programs.”
* No, it’s only an opinion!
* In general Software Engineering has ....
NO FACTS!

So why not do this?
- The costs are unbelievable.
- It’s not unusual for the qualification process to produce a half page of documentation for each line of code.
pdf  slides  engineering  nitty-gritty  programming  best-practices  roots  comparison  cost-benefit  software  systematic-ad-hoc  structure  error  frontier  debugging  checking  formal-methods  context  detail-architecture  intricacy  big-picture  system-design  correctness  scale  scaling-tech  shipping  money  data  stylized-facts  street-fighting  objektbuch  pro-rata  estimate  pessimism  degrees-of-freedom  volo-avolo  no-go  things  thinking  summary
9 weeks ago by nhaliday
ON THE GEOMETRY OF NASH EQUILIBRIA AND CORRELATED EQUILIBRIA
Abstract: It is well known that the set of correlated equilibrium distributions of an n-player noncooperative game is a convex polytope that includes all the Nash equilibrium distributions. We demonstrate an elementary yet surprising result: the Nash equilibria all lie on the boundary of the polytope.
pdf  nibble  papers  ORFE  game-theory  optimization  geometry  dimensionality  linear-algebra  equilibrium  structure  differential  correlation  iidness  acm  linear-programming  spatial  characterization  levers
10 weeks ago by nhaliday
Delta debugging - Wikipedia
good overview with examples: https://www.csm.ornl.gov/~sheldon/bucket/Automated-Debugging.pdf

Not as useful for my usecases (mostly contest programming) as QuickCheck. Input is generally pretty structured and I don't have a long history of code in VCS. And when I do have the latter git-bisect is probably enough.
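For reference, the core of delta debugging (ddmin) is simple enough to sketch in a few lines; this is a simplified version (chunk removal only, without the full complement/granularity bookkeeping of the published algorithm):

```python
def ddmin(failing, inp):
    """Shrink `inp` to a smaller input for which `failing` still returns True."""
    n = 2  # current granularity: number of chunks
    while len(inp) >= 2:
        chunk = len(inp) // n
        reduced = False
        for i in range(n):  # try deleting each chunk in turn
            candidate = inp[:i * chunk] + inp[(i + 1) * chunk:]
            if candidate and failing(candidate):
                inp = candidate          # keep the smaller failing input
                n = max(n - 1, 2)        # coarsen again after progress
                reduced = True
                break
        if not reduced:
            if n >= len(inp):
                break                    # already at single-character chunks
            n = min(n * 2, len(inp))     # refine granularity
    return inp

# toy "bug": the program crashes whenever the input contains both '<' and '>'
crashes = lambda s: "<" in s and ">" in s
print(ddmin(crashes, "a<b>cdefg"))  # → <>
```

Each shrink step costs one test run per candidate, which is why ddmin pairs naturally with an automated, deterministic failure oracle (the same property that makes git-bisect work).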

good book tho: http://www.whyprogramsfail.com/toc.php
WHY PROGRAMS FAIL: A Guide to Systematic Debugging
wiki  reference  programming  systems  debugging  c(pp)  python  tools  devtools  links  hmm  formal-methods  divide-and-conquer  vcs  git  search  yak-shaving  pdf  white-paper  multi  examples  stories  books  unit  caltech  recommendations  advanced  correctness
12 weeks ago by nhaliday
Sci-Hub | The genetics of human fertility. Current Opinion in Psychology, 27, 41–45 | 10.1016/j.copsyc.2018.07.011
very short

Overall, there is a suggestion of two different reproductive strategies proving to be successful in modern Western societies: (1) a strategy associated with socially conservative values, including a high commitment to the bearing of children within marriage; and (2) a strategy associated with antisocial behavior, early sexual experimentation, a variety of sexual partners, low educational attainment, low commitment to marriage, haphazard pregnancies, and indifference to politics. This notion of distinct lifestyles characterized in common by relatively high fertility deserves further empirical and theoretical study.
pdf  piracy  study  fertility  biodet  behavioral-gen  genetics  genetic-correlation  iq  education  class  right-wing  politics  ideology  long-short-run  time-preference  strategy  planning  correlation  life-history  dysgenics  rot  personality  psychology  gender  gender-diff  fisher  giants  old-anglo  tradition  religion  psychiatry  disease  autism  👽  stress  variance-components  equilibrium  class-warfare
march 2019 by nhaliday
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%. Finally, their Information Rates differ only by 4%.

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
--
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
...
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.

--

A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.
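The byte arithmetic in the last paragraph is easy to verify (UTF-8 assumed, as in the quote; the sample words are mine):

```python
# ASCII letters encode to 1 byte each in UTF-8; BMP CJK characters to 3.
english = "language"   # 8 characters -> 8 bytes
chinese = "语言"        # 2 characters ("language") -> 6 bytes
assert len(english.encode("utf-8")) == 8
assert len(chinese.encode("utf-8")) == 6   # 3 bytes per character
print(len(english), len(english.encode("utf-8")),
      len(chinese), len(chinese.encode("utf-8")))
```

So a Chinese word can be visually shorter while costing more bytes, which is why the comparison has to be done after compression to mean anything.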

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail
february 2019 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages
april 2018 by nhaliday
Society of Mind - Wikipedia
A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

This idea is perhaps best summarized by the following quote:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

https://en.wikipedia.org/wiki/Modularity_of_mind

The modular organization of human anatomical brain networks: Accounting for the cost of wiring: https://www.mitpressjournals.org/doi/pdfplus/10.1162/NETN_a_00002
Brain networks are expected to be modular. However, existing techniques for estimating a network’s modules make it difficult to assess the influence of organizational principles such as wiring cost reduction on the detected modules. Here we present a modification of an existing module detection algorithm that allowed us to focus on connections that are unexpected under a cost-reduction wiring rule and to identify modules from among these connections. We applied this technique to anatomical brain networks and showed that the modules we detected differ from those detected using the standard technique. We demonstrated that these novel modules are spatially distributed, exhibit unique functional fingerprints, and overlap considerably with rich clubs, giving rise to an alternative and complementary interpretation of the functional roles of specific brain regions. Finally, we demonstrated that, using the modified module detection approach, we can detect modules in a developmental dataset that track normative patterns of maturation. Collectively, these findings support the hypothesis that brain networks are composed of modules and provide additional insight into the function of those modules.
books  ideas  speculation  structure  composition-decomposition  complex-systems  neuro  ai  psychology  cog-psych  intelligence  reduction  wiki  giants  philosophy  number  cohesion  diversity  systematic-ad-hoc  detail-architecture  pdf  study  neuro-nitgrit  brain-scan  nitty-gritty  network-structure  graphs  graph-theory  models  whole-partial-many  evopsych  eden  reference  psych-architecture  article  coupling-cohesion
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI. Even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should have available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations that smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
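Hanson’s parenthetical claim (a common “g” factor in task performance even with independent module variation) can be illustrated with a minimal simulation. This is only a sketch: the agent count, module count, and modules-per-task numbers below are arbitrary assumptions for illustration, not from the post.

```python
import random

random.seed(0)

N_AGENTS, N_MODULES, MODULES_PER_TASK = 500, 50, 30

# Independent variation: each agent's ability on each mental module is
# drawn separately, with no underlying general factor built in.
abilities = [[random.gauss(0, 1) for _ in range(N_MODULES)]
             for _ in range(N_AGENTS)]

def pearson(xs, ys):
    """Pearson correlation coefficient, stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Each task draws on many modules; performance on a task is the sum of
# the abilities of the modules that task uses.
task_a = random.sample(range(N_MODULES), MODULES_PER_TASK)
task_b = random.sample(range(N_MODULES), MODULES_PER_TASK)
scores_a = [sum(a[m] for m in task_a) for a in abilities]
scores_b = [sum(a[m] for m in task_b) for a in abilities]

# Because broad tasks inevitably share modules, scores on different tasks
# correlate: a "g" factor emerges even though modules vary independently.
r = pearson(scores_a, scores_b)
print(f"cross-task correlation: {r:.2f}")
```

With 30-of-50-module tasks, any two tasks must share at least ten modules, so a substantial positive cross-task correlation is guaranteed by construction, which is the point of the parenthetical.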

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying as best it can to usefully innovate, and to use abstraction to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble
march 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
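The evolutionary-stability intuition above (an extortioner profits against a naive cooperator but does poorly against its own type) can be checked with a small memory-one IPD simulation. A sketch, assuming the standard payoffs T, R, P, S = 5, 3, 1, 0 and the χ = 3 extortionate strategy given as an example by Press and Dyson; the round count and seed are arbitrary.

```python
import random

random.seed(1)

# Payoffs to the row player for last-round outcomes (me, them),
# with the standard values T, R, P, S = 5, 3, 1, 0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# Press & Dyson's example extortionate ZD strategy (chi = 3):
# probability of cooperating given last round's (my move, their move).
EXTORT = {('C', 'C'): 11/13, ('C', 'D'): 1/2, ('D', 'C'): 7/26, ('D', 'D'): 0.0}

ALLC = {k: 1.0 for k in EXTORT}  # unconditional cooperator

def play(strat_x, strat_y, rounds=200_000):
    """Average per-round payoffs for two memory-one strategies."""
    x, y = 'C', 'C'  # both open with cooperation
    sx = sy = 0.0
    for _ in range(rounds):
        sx += PAYOFF[(x, y)]
        sy += PAYOFF[(y, x)]
        x, y = ('C' if random.random() < strat_x[(x, y)] else 'D',
                'C' if random.random() < strat_y[(y, x)] else 'D')
    return sx / rounds, sy / rounds

# Against a cooperator, the extortioner enforces the ZD guarantee
# s_x - P = 3 * (s_y - P): it claims triple the surplus.
sx, sy = play(EXTORT, ALLC)
print(f"extortioner: {sx:.2f}, cooperator: {sy:.2f}")

# Against its own type, p(DD) = 0 for both players, so the pair is
# absorbed into permanent mutual defection: payoffs collapse toward P = 1.
se, _ = play(EXTORT, EXTORT)
print(f"extortioner vs itself: {se:.2f}")
```

The second run makes the key intuition concrete: extortion can invade cooperators, but two extortioners shred each other’s surplus, which is why it fails to be evolutionarily stable in large populations.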

https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are such that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens, and if you are silenced, the ritual cannot happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel selection (group vs individual selection):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a bargain take-it-or-leave-it, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters4,5 remains scarce6–8. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously9,10.

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity11,12. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature4,5, reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.
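The incentive logic of altruistic punishment can be made concrete with a one-shot public goods game. A sketch only: the group size, endowment, multiplier, fine, and punishment cost below are assumed numbers chosen for illustration, not parameters from Fehr and Fischbacher’s paper.

```python
# One-shot public goods game: N players each hold an endowment and choose
# to contribute it to a common pot or keep it; the pot is multiplied by
# MULT and split equally among all players regardless of contribution.
N, ENDOW, MULT = 4, 10, 1.6   # assumed parameters
FINE, PUNISH_COST = 9, 3      # sanction on a defector / cost borne by punisher

def payoff(contributes, others_contributing, punished):
    pot = (others_contributing + contributes) * ENDOW * MULT
    p = (0 if contributes else ENDOW) + pot / N
    return p - (FINE if punished else 0)

# Without punishment, free-riding strictly beats contributing:
coop = payoff(True, 3, punished=False)     # all four contribute  -> 16.0
defect = payoff(False, 3, punished=False)  # lone free-rider      -> 22.0
print(coop, defect)

# A strong reciprocator's sanction flips the incentive: defection now
# pays less than cooperation, so punishment sustains cooperation even
# in a non-repeated interaction with no reputation gains.
defect_punished = payoff(False, 3, punished=True)  # 22.0 - 9 = 13.0
print(defect_punished)

# The punisher bears the sanction's cost personally, gaining nothing:
# this is what makes the punishment "altruistic".
punisher = payoff(True, 2, punished=False) - PUNISH_COST
print(punisher)
```

With these numbers the fine (9) exceeds the free-rider’s gain (6), which is the condition for strong reciprocity to make cooperation pay; the punisher ends up worse off than a non-punishing cooperator, matching the paper’s point that strong reciprocators bear costs without individual benefit.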

...

We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l
march 2018 by nhaliday
China’s Ideological Spectrum
We find that public preferences are weakly constrained, and the configuration of preferences is multidimensional, but the latent traits of these dimensions are highly correlated. Those who prefer authoritarian rule are more likely to support nationalism, state intervention in the economy, and traditional social values; those who prefer democratic institutions and values are more likely to support market reforms but less likely to be nationalistic and less likely to support traditional social values. This latter set of preferences appears more in provinces with higher levels of development and among wealthier and better-educated respondents.

Enlightened One-Party Rule? Ideological Differences between Chinese Communist Party Members and the Mass Public: https://journals.sagepub.com/doi/abs/10.1177/1065912919850342
A popular view of nondemocratic regimes is that they draw followers mainly from those with an illiberal, authoritarian mind-set. We challenge this view by arguing that there exist a different class of autocracies that rule with a relatively enlightened base. Leveraging multiple nationally representative surveys from China over the past decade, we substantiate this claim by estimating and comparing the ideological preferences of Chinese Communist Party members and ordinary citizens. We find that party members on average hold substantially more modern and progressive views than the public on issues such as gender equality, political pluralism, and openness to international exchange. We also explore two mechanisms that may account for this party–public value gap—selection and socialization. We find that while education-based selection is the most dominant mechanism overall, socialization also plays a role, especially among older and less educated party members.

https://archive.is/ktcOY
Does this control for wealth and education?
--
Perhaps about half the best educated youth joined party.
pdf  study  economics  polisci  sociology  politics  ideology  coalitions  china  asia  things  phalanges  dimensionality  degrees-of-freedom  markets  democracy  capitalism  communism  authoritarianism  government  leviathan  tradition  values  correlation  exploratory  nationalism-globalism  heterodox  sinosphere  multi  antidemos  class  class-warfare  enlightenment-renaissance-restoration-reformation  left-wing  egalitarianism-hierarchy  gender  contrarianism  hmm  regularizer  poll  roots  causation  endogenous-exogenous  selection  network-structure  education  twitter  social  commentary  critique  backup
march 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre
february 2018 by nhaliday
Anisogamy - Wikipedia
Anisogamy is a fundamental concept of sexual dimorphism that helps explain phenotypic differences between sexes.[3] In most species a male and female sex exist, both of which are optimized for reproductive potential. Due to their differently sized and shaped gametes, both males and females have developed physiological and behavioral differences that optimize the individual’s fecundity.[3] Since most egg-laying females typically must bear the offspring and have a more limited reproductive cycle, this typically makes females a limiting factor in the reproductive success rate of males in a species. This process is also true for females selecting males, and assuming that males and females are selecting for different traits in partners, would result in phenotypic differences between the sexes over many generations. This hypothesis, known as Bateman’s Principle, is used to understand the evolutionary pressures put on males and females due to anisogamy.[4] Although this assumption has criticism, it is a generally accepted model for sexual selection within anisogamous species. The selection for different traits depending on sex within the same species is known as sex-specific selection, and accounts for the differing phenotypes found between the sexes of the same species. This sex-specific selection between sexes over time also led to the development of secondary sex characteristics, which assist males and females in reproductive success.

...

Since this process is very energy-demanding and time consuming for the female, mate choice is often integrated into the female’s behavior.[3] Females will often be very selective of the males they choose to reproduce with, for the phenotype of the male can be indicative of the male’s physical health and heritable traits. Females employ mate choice to pressure males into displaying their desirable traits to females through courtship, and if successful, the male gets to reproduce. This encourages males and females of specific species to invest in courtship behaviors as well as traits that can display physical health to a potential mate. This process, known as sexual selection,[3] results in the development of traits to ease reproductive success rather than individual survival, such as the inflated size of a termite queen. It is also important for females to select against potential mates that may have a sexually transmitted infection, for the disease could not only hurt the female’s reproductive ability, but also damage the resulting offspring.[7]

Although not uncommon in males, females are more associated with parental care.[8] Since females are on a more limited reproductive schedule than males, a female often invests more in protecting the offspring to sexual maturity than the male. Like mate choice, the level of parental care varies greatly between species, and is often dependent on the number of offspring produced per sexual encounter.[8]

...

Since females are often the limiting factor in a species’ reproductive success, males are often expected by the females to search and compete for the female, known as intraspecific competition.[4] This can be seen in organisms such as bean beetles, as the male that searches for females more frequently is often more successful at finding mates and reproducing. In species undergoing this form of selection, a fit male would be one that is fast, has more refined sensory organs, and spatial awareness.[4]

Darwinian sex roles confirmed across the animal kingdom: http://advances.sciencemag.org/content/2/2/e1500983.full
Since Darwin’s conception of sexual selection theory, scientists have struggled to identify the evolutionary forces underlying the pervasive differences between male and female behavior, morphology, and physiology. The Darwin-Bateman paradigm predicts that anisogamy imposes stronger sexual selection on males, which, in turn, drives the evolution of conventional sex roles in terms of female-biased parental care and male-biased sexual dimorphism. Although this paradigm forms the cornerstone of modern sexual selection theory, it still remains untested across the animal tree of life. This lack of evidence has promoted the rise of alternative hypotheses arguing that sex differences are entirely driven by environmental factors or chance. We demonstrate that, across the animal kingdom, sexual selection, as captured by standard Bateman metrics, is indeed stronger in males than in females and that it is evolutionarily tied to sex biases in parental care and sexual dimorphism. Our findings provide the first comprehensive evidence that Darwin’s concept of conventional sex roles is accurate and refute recent criticism of sexual selection theory.

Coevolution of parental investment and sexually selected traits drives sex-role divergence: https://www.nature.com/articles/ncomms12517
Sex-role evolution theory attempts to explain the origin and direction of male–female differences. A fundamental question is why anisogamy, the difference in gamete size that defines the sexes, has repeatedly led to large differences in subsequent parental care. Here we construct models to confirm predictions that individuals benefit less from caring when they face stronger sexual selection and/or lower certainty of parentage. However, we overturn the widely cited claim that a negative feedback between the operational sex ratio and the opportunity cost of care selects for egalitarian sex roles. We further argue that our model does not predict any effect of the adult sex ratio (ASR) that is independent of the source of ASR variation. Finally, to increase realism and unify earlier models, we allow for coevolution between parental investment and investment in sexually selected traits. Our model confirms that small initial differences in parental investment tend to increase due to positive evolutionary feedback, formally supporting long-standing, but unsubstantiated, verbal arguments.

Parental investment, sexual selection and sex ratios: http://www.kokkonuts.org/wp-content/uploads/Parental_investment_review.pdf
The second argument takes the reasonable premise that anisogamy produces a male-biased operational sex ratio (OSR) leading to males competing for mates. Male care is then predicted to be less likely to evolve as it consumes resources that could otherwise be used to increase competitiveness. However, given each offspring has precisely two genetic parents (the Fisher condition), a biased OSR generates frequency-dependent selection, analogous to Fisherian sex ratio selection, that favours increased parental investment by whichever sex faces more intense competition. Sex role divergence is therefore still an evolutionary conundrum. Here we review some possible solutions. Factors that promote conventional sex roles are sexual selection on males (but non-random variance in male mating success must be high to override the Fisher condition), loss of paternity because of female multiple mating or group spawning and patterns of mortality that generate female-biased adult sex ratios (ASR). We present an integrative model that shows how these factors interact to generate sex roles. We emphasize the need to distinguish between the ASR and the operational sex ratio (OSR). If mortality is higher when caring than competing this diminishes the likelihood of sex role divergence because this strongly limits the mating success of the earlier deserting sex. We illustrate this in a model where a change in relative mortality rates while caring and competing generates a shift from a mammalian type breeding system (female-only care, male-biased OSR and female-biased ASR) to an avian type system (biparental care and a male-biased OSR and ASR).

LATE FEMINISM: https://jacobitemag.com/2017/08/01/late-feminism/
Woman has had a good run. For 200,000 years humankind’s anisogamous better (and bigger) half has enjoyed a position of desirability and safety befitting a scarce commodity. She has also piloted the evolutionary destiny of our species, both as a sexual selector and an agitator during man’s Promethean journey. In terms of comfort and agency, the human female is uniquely privileged within the annals of terrestrial biology.

But the era of female privilege is ending, in a steady decline that began around 1572. Woman’s biological niche is being crowded out by capital.

...

Strictly speaking, the breadth of the coming changes extends beyond even civilizational dynamics. They will affect things that are prior. One of the oldest and most practical definitions for a biological species defines its boundary as the largest group of organisms where two individuals, via sexual reproduction, can produce fertile offspring together. The imminent arrival of new reproductive technologies will render the sexual reproduction criterion either irrelevant or massively expanded, depending upon one’s perspective. Fertility of the offspring is similarly of limited relevance, since the modification of gametes will be de rigueur in any case. What this looming technology heralds is less a social revolution than it is a full sympatric speciation event.

Accepting the inevitability of the coming bespoke reproductive revolution, consider a few questions & probable answers regarding our external-womb-grown ubermenschen:

Q: What traits will be selected for?

A: Ability to thrive in a global market economy (i.e. ability to generate value for capital.)

Q: What material substrate will generate the new genomes?

A: Capital equipment.

Q: Who will be making the selection?

A: People, at least initially (who, coincidentally, will be making decisions that map 1-to-1 to the interests of capital).

_Replace any of the above instances of the word capital with women, and you would have accurate answers for most of our species’ history._

...

In terms of pure informational content, the supernova seen from earth can be represented in a singularly compressed way: a flash of light on a black field where there previously was none. A single photon in the cone of the eye, at the limit. Whether … [more]
biodet  deep-materialism  new-religion  evolution  eden  gender  gender-diff  concept  jargon  wiki  reference  bio  roots  explanans  🌞  ideas  EGT  sex  analysis  things  phalanges  matching  parenting  water  competition  egalitarianism-hierarchy  ranking  multi  study  org:nat  nature  meta-analysis  survey  solid-study  male-variability  darwinian  empirical  realness  sapiens  models  evopsych  legacy  investing  uncertainty  outcome-risk  decision-theory  pdf  life-history  chart  accelerationism  horror  capital  capitalism  similarity  analogy  land  gnon  🐸  europe  the-great-west-whale  industrial-revolution  science  kinship  n-factor  speculation  personality  creative  pop-diff  curiosity  altruism  cooperate-defect  anthropology  cultural-dynamics  civil-liberty  recent-selection  technocracy  frontier  futurism  prediction  quotes  aphorism  religion  theos  enhancement  biotech  revolution  insight  history  early-modern  gallic  philosophy  enlightenment-renaissance-restoration-reformation  ci
january 2018 by nhaliday
National Defense Strategy of the United States of America
National Defense Strategy released with clear priority: Stay ahead of Russia and China: https://www.defensenews.com/breaking-news/2018/01/19/national-defense-strategy-released-with-clear-priority-stay-ahead-of-russia-and-china/

https://archive.is/RhBdG
https://archive.is/wRzRN
A saner allocation of US 'defense' funds would be something like 10% nuclear trident, 10% border patrol, & spend the rest inoculating against cyber & biological attacks.
and since the latter 2 are hopeless, just refund 80% of the defense budget.
--
Monopoly on force at sea is arguably worthwhile.
--
Given the value of the US market to any would-be adversary, id be willing to roll the dice & let it ride.
--
subs are part of the triad, surface ships are sitting ducks this day and age
--
But nobody does sink them, precisely because of the monopoly on force. It's a path-dependent equilibrium where (for now) no other actor can reap the benefits of destabilizing the monopoly, and we're probably drastically underestimating the ramifications if/when it goes away.
--
can lethal autonomous weapon systems get some
pdf  white-paper  org:gov  usa  government  trump  policy  nascent-state  foreign-policy  realpolitik  authoritarianism  china  asia  russia  antidemos  military  defense  world  values  enlightenment-renaissance-restoration-reformation  democracy  chart  politics  current-events  sulla  nuclear  arms  deterrence  strategy  technology  sky  oceans  korea  communism  innovation  india  europe  EU  MENA  multi  org:foreign  war  great-powers  thucydides  competition  twitter  social  discussion  backup  gnon  🐸  markets  trade  nationalism-globalism  equilibrium  game-theory  tactics  top-n  hi-order-bits  security  hacker  biotech  terrorism  disease  parasites-microbiome  migration  walls  internet
january 2018 by nhaliday
Sacred text as cultural genome: an inheritance mechanism and method for studying cultural evolution: Religion, Brain & Behavior: Vol 7, No 3
Yasha M. Hartberg & David Sloan Wilson

Any process of evolution requires a mechanism of inheritance for the transmission of information across generations and the expression of phenotypes during each generation. Genetic inheritance mechanisms have been studied for over a century but mechanisms of inheritance for human cultural evolution are far less well understood. Sacred religious texts have the properties required for an inheritance system. They are replicated across generations with high fidelity and are transcribed into action every generation by the invocation and interpretation of selected passages. In this article we borrow concepts and methods from genetics and epigenetics to study the “expressed phenotypes” of six Christian churches that differ along a conservative–progressive axis. Their phenotypic differences, despite drawing upon the same sacred text, can be explained in part by differential expression of the sacred text. Since the invocation and interpretation of sacred texts are often well preserved, our methods allow the expressed phenotypes of religious groups to be studied at any time and place in history.
study  interdisciplinary  bio  sociology  cultural-dynamics  anthropology  religion  christianity  theos  protestant-catholic  politics  ideology  correlation  organizing  institutions  analogy  genetics  genomics  epigenetics  comparison  culture  pdf  piracy  density  flexibility  noble-lie  deep-materialism  new-religion  universalism-particularism  homo-hetero  hypocrisy  group-selection  models  coordination  info-dynamics  evolution  impact  left-wing  right-wing  time  tradition  spreading  sanctity-degradation  coalitions  trees  usa  social-capital  hari-seldon  wisdom  the-basilisk  frequency  sociality  ecology  analytical-holistic
january 2018 by nhaliday
The idea of empire in the "Aeneid" on JSTOR
http://latindiscussion.com/forum/latin/to-rule-mankind-and-make-the-world-obey.11016/
Let's see...Aeneid, Book VI, ll. 851-853:

tu regere imperio populos, Romane, memento
(hae tibi erunt artes), pacique imponere morem,
parcere subiectis et debellare superbos.'

Which Dryden translated as:
To rule mankind, and make the world obey,
Disposing peace and war by thy own majestic way;
To tame the proud, the fetter'd slave to free:
These are imperial arts, and worthy thee."

If you wanted a literal translation,
"You, Roman, remember to rule people by command
(these were arts to you), and impose the custom to peace,
to spare the subjected and to vanquish the proud."

I don't want to derail your thread but pacique imponere morem -- "to impose the custom to peace"
Does it mean "be the toughest kid on the block," as in Pax Romana?

...

That 17th century one is a loose translation indeed. Myself I'd put it as

"Remember to rule over (all) the (world's) races by means of your sovereignty, oh Roman, (for indeed) you (alone) shall have the means (to do so), and to inculcate the habit of peace, and to have mercy on the enslaved and to destroy the arrogant."

http://classics.mit.edu/Virgil/aeneid.6.vi.html
And thou, great hero, greatest of thy name,
Ordain'd in war to save the sinking state,
And, by delays, to put a stop to fate!
Let others better mold the running mass
Of metals, and inform the breathing brass,
And soften into flesh a marble face;
Plead better at the bar; describe the skies,
And when the stars descend, and when they rise.
But, Rome, 't is thine alone, with awful sway,
To rule mankind, and make the world obey,
Disposing peace and war by thy own majestic way;
To tame the proud, the fetter'd slave to free:
These are imperial arts, and worthy thee."
study  article  letters  essay  pdf  piracy  history  iron-age  mediterranean  the-classics  big-peeps  literature  aphorism  quotes  classic  alien-character  sulla  poetry  conquest-empire  civilization  martial  vitality  peace-violence  order-disorder  domestication  courage  multi  poast  universalism-particularism  world  leviathan  foreign-lang  nascent-state  canon  org:junk  org:edu  tradeoffs  checklists  power  strategy  tactics  paradox  analytical-holistic  hari-seldon  aristos  wisdom  janus  parallax
january 2018 by nhaliday
Comparative Litigation Rates
We suggest that the notoriety of the U.S. does not result from the way citizens and judges handle routine disputes, which (different as it may be in developing countries) is not very different from that in other wealthy, democratic societies. Instead, American notoriety results from the peculiarly dysfunctional way judges handle disputes in discrete legal areas such as class actions and punitive damages.
pdf  study  law  institutions  usa  alien-character  stereotypes  leviathan  polisci  political-econ  comparison  britain  japan  asia  europe  gallic  canada  anglo  roots  intricacy  data  pro-rata
december 2017 by nhaliday
The Politics of Mate Choice
TABLE 1. Spousal Concordance on 16 Traits: Pearson’s r (n)

Church attendance .714 (4950)
W-P Index (28 items) .647 (3984)
Drinking frequency .599 (4984)
Political party support .596 (4547)
Education .498 (4957)
Height .227 (4964)
pdf  study  sociology  anthropology  sex  assortative-mating  correlation  things  phalanges  planning  long-term  human-bean  religion  theos  politics  polisci  ideology  ethanol  time-use  coalitions  education  embodied  integrity  sleep  rhythm  personality  psych-architecture  stress  psychiatry  self-report  extra-introversion  discipline  self-control  patience  data  database  list  top-n  objektbuch  values  habit  time  density  twin-study  longitudinal  tradition  time-preference  life-history  selection  psychology  social-psych  flux-stasis  demographics  frequency
december 2017 by nhaliday
Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions: A Meta-Analysis
We found that deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions. We conclude that deliberate practice is important, but not as important as has been argued.
pdf  study  psychology  cog-psych  social-psych  teaching  tutoring  learning  studying  stylized-facts  metabuch  career  long-term  music  games  sports  education  labor  data  list  expert-experience  ability-competence  roots  variance-components  top-n  meta-analysis  practice  quixotic
december 2017 by nhaliday
Behaving Discretely: Heuristic Thinking in the Emergency Department
I find compelling evidence of heuristic thinking in this setting: patients arriving in the emergency department just after their 40th birthday are roughly 10% more likely to be tested for and 20% more likely to be diagnosed with ischemic heart disease (IHD) than patients arriving just before this date, despite the fact that the incidence of heart disease increases smoothly with age.

Figure 1: Proportion of ED patients tested for heart attack
pdf  study  economics  behavioral-econ  field-study  biases  heuristic  error  healthcare  medicine  meta:medicine  age-generation  aging  cardio  bounded-cognition  shift  trivia  cocktail  pro-rata
december 2017 by nhaliday
The Grumpy Economist: Bitcoin and Bubbles
Bitcoin is not a very good money. It is a pure fiat money (no backing), whose value comes from limited supply plus these demands. As such it has the huge price fluctuations we see. It's an electronic version of gold, and the price variation should be a warning to economists who long for a return to gold. My bet is that stable-value cryptocurrencies, offering one dollar per currency unit and low transactions costs, will prosper in the role of money. At least until there is a big inflation or sovereign debt crisis and a stable-value cryptocurrency not linked to government debt emerges.

https://archive.is/Rrbg6
The Kareken-Wallace Cryptocurrency Price Indeterminacy theorem will someday receive the attention it deserves

https://www.mercatus.org/system/files/cryptocurrency-article.pdf
Cryptocurrencies also raise in a new way questions of exchange rate indeterminacy. As Kareken and Wallace (1981) observed, fiat currencies are all alike: slips of paper not redeemable for anything. Under a regime of floating exchange rates and no capital controls, and assuming some version of interest rate parity holds, there are an infinity of exchange rates between any two fiat currencies that constitute an equilibrium in their model.

The question of exchange rate indeterminacy is both more and less striking between cryptocurrencies than between fiat currencies. It is less striking because there are considerably more differences between cryptocurrencies than there are between paper money. Paper money is all basically the same. Cryptocurrencies sometimes have different characteristics from each other. For example, the algorithm used as the basis for mining makes a difference – it determines how professionalised the mining pools become. Litecoin uses an algorithm that tends to make mining less concentrated. Another difference is the capability of the cryptocurrency’s language for programming transactions. Ethereum is a new currency that boasts a much more robust language than Bitcoin. Zerocash is another currency that offers much stronger anonymity than Bitcoin. To the extent that cryptocurrencies differ from each other more than fiat currencies do, those differences might be able to pin down exchange rates in a model like Kareken and Wallace’s.

On the other hand, exchange rate indeterminacy could be more severe among cryptocurrencies than between fiat currencies because it is easy to simply create an exact copy of an open source cryptocurrency. There are even websites on which you can create and download the software for your own cryptocurrency with a few clicks of a mouse. These currencies are exactly alike except for their names and other identifying information. Furthermore, unlike fiat currencies, they don’t benefit from government acceptance or optimal currency area considerations that can tie a currency to a given territory.

Even identical currencies, however, can differ in terms of the quality of governance. Bitcoin currently has high quality governance institutions. The core developers are competent and conservative, and the mining and user communities are serious about making the currency work. An exact Bitcoin clone is likely to have a difficult time competing with Bitcoin unless it can promise similarly high-quality governance. When a crisis hits, users of identical currencies are going to want to hold the one that is most likely to weather the storm. Consequently, between currencies with identical technical characteristics, we think governance creates something close to a winner-take-all market. Network externalities are very strong in payment systems, and the governance question with respect to cryptocurrencies in particular compounds them.

https://archive.is/ldof8
Explaining a price rise via future increases in the asset's value isn't good economics. The invisible hand should be pushing today's price up to the point where it earns normal expected returns. +
I don't doubt the likelihood of a future cryptocurrency being widely used, but that doesn't pin down the price of any one cryptocurrency as the Kareken-Wallace result shows. There may be a big first mover advantage for Bitcoin but ease of replication makes it a fragile dominance.

https://archive.is/CtE6Q
I actually can't believe governments are allowing bitcoin to exist (they must be fully on board with going digital at some point)

btc will eventually come in direct competition with national currencies, which will have to raise rates dramatically, or die

http://www.thebigquestions.com/2017/12/08/matters-of-money/
The technology of Bitcoin Cash is very similar to the technology of Bitcoin. It offers the same sorts of anonymity, security, and so forth. There are some reasons to believe that in the future, Bitcoin Cash will be a bit easier to trade than Bitcoin (though that is not true in the present), and there are some other technological differences between them, but I’d be surprised to learn that those differences are accounting for any substantial fraction of the price differential.

The total supplies of Bitcoins and of Bitcoin Cash are currently about equal (because of the way that Bitcoin Cash originated). In each case, the supply will gradually grow to 21 million and then stop.

Question 1: Given the near identical properties of these two currencies, how can one sell for ten times the price of the other? Perhaps the answer involves the word “bubble”, but I’d be more interested in answers that assume (at least for the sake of argument) that the price of Bitcoin fairly reflects its properties as a store of value. Given that assumption, is the price differential entirely driven by the fact that Bitcoin came first? Is there that much of a first-mover advantage in this kind of game?

Question 2: Given the existence of other precious metals (e.g. platinum) what accounts for the dominance of gold as a physical store of value? (I note, for example, that when people buy gold as a store of value, they don’t often hesitate out of fear that gold will be displaced by platinum in the foreseeable future.) Is this entirely driven by the fact that gold happened to come first?

Question 3: Are Questions 1 and 2 the same question? Are the dominance of Bitcoin in the digital store-of-value market and the dominance of gold in the physical store-of-value market two sides of the same coin, so to speak? Or do they require fundamentally different explanations?

https://archive.is/kqTXg
Champ/Freeman in 2001 explain why the dollar-bitcoin exchange rate is inherently unstable, and why the price of cryptocurrencies is indeterminate:

https://archive.is/Y0OQB
Lay down a marker:
And remember that the modern macro dogma is that monetary systems matter little for prosperity, once bare competence is achieved.
econotariat  randy-ayndy  commentary  current-events  trends  cryptocurrency  bitcoin  money  monetary-fiscal  economics  cycles  multi  twitter  social  garett-jones  pdf  white-paper  article  macro  trade  incentives  equilibrium  backup  degrees-of-freedom  uncertainty  supply-demand  markets  gnon  🐸  government  gedanken  questions  comparison  analogy  explanans  fungibility-liquidity
december 2017 by nhaliday
Relative Quality of Foreign Nurses in the United States
We find a positive wage premium for nurses educated in the Philippines, but not for foreign nurses educated elsewhere. The premium peaked at 8% in 2000, and decreased to 4% in 2010.
pdf  study  economics  labor  industrial-org  migration  human-capital  healthcare  usa  asia  developing-world  general-survey  compensation  econ-productivity  data  ability-competence  quality
december 2017 by nhaliday
The Long-run Effects of Agricultural Productivity on Conflict, 1400-1900∗
This paper provides evidence of the long-run effects of a permanent increase in agricultural productivity on conflict. We construct a newly digitized and geo-referenced dataset of battles in Europe, the Near East and North Africa covering the period between 1400 and 1900 CE. For variation in permanent improvements in agricultural productivity, we exploit the introduction of potatoes from the Americas to the Old World after the Columbian Exchange. We find that the introduction of potatoes permanently reduced conflict for roughly two centuries. The results are driven by a reduction in civil conflicts

#4 An obvious counterfactual is of course the potato blight (1844 and beyond) in Europe. Here’s the Wikipedia page ‘revolutions of 1848’ https://en.wikipedia.org/wiki/Revolutions_of_1848
pdf  study  marginal-rev  economics  broad-econ  cliometrics  history  medieval  early-modern  age-of-discovery  branches  innovation  discovery  agriculture  food  econ-productivity  efficiency  natural-experiment  europe  the-great-west-whale  MENA  war  revolution  peace-violence  trivia  cocktail  stylized-facts  usa  endogenous-exogenous  control  geography  cost-benefit  multi  econotariat  links  poast  wiki  reference  events  roots
december 2017 by nhaliday
Is the speed of light really constant?
So what if the speed of light isn’t the same when moving toward or away from us? Are there any observable consequences? Not to the limits of observation so far. We know, for example, that any one-way speed of light is independent of the motion of the light source to 2 parts in a billion. We know it has no effect on the color of the light emitted to a few parts in 10^20. Aspects such as polarization and interference are also indistinguishable from standard relativity. But that’s not surprising, because you don’t need to assume isotropy for relativity to work. In the 1970s, John Winnie and others showed that all the results of relativity could be modeled with anisotropic light so long as the two-way speed was a constant. The “extra” assumption that the speed of light is a uniform constant doesn’t change the physics, but it does make the mathematics much simpler. Since Einstein’s relativity is the simpler of two equivalent models, it’s the model we use. You could argue that it’s the right one citing Occam’s razor, or you could take Newton’s position that anything untestable isn’t worth arguing over.
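The two-way invariance is easy to check numerically. The sketch below (my own illustration, not from the article) uses the Reichenbach/Winnie synchrony parameter ε, under which the outbound one-way speed is c/(2ε) and the return speed is c/(2(1−ε)); ε = 1/2 recovers the standard isotropic (Einstein) convention. The round-trip speed comes out to c for every ε, which is why no round-trip experiment can pin down the one-way speed:

```python
# Two-way light speed under the Reichenbach/Winnie synchrony parameter
# epsilon: outbound one-way speed c/(2*epsilon), return c/(2*(1-epsilon)).
# epsilon = 1/2 is Einstein (isotropic) synchrony.

C = 299_792_458.0  # m/s

def two_way_speed(epsilon: float, d: float = 1.0) -> float:
    """Round-trip speed over distance d for a given synchrony choice."""
    t_out = d * (2 * epsilon) / C
    t_back = d * (2 * (1 - epsilon)) / C
    return 2 * d / (t_out + t_back)

# The anisotropy cancels: t_out + t_back = 2d/c regardless of epsilon,
# so any round-trip measurement returns c.
for eps in (0.5, 0.3, 0.9, 0.01):
    assert abs(two_way_speed(eps) - C) < 1e-3
```

Varying ε only redistributes travel time between the outbound and return legs; their sum is invariant.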

SPECIAL RELATIVITY WITHOUT ONE-WAY VELOCITY ASSUMPTIONS:
https://sci-hub.bz/https://www.jstor.org/stable/186029
https://sci-hub.bz/https://www.jstor.org/stable/186671
nibble  scitariat  org:bleg  physics  relativity  electromag  speed  invariance  absolute-relative  curiosity  philosophy  direction  gedanken  axioms  definition  models  experiment  space  science  measurement  volo-avolo  synchrony  uniqueness  multi  pdf  piracy  study  article
november 2017 by nhaliday
Estimation of effect size distribution from genome-wide association studies and implications for future discoveries
We report a set of tools to estimate the number of susceptibility loci and the distribution of their effect sizes for a trait on the basis of discoveries from existing genome-wide association studies (GWASs). We propose statistical power calculations for future GWASs using estimated distributions of effect sizes. Using reported GWAS findings for height, Crohn’s disease and breast, prostate and colorectal (BPC) cancers, we determine that each of these traits is likely to harbor additional loci within the spectrum of low-penetrance common variants. These loci, which can be identified from sufficiently powerful GWASs, together could explain at least 15–20% of the known heritability of these traits. However, for BPC cancers, which have modest familial aggregation, our analysis suggests that risk models based on common variants alone will have modest discriminatory power (63.5% area under curve), even with new discoveries.

later paper:
Distribution of allele frequencies and effect sizes and their interrelationships for common genetic susceptibility variants: http://www.pnas.org/content/108/44/18026.full

Recent discoveries of hundreds of common susceptibility SNPs from genome-wide association studies provide a unique opportunity to examine population genetic models for complex traits. In this report, we investigate distributions of various population genetic parameters and their interrelationships using estimates of allele frequencies and effect-size parameters for about 400 susceptibility SNPs across a spectrum of qualitative and quantitative traits. We calibrate our analysis by statistical power for detection of SNPs to account for overrepresentation of variants with larger effect sizes in currently known SNPs that are expected due to statistical power for discovery. Across all qualitative disease traits, minor alleles conferred “risk” more often than “protection.” Across all traits, an inverse relationship existed between “regression effects” and allele frequencies. Both of these trends were remarkably strong for type I diabetes, a trait that is most likely to be influenced by selection, but were modest for other traits such as human height or late-onset diseases such as type II diabetes and cancers. Across all traits, the estimated effect-size distribution suggested the existence of increasingly large numbers of susceptibility SNPs with decreasingly small effects. For most traits, the set of SNPs with intermediate minor allele frequencies (5–20%) contained an unusually small number of susceptibility loci and explained a relatively small fraction of heritability compared with what would be expected from the distribution of SNPs in the general population. These trends could have several implications for future studies of common and uncommon variants.

...

Relationship Between Allele Frequency and Effect Size. We explored the relationship between allele frequency and effect size in different scales. An inverse relationship between the squared regression coefficient and f(1 − f) was observed consistently across different traits (Fig. 3). For a number of these traits, however, the strengths of these relationships become less pronounced after adjustment for ascertainment due to study power. The strength of the trend, as captured by the slope of the fitted line (Table 2), markedly varies between traits, with an almost 10-fold change between the two extremes of distinct types of traits. After adjustment, the most pronounced trend was seen for type I diabetes and Crohn’s disease among qualitative traits and LDL level among quantitative traits. In exploring the relationship between the frequency of the risk allele and the magnitude of the associated risk coefficient (Fig. S4), we observed a quadratic pattern that indicates increasing risk coefficients as the risk-allele frequency diverges away from 0.50 either toward 0 or toward 1. Thus, it appears that regression coefficients for common susceptibility SNPs increase in magnitude monotonically with decreasing minor-allele frequency, irrespective of whether the minor allele confers risk or protection. However, for some traits, such as type I diabetes, risk alleles were predominantly minor alleles, that is, they had frequencies of less than 0.50.
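A back-of-the-envelope version of these power and effect-size calculations can be sketched in code. The sketch below is my own illustration (the standard 1-df chi-square test via a normal approximation, not the paper's exact method): a SNP with minor allele frequency f and standardized effect β explains 2f(1−f)β² of trait variance, giving a non-centrality parameter of N·2f(1−f)β². Holding variance explained fixed forces β² ∝ 1/(2f(1−f)), which is the inverse relationship between squared regression coefficients and allele frequency described above.

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_threshold(alpha: float) -> float:
    """Two-sided z cutoff for significance level alpha (via bisection)."""
    lo, hi = 0.0, 40.0
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if 2 * (1 - phi(mid)) > alpha else (lo, mid)
    return (lo + hi) / 2

def gwas_power(n: int, maf: float, beta: float, alpha: float = 5e-8) -> float:
    """Approximate power for a standardized additive effect beta at minor
    allele frequency maf; non-centrality = n * 2*maf*(1-maf) * beta**2."""
    s = math.sqrt(n * 2 * maf * (1 - maf) * beta ** 2)
    z = z_threshold(alpha)
    return phi(s - z) + phi(-s - z)

# Holding variance explained v = 2*maf*(1-maf)*beta^2 fixed forces beta^2
# to scale as 1/(2*maf*(1-maf)): rarer alleles need larger effects.
v = 0.0005  # 0.05% of trait variance, a hypothetical value
betas = {maf: math.sqrt(v / (2 * maf * (1 - maf))) for maf in (0.5, 0.2, 0.05)}
assert betas[0.05] > betas[0.2] > betas[0.5]
powers = [gwas_power(100_000, maf, b) for maf, b in betas.items()]
assert max(powers) - min(powers) < 1e-6  # equal variance -> equal power
```

At N = 100,000 and genome-wide significance (α = 5×10⁻⁸), a SNP explaining 0.05% of trait variance is detectable with high power regardless of its allele frequency, which is what makes effect-size distributions estimable from discovery counts.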
pdf  nibble  study  article  org:nat  🌞  biodet  genetics  population-genetics  GWAS  QTL  distribution  disease  cancer  stat-power  bioinformatics  magnitude  embodied  prediction  scale  scaling-up  variance-components  multi  missing-heritability  effect-size  regression  correlation  data
november 2017 by nhaliday
Use and Interpretation of LD Score Regression
LD Score regression distinguishes confounding from polygenicity in genome-wide association studies: https://sci-hub.bz/10.1038/ng.3211
- Po-Ru Loh, Nick Patterson, et al.

https://www.biorxiv.org/content/biorxiv/early/2014/02/21/002931.full.pdf

Both polygenicity (i.e. many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield inflated distributions of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from bias and true signal from polygenicity. We have developed an approach that quantifies the contributions of each by examining the relationship between test statistics and linkage disequilibrium (LD). We term this approach LD Score regression. LD Score regression provides an upper bound on the contribution of confounding bias to the observed inflation in test statistics and can be used to estimate a more powerful correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size.
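The core regression can be illustrated with a toy simulation. Under the model described in the abstract, E[χ²_j] = (N·h²/M)·ℓ_j + N·a + 1, where ℓ_j is SNP j's LD score and a absorbs confounding such as stratification: the slope recovers heritability, while the intercept in excess of 1 bounds the bias. The sketch below is my own illustration with made-up parameter values and Gaussian noise standing in for the true noncentral chi-square sampling variance:

```python
import random

random.seed(0)

# Toy LD Score regression: E[chi2_j] = (N*h2/M)*l_j + N*a + 1, where l_j
# is SNP j's LD score and a captures confounding (e.g. stratification).
# All parameter values are hypothetical.
N, M, h2, a = 50_000, 1_000_000, 0.5, 4e-6
snps = 20_000
ld = [random.uniform(1, 200) for _ in range(snps)]
chi2 = [1 + N * a + N * h2 * l / M + random.gauss(0, 1) for l in ld]

# ordinary least squares of chi2 statistics on LD scores
mean_l = sum(ld) / snps
mean_c = sum(chi2) / snps
slope = (sum((l - mean_l) * (c - mean_c) for l, c in zip(ld, chi2))
         / sum((l - mean_l) ** 2 for l in ld))
intercept = mean_c - slope * mean_l

h2_hat = slope * M / N  # polygenic signal -> heritability estimate
bias = intercept - 1    # inflation NOT explained by LD: approx N*a

assert abs(h2_hat - h2) < 0.05
assert abs(bias - N * a) < 0.1  # recovers the confounding term
```

The key contrast with genomic control is visible here: true polygenic signal inflates test statistics in proportion to LD score, whereas confounding inflates them uniformly, so only the intercept, not the mean χ², measures bias. (The real ldsc tool uses weighted regression and blocked jackknife standard errors on top of this.)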

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n3/extref/ng.3211-S1.pdf

An atlas of genetic correlations across human diseases
and traits: https://sci-hub.bz/10.1038/ng.3406

https://www.biorxiv.org/content/early/2015/01/27/014498.full.pdf

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n11/extref/ng.3406-S1.pdf

https://github.com/bulik/ldsc
ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. ldsc also computes LD Scores.
nibble  pdf  slides  talks  bio  biodet  genetics  genomics  GWAS  genetic-correlation  correlation  methodology  bioinformatics  concept  levers  🌞  tutorial  explanation  pop-structure  gene-drift  ideas  multi  study  org:nat  article  repo  software  tools  libraries  stats  hypothesis-testing  biases  confounding  gotchas  QTL  simulation  survey  preprint  population-genetics
november 2017 by nhaliday
Fitting a Structural Equation Model
seems rather unrigorous: nonlinear optimization, possibility of nonconvergence, doesn't even mention local vs. global optimality...
pdf  slides  lectures  acm  stats  hypothesis-testing  graphs  graphical-models  latent-variables  model-class  optimization  nonlinearity  gotchas  nibble  ML-MAP-E  iteration-recursion  convergence
november 2017 by nhaliday
ON THE ORIGIN OF STATES: STATIONARY BANDITS AND TAXATION IN EASTERN CONGO
As a foundation for this study, I organized the collection of village-level panel data on violent actors, managing teams of surveyors, village elders, and households in 380 war-torn areas of DRC. I introduce optimal taxation theory to the decision of violent actors to establish local monopolies of violence. The value of such decision hinges on their ability to tax the local population. A sharp rise in the global demand for coltan, a bulky commodity used in the electronics industry, leads violent actors to impose monopolies of violence and taxation in coltan sites, which persist even years after demand collapses. A similar rise in the demand for gold, easier to conceal and more difficult to tax, does not. However, the groups who nevertheless control gold sites are more likely to respond by undertaking investments in fiscal capacity, consistent with the difficulty of observing gold, and with well-documented trajectories of state formation in Europe (Ardant, 1975). The findings support the view that the expected revenue from taxation, determined in particular by tax base elasticity and costly investments in fiscal capacity, can explain the stages of state formation preceding the states as we recognize them today.
pdf  study  economics  growth-econ  broad-econ  political-econ  polisci  leviathan  north-weingast-like  unintended-consequences  institutions  microfoundations  econometrics  empirical  government  taxes  rent-seeking  supply-demand  incentives  property-rights  africa  developing-world  peace-violence  interests  longitudinal  natural-experiment  endogenous-exogenous  archaeology  trade  world  feudal  roots  ideas  cost-benefit  econ-productivity  traces
november 2017 by nhaliday
King Kong and Cold Fusion: Counterfactual analysis and the History of Technology
How “contingent” is technological history? Relying on models from evolutionary epistemology, I argue for an analogy with Darwinian Biology and thus a much greater degree of contingency than is normally supposed. There are three levels of contingency in technological development. The crucial driving force behind technology is what I call S-knowledge, that is, an understanding of the exploitable regularities of nature (which includes “science” as a subset). The development of techniques depends on the existence of epistemic bases in S. The “inevitability” of technology thus depends crucially on whether we condition it on the existence of the appropriate S-knowledge. Secondly, even if this knowledge emerges, there is nothing automatic about it being transformed into a technique, that is, a set of instructions that transforms knowledge into production. Third, even if the techniques are proposed, there is selection which reflects the preferences and biases of an economy and injects another level of indeterminacy and contingency into the technological history of nations.

https://archive.is/MBmyV
Moslem conquest of Europe, or a Mongol conquest, or a post-1492 epidemic, or a victory of the counter-reformation would have prevented the Industrial Revolution (Joel Mokyr)
pdf  study  essay  economics  growth-econ  broad-econ  microfoundations  history  medieval  early-modern  industrial-revolution  divergence  volo-avolo  random  mokyr-allen-mccloskey  wealth-of-nations  europe  the-great-west-whale  occident  path-dependence  roots  knowledge  technology  society  multi  twitter  social  commentary  backup  conquest-empire  war  islam  MENA  disease  parasites-microbiome  counterfactual  age-of-discovery  enlightenment-renaissance-restoration-reformation  usa  scitariat  gnon  degrees-of-freedom
november 2017 by nhaliday
- Patterson, Reich et al., 2012
Population mixture is an important process in biology. We present a suite of methods for learning about population mixtures, implemented in a software package called ADMIXTOOLS, that support formal tests for whether mixture occurred and make it possible to infer proportions and dates of mixture. We also describe the development of a new single nucleotide polymorphism (SNP) array consisting of 629,433 sites with clearly documented ascertainment that was specifically designed for population genetic analyses and that we genotyped in 934 individuals from 53 diverse populations. To illustrate the methods, we give a number of examples that provide new insights about the history of human admixture. The most striking finding is a clear signal of admixture into northern Europe, with one ancestral population related to present-day Basques and Sardinians and the other related to present-day populations of northeast Asia and the Americas. This likely reflects a history of admixture between Neolithic migrants and the indigenous Mesolithic population of Europe, consistent with recent analyses of ancient bones from Sweden and the sequencing of the genome of the Tyrolean “Iceman.”
nibble  pdf  study  article  methodology  bio  sapiens  genetics  genomics  population-genetics  migration  gene-flow  software  trees  concept  history  antiquity  europe  roots  gavisti  🌞  bioinformatics  metrics  hypothesis-testing  levers  ideas  libraries  tools  pop-structure
november 2017 by nhaliday
SEXUAL DIMORPHISM, SEXUAL SELECTION, AND ADAPTATION IN POLYGENIC CHARACTERS - Lande - 1980 - Evolution - Wiley Online Library
https://archive.is/mcKvr
Lol, that's nothing, my biology teacher in high school told me sex differences couldn't evolve since all of us inherit 50% of genes from parents of both sexes. Being a raucous hispanic kid I burst out laughing; she was not pleased
--
Sex differences actually evolve more slowly because of that: something like 80 times more slowly.
...
Doesn't have that number, but in the same ballpark.

Sexual Dimorphism, Sexual Selection, And Adaptation In Polygenic Characters

Russell Lande

https://archive.is/AR8FY
I believe it, because sex differences [ in cases where the trait is not sex-limited ] evolve far more slowly than other things, on the order of 100 times more slowly. Lande 1980: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1558-5646.1980.tb04817.x

The deep past has a big vote in such cases.
...
as for the extent that women were voluntarily choosing mates 20k years ago, or 100k years ago - I surely don't know.

other time mentioned: https://pinboard.in/u:nhaliday/b:3a7c5b42dd50
study  article  bio  biodet  gender  gender-diff  evolution  genetics  population-genetics  methodology  nibble  sex  🌞  todo  pdf  piracy  marginal  comparison  pro-rata  data  multi  twitter  social  discussion  backup  west-hunter  scitariat  farmers-and-foragers  sexuality  evopsych  EEA
november 2017 by nhaliday
Friedrich von Hayek, “The Use of Knowledge in Society” (1945)
“The price system is just one of those formations which man has learned to use ... Through it not only a division of labor but also a coördinated utilization of resources based on an equally divided knowledge has become possible.”

“there is beyond question a body of very important but unorganized knowledge which cannot possibly be called scientific in the sense of knowledge of general rules: the knowledge of the particular circumstances of time and place. It is with respect to this that practically every individual has some advantage over all others because he possesses unique information of which beneficial use might be made”
pdf  org:junk  org:ngo  randy-ayndy  essay  big-peeps  economics  rhetoric  classic  markets  capitalism  coordination  info-dynamics  knowledge  bounded-cognition  supply-demand  decentralized  civil-liberty  institutions  quotes  reason
november 2017 by nhaliday
Gender differences in occupational distributions among workers
Women in the Work Force: https://www.theatlantic.com/magazine/archive/1986/09/women-in-the-work-force/304924/
Gender disparity in the workplace might have less to do with discrimination than with women making the choice to stay at home
pdf  org:gov  white-paper  data  database  economics  labor  gender  gender-diff  distribution  dysgenics  multi  news  org:mag  discrimination  demographics
november 2017 by nhaliday
Politics with Hidden Bases: Unearthing the Deep Roots of Party Systems
The research presented here uses a novel method to show that contemporary party systems may originate much further back than is usually assumed or might be expected—in reality many centuries. Using data on Ireland, a country with a political system that poses significant challenges to the universality of many political science theories, and identifying the ancestry of current party elites, we find ethnic bases for the Irish party system arising from population movements that took place from the 12th century. Extensive Irish genealogical knowledge allows us to use surnames as a proxy for ethnic origin. Recent genetic analyses of Irish surnames corroborate Irish genealogical information. The results are particularly compelling given that Ireland is an extremely homogeneous society and therefore provides a tough case for our approach.
pdf  study  broad-econ  polisci  sociology  politics  government  correlation  path-dependence  cliometrics  anglo  britain  history  mostly-modern  time-series  pro-rata  distribution  demographics  coalitions  pop-structure  branches  hari-seldon
november 2017 by nhaliday
THE BIG FIVE PERSONALITY TRAITS AND PARTISANSHIP IN ENGLAND
We find that supporters of the major parties (Labour, the Conservatives and the Liberal Democrats) have substantively different personality traits. Moreover, we show that those not identifying with any party, who are close to holding the majority, are similar to those identifying with the Conservatives. We show that these results are robust to controlling for cognitive skills and parental party preferences, and to estimation on a subsample of siblings. The relationship between personality traits and party identification is stable across birth cohorts.

Table 2: Big Five Personality Traits: Predictions.
Figure 3: Relationship between personality traits and stable party identification

Conservative core supporters are antagonistic towards others (low Agreeableness), they are closed to new experiences (low Openness), they are energetic and enthusiastic (high Extraversion), they are goal-orientated (high Conscientiousness), and they are even-tempered (low Neuroticism).

In contrast, the core supporters of the Labour Party have a pro-social and communal attitude (high Agreeableness), they are open to new experiences and ideas (high Openness), but they are more anxious, tense and discontented (high Neuroticism) and less prone to goal-directed behavior (low Conscientiousness). The core supporters of the Liberal Democrats have similar traits to the typical Labour supporters with two exceptions. First, they do not show any particular tendency towards pro-social and communal attitudes (insignificant Agreeableness). Second, they are more reserved and introverted than the more extraverted supporters of either the Conservatives or Labour (low Extraversion).

Psychological and Personality Profiles of Political Extremists: https://arxiv.org/pdf/1704.00119.pdf
We revisit the debate over the appeal of extremism in the U.S. context by comparing publicly available Twitter messages written by over 355,000 political extremist followers with messages written by non-extremist U.S. users. Analysis of text-based psychological indicators supports the moral foundation theory which identifies emotion as a critical factor in determining political orientation of individuals. Extremist followers also differ from others in four of the Big Five personality traits.

Fig. 2. Comparing psychological profiles of the followers of moderate and extremist single-issue groups, compared to random users.

Overall, the differences in psychological profile between followers of extremist and moderate groups is much larger for left-wing extremists (environmentalists) than right-wing (anti-abortion and anti-immigrant).

Fig. 3. Big Five Personality Profiles.

Results show that extremist followers (whether left or right) are less agreeable, less neurotic, and more open than nonextremists.

Ideology as Motivated Cultural Cognition: How Culture Translates Personality into Policy Preferences: https://www.psa.ac.uk/sites/default/files/conference/papers/2017/Ideology%20as%20Motivated%20Cultural%20Cognition.pdf
This paper summarises the results of a quantitative analysis testing the theory that culture acts as an intermediary in the relationship between individual perceptual tendencies and political orientation. Political psychologists have long observed that more “left-wing” individuals tend to be more comfortable than “right-wing” individuals with ambiguity, disorder, and uncertainty, to equivocate more readily between conflicting viewpoints, and to be more willing to change their opinions. These traits are often summarised under the blanket term of “open-mindedness”. A recent increase in cross-cultural studies, however, has indicated that these relationships are far less robust, and even reversed, in social contexts outside of North America and Western Europe. The sociological concept of culture may provide an answer to this inconsistency: emergent idea-networks, irreducible to individuals, which nonetheless condition psychological motivations, so that perceptual factors resulting in left-wing preferences in one culture may result in opposing preferences in another. The key is that open-mindedness leads individuals to attack the dominant ideas which they encounter: if prevailing orthodoxies happen to be left-wing, then open minded individuals may become right-wing in protest. Using conditional process analysis of the British Election Study, I find evidence for three specific mechanisms whereby culture interferes with perceptual influences on politics. Conformity to the locally dominant culture mediates these influences, in the sense that open-minded people in Britain are only more left-wing because they are less culturally conformal. This relationship is itself moderated both by cultural group membership and by Philip Converse’s notion of “constraint”, individual-level connectivity between ideas, such that the strength of perceptual influence differs significantly between cultural groups and between levels of constraint to the idea of the political spectrum. 
Overall, I find compelling evidence for the importance of culture in shaping perceptions of policy choices.
pdf  study  polisci  sociology  politics  ideology  personality  psych-architecture  correlation  britain  coalitions  phalanges  data  things  multi  preprint  psychology  social-psych  cog-psych  culture-war  gnon  🐸  subculture  objective-measure  demographics  org:mat  creative  culture  society  cultural-dynamics  anthropology  hari-seldon  discipline  extra-introversion  stress  individualism-collectivism  expression-survival  values  poll  chart  curiosity  open-closed
november 2017 by nhaliday
Climate Risk, Cooperation, and the Co-Evolution of Culture and Institutions∗
We test this hypothesis for Europe combining high-resolution climate data for the period 1500-2000 with survey data at the sub-national level. We find that regions with higher inter-annual variability in precipitation and temperature display higher levels of trust. This effect is driven by variability in the growing season months, and by historical rather than recent variability. Regarding possible mechanisms, we show that regions with more variable climate were more closely connected to the Medieval trade network, indicating a higher propensity to engage in inter-community exchange. We also find that these regions were more likely to adopt participatory political institutions earlier on, and are characterized by a higher quality of local governments still today. Our results suggest that, by favoring the emergence of mutually-reinforcing norms and institutions, exposure to environmental risk had a long-lasting impact on human cooperation.
pdf  study  broad-econ  economics  cliometrics  path-dependence  growth-econ  political-econ  institutions  government  social-norms  culture  cultural-dynamics  correlation  history  early-modern  mostly-modern  values  poll  trust  n-factor  cooperate-defect  cohesion  democracy  environment  europe  the-great-west-whale  geography  trade  network-structure  general-survey  outcome-risk  uncertainty  branches  microfoundations  hari-seldon
november 2017 by nhaliday
Review of Yuval Harari's Sapiens: A Brief History of Humankind.
https://archive.is/MPO5Q
Yuval Harari's prominent book Sapiens: A Brief History of Humankind gets a thorough and well deserved fisking by C.R. Hallpike.

For Harari the great innovation that separated us from the apes was what he calls the Cognitive Revolution, around 70,000 years ago when we started migrating out of Africa, which he thinks gave us the same sort of modern minds that we have now. 'At the individual level, ancient foragers were the most knowledgeable and skilful people in history...Survival in that era required superb mental abilities from everyone' (55), and 'The people who carved the Stadel lion-man some 30,000 years ago had the same physical, emotional, and intellectual abilities we have' (44). Not surprisingly, then, 'We'd be able to explain to them everything we know - from the adventures of Alice in Wonderland to the paradoxes of quantum physics - and they could teach us how their people view the world' (23).

It's a sweet idea, and something like this imagined meeting actually took place a few years ago between the linguist Daniel Everett and the Piraha foragers of the Amazon in Peru (Everett 2008). But far from being able to discuss quantum theory with them, he found that the Piraha couldn't even count, and had no numbers of any kind. They could teach Everett how they saw the world, which was entirely confined to the immediate experience of the here-and-now, with no interest in past or future, or really in anything that could not be seen or touched. They had no myths or stories, so Alice in Wonderland would have fallen rather flat as well.

...

Summing up the book as a whole, one has often had to point out how surprisingly little he seems to have read on quite a number of essential topics. It would be fair to say that whenever his facts are broadly correct they are not new, and whenever he tries to strike out on his own he often gets things wrong, sometimes seriously. So we should not judge Sapiens as a serious contribution to knowledge but as 'infotainment', a publishing event to titillate its readers by a wild intellectual ride across the landscape of history, dotted with sensational displays of speculation, and ending with blood-curdling predictions about human destiny. By these criteria it is a most successful book.
pdf  books  review  expert-experience  critique  sapiens  history  antiquity  anthropology  multi  twitter  social  scitariat  commentary  quotes  attaq  westminster  backup  culture  realness  farmers-and-foragers  language  egalitarianism-hierarchy  inequality  learning  absolute-relative  malthus  tribalism  kinship  leviathan  government  leadership  volo-avolo  social-structure  taxes  studying  technology  religion  theos  sequential  universalism-particularism  antidemos  revolution  enlightenment-renaissance-restoration-reformation  science  europe  the-great-west-whale  age-of-discovery  iron-age  mediterranean  the-classics  reason  empirical  experiment  early-modern  islam  MENA  civic  institutions  the-trenches  innovation  agriculture  gnon
november 2017 by nhaliday
The political economy of fertility | SpringerLink
This paper studies the political economy of fertility. Specifically, I argue that fertility may be a strategic choice for ethnic groups engaged in redistributive conflict. I first present a simple conflict model where high fertility is optimal for each ethnic group if and only if the economy’s ethnic diversity is high, institutions are weak, or both. I then test the model in a cross-national dataset. Consistent with the theory, I find that economies where the product of ethnic diversity and a measure of institutional weakness is high have increased fertility rates. I conclude that fertility may depend on political factors.
study  sociology  speculation  stylized-facts  demographics  population  fertility  polisci  political-econ  institutions  nationalism-globalism  tribalism  us-them  self-interest  intervention  wonkish  pdf  piracy  microfoundations  phalanges  diversity  putnam-like  competition  israel  MENA  the-bones
november 2017 by nhaliday
The weirdest people in the world?
Abstract: Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.
pdf  study  microfoundations  anthropology  cultural-dynamics  sociology  psychology  social-psych  cog-psych  iq  biodet  behavioral-gen  variance-components  psychometrics  psych-architecture  visuo  spatial  morality  individualism-collectivism  n-factor  justice  egalitarianism-hierarchy  cooperate-defect  outliers  homo-hetero  evopsych  generalization  henrich  europe  the-great-west-whale  occident  organizing  🌞  universalism-particularism  applicability-prereqs  hari-seldon  extrema  comparison  GT-101  ecology  EGT  reinforcement  anglo  language  gavisti  heavy-industry  marginal  absolute-relative  reason  stylized-facts  nature  systematic-ad-hoc  analytical-holistic  science  modernity  behavioral-econ  s:*  illusion  cool  hmm  coordination  self-interest  social-norms  population  density  humanity  sapiens  farmers-and-foragers  free-riding  anglosphere  cost-benefit  china  asia  sinosphere  MENA  world  developing-world  neurons  theory-of-mind  network-structure  nordic  orient  signum  biases  usa  optimism  hypocrisy  humility  within-without  volo-avolo  domes
november 2017 by nhaliday
The Wilson Effect: the increase in heritability of IQ with age. - PubMed - NCBI
FIGURE 2 Estimates of genetic and shared environmental influence on g by age. The age scale is not linear (see text for details).
study  biodet  behavioral-gen  iq  psychology  cog-psych  metabuch  stylized-facts  variance-components  developmental  data  visualization  twin-study  correlation  🌞  pdf  piracy  age-generation  plots  psychometrics
november 2017 by nhaliday
Darwinian medicine - Randolph Nesse
The Dawn of Darwinian Medicine: https://sci-hub.tw/https://www.jstor.org/stable/2830330
TABLE 1 Examples of the use of the theory of natural selection to predict the existence of phenomena otherwise unsuspected
TABLE 2 A classification of phenomena associated with infectious disease
research-program  homepage  links  list  study  article  bio  medicine  disease  parasites-microbiome  epidemiology  evolution  darwinian  books  west-hunter  scitariat  🌞  red-queen  ideas  deep-materialism  biodet  EGT  heterodox  essay  equilibrium  incentives  survey  track-record  priors-posteriors  data  paying-rent  being-right  immune  multi  pdf  piracy  EEA  lens  nibble  🔬  maxim-gun
november 2017 by nhaliday
RBC Methodology and the Development of Aggregate Economic Theory
https://archive.is/S5oqD
https://archive.is/7ZnEH
Nobelist Ed Prescott illustrates how practical, surprising, and wise neoclassical growth theory--a.k.a. RBC--can be:

https://archive.is/07XrP
On the Equity Premium Puzzle (tm), or why do stocks earn so much more than bonds when stocks don't appear all that risky?
pdf  org:gov  economics  macro  study  white-paper  article  methodology  cycles  growth-econ  models  complex-systems  map-territory  multi  twitter  social  commentary  backup  econotariat  garett-jones  empirical  regularizer  evidence-based  occam  parsimony
october 2017 by nhaliday
Global Evidence on Economic Preferences
- Benjamin Enke et al

This paper studies the global variation in economic preferences. For this purpose, we present the Global Preference Survey (GPS), an experimentally validated survey dataset of time preference, risk preference, positive and negative reciprocity, altruism, and trust from 80,000 individuals in 76 countries. The data reveal substantial heterogeneity in preferences across countries, but even larger within-country heterogeneity. Across individuals, preferences vary with age, gender, and cognitive ability, yet these relationships appear partly country specific. At the country level, the data reveal correlations between preferences and bio-geographic and cultural variables such as agricultural suitability, language structure, and religion. Variation in preferences is also correlated with economic outcomes and behaviors. Within countries and subnational regions, preferences are linked to individual savings decisions, labor market choices, and prosocial behaviors. Across countries, preferences vary with aggregate outcomes ranging from per capita income, to entrepreneurial activities, to the frequency of armed conflicts.

...

This paper explores these questions by making use of the core features of the GPS: (i) coverage of 76 countries that represent approximately 90 percent of the world population; (ii) representative population samples within each country for a total of 80,000 respondents; (iii) measures designed to capture time preference, risk preference, altruism, positive reciprocity, negative reciprocity, and trust, based on an ex ante experimental validation procedure (Falk et al., 2016) as well as pre-tests in culturally heterogeneous countries; (iv) standardized elicitation and translation techniques through the pre-existing infrastructure of a global polling institute, Gallup. Upon publication, the data will be made publicly available online. The data on individual preferences are complemented by a comprehensive set of covariates provided by the Gallup World Poll 2012.

...

The GPS preference measures are based on twelve survey items, which were selected in an initial survey validation study (see Falk et al., 2016, for details). The validation procedure involved conducting multiple incentivized choice experiments for each preference, and testing the relative abilities of a wide range of different question wordings and formats to predict behavior in these choice experiments. The particular items used to construct the GPS preference measures were selected based on optimal performance out of menus of alternative items (for details see Falk et al., 2016). Experiments provide a valuable benchmark for selecting survey items, because they can approximate the ideal choice situations, specified in economic theory, in which individuals make choices in controlled decision contexts. Experimental measures are very costly, however, to implement in a globally representative sample, whereas survey measures are much less costly.⁴ Selecting survey measures that can stand in for incentivized revealed preference measures leverages the strengths of both approaches.

The Preference Survey Module: A Validated Instrument for Measuring Risk, Time, and Social Preferences: http://ftp.iza.org/dp9674.pdf

Table 1: Survey items of the GPS

Figure 1: World maps of patience, risk taking, and positive reciprocity.
Figure 2: World maps of negative reciprocity, altruism, and trust.

Figure 3: Gender coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting gender coefficients as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.

Figure 4: Cognitive ability coefficients by country. For each country, we regress the respective preference on gender, age and its square, and subjective math skills, and plot the resulting coefficients on subjective math skills as well as their significance level. In order to make countries comparable, each preference was standardized (z-scores) within each country before computing the coefficients.
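The standardization step behind these two figures can be sketched as follows, on toy data with a single 0/1 gender covariate (the paper also controls for age, age squared, and subjective math skills; the numbers below are fabricated for illustration):

```python
# Z-score each preference within country, then regress it on covariates, so
# the resulting coefficients are comparable across countries.
from statistics import mean, pstdev

def zscore(xs):
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def gender_coef(pref, female):
    """OLS slope of the within-country-standardized preference on a 0/1
    female indicator (single-covariate simplification)."""
    z = zscore(pref)
    mf = mean(female)
    num = sum((f - mf) * zi for f, zi in zip(female, z))
    den = sum((f - mf) ** 2 for f in female)
    return num / den

# one toy "country": women somewhat more risk-averse in this fabricated sample
pref   = [0.9, 1.1, 0.8, 1.0, 0.4, 0.5, 0.6, 0.3]
female = [0,   0,   0,   0,   1,   1,   1,   1]
print(f"gender coefficient: {gender_coef(pref, female):+.2f}")
```

Because the preference is standardized before the regression, the coefficient is in within-country standard-deviation units, which is what makes the per-country dots in Figures 3 and 4 comparable.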

Figure 5: Age profiles by OECD membership.

Table 6: Pairwise correlations between preferences and geographic and cultural variables

Figure 10: Distribution of preferences at individual level.
Figure 11: Distribution of preferences at country level.

interesting digression:
D Discussion of Measurement Error and Within- versus Between-Country Variation
study  dataset  data  database  let-me-see  economics  growth-econ  broad-econ  microfoundations  anthropology  cultural-dynamics  culture  psychology  behavioral-econ  values  🎩  pdf  piracy  world  spearhead  general-survey  poll  group-level  within-group  variance-components  🌞  correlation  demographics  age-generation  gender  iq  cooperate-defect  time-preference  temperance  labor  wealth  wealth-of-nations  entrepreneurialism  outcome-risk  altruism  trust  patience  developing-world  maps  visualization  n-factor  things  phalanges  personality  regression  gender-diff  pop-diff  geography  usa  canada  anglo  europe  the-great-west-whale  nordic  anglosphere  MENA  africa  china  asia  sinosphere  latin-america  self-report  hive-mind  GT-101  realness  long-short-run  endo-exo  signal-noise  communism  japan  korea  methodology  measurement  org:ngo  white-paper  endogenous-exogenous  within-without  hari-seldon
october 2017 by nhaliday