CakeML
august 2019 by nhaliday
some interesting job openings in Sydney listed here
programming
pls
plt
functional
ocaml-sml
formal-methods
rigor
compilers
types
numerics
accuracy
estimate
research-program
homepage
anglo
jobs
tech
cool
august 2019 by nhaliday
Treadmill desk observations - Gwern.net
august 2019 by nhaliday
Notes relating to my use of a treadmill desk and 2 self-experiments showing walking treadmill use interferes with typing and memory performance.
...
While the result seems highly likely to be true for me, I don’t know how well it might generalize to other people. For example, perhaps more fit people can use a treadmill without harm and the negative effect is due to the treadmill usage tiring & distracting me; I try to walk 2 miles a day, but that’s not much compared to some people.
Given this harmful impact, I will avoid doing spaced repetition on my treadmill in the future, and given this & the typing result, will relegate any computer+treadmill usage to non-intellectually-demanding work like watching movies. This turned out to not be a niche use I cared about and I hardly ever used my treadmill afterwards, so in October 2016 I sold my treadmill for $70. I might investigate standing desks next for providing some exercise beyond sitting but without the distracting movement of walking on a treadmill.
ratty
gwern
data
analysis
quantified-self
health
fitness
get-fit
working-stiff
intervention
cost-benefit
psychology
cog-psych
retention
iq
branches
keyboard
ergo
efficiency
accuracy
null-result
increase-decrease
experiment
hypothesis-testing
august 2019 by nhaliday
Errors in Math Functions (The GNU C Library)
july 2019 by nhaliday
https://stackoverflow.com/questions/22259537/guaranteed-precision-of-sqrt-function-in-c-c
For C99, there are no specific requirements. But most implementations try to support Annex F: IEC 60559 floating-point arithmetic as well as possible. It says:
An implementation that defines __STDC_IEC_559__ shall conform to the specifications in this annex.
And:
The sqrt functions in <math.h> provide the IEC 60559 square root operation.
IEC 60559 (equivalent to IEEE 754) says about basic operations like sqrt:
Except for binary <-> decimal conversion, each of the operations shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then coerced this intermediate result to fit in the destination's format.
The final step consists of rounding according to several rounding modes but the result must always be the closest representable value in the target precision.
[ed.: The list of other such correctly rounded functions is included in the IEEE-754 standard (which I've put w/ the C1x and C++2x standard drafts) under section 9.2, and it mainly consists of stuff that can be expressed in terms of exponentials (exp, log, trig functions, powers) along w/ sqrt/hypot functions.
Fun fact: this question was asked by Yeputons who has a codeforces profile.]
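A minimal sketch (in Python rather than C, assuming math.sqrt wraps an IEEE 754 double-precision libm sqrt) of what "correctly rounded" means: the returned double must be the representable value closest to the exact square root. The helper below checks this with exact rational arithmetic; the function name is made up for illustration.

```python
import math
from fractions import Fraction

def sqrt_is_correctly_rounded(x: float) -> bool:
    y = math.sqrt(x)
    lo = math.nextafter(y, 0.0)        # neighbouring doubles of y (Python 3.9+)
    hi = math.nextafter(y, math.inf)
    m1 = (Fraction(lo) + Fraction(y)) / 2   # midpoints to the two neighbours
    m2 = (Fraction(y) + Fraction(hi)) / 2
    # y is the double nearest the real sqrt(x) iff m1 <= sqrt(x) <= m2,
    # i.e. m1^2 <= x <= m2^2 (all quantities non-negative, so squaring is monotone)
    return m1 * m1 <= Fraction(x) <= m2 * m2

print(all(sqrt_is_correctly_rounded(float(i)) for i in range(1, 10_000)))
```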
https://stackoverflow.com/questions/20945815/math-precision-requirements-of-c-and-c-standard
oss
libraries
systems
c(pp)
numerics
documentation
objektbuch
list
linux
unix
multi
q-n-a
stackex
programming
nitty-gritty
sci-comp
accuracy
types
approximation
IEEE
protocol-metadata
gnu
july 2019 by nhaliday
An Eye Tracking Study on camelCase and under_score Identifier Styles - IEEE Conference Publication
july 2019 by nhaliday
One main difference is that subjects were trained mainly in the underscore style and were all programmers. While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly.
ToCamelCaseorUnderscore: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.158.9499
An empirical study of 135 programmers and non-programmers was conducted to better understand the impact of identifier style on code readability. The experiment builds on past work of others who study how readers of natural language perform such tasks. Results indicate that camel casing leads to higher accuracy among all subjects regardless of training, and those trained in camel casing are able to recognize identifiers in the camel case style faster than identifiers in the underscore style.
https://en.wikipedia.org/wiki/Camel_case#Readability_studies
A 2009 study comparing snake case to camel case found that camel case identifiers could be recognised with higher accuracy among both programmers and non-programmers, and that programmers already trained in camel case were able to recognise those identifiers faster than underscored snake-case identifiers.[35]
A 2010 follow-up study, under the same conditions but using an improved measurement method with use of eye-tracking equipment, indicates: "While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly."[36]
study
psychology
cog-psych
hci
programming
best-practices
stylized-facts
null-result
multi
wiki
reference
concept
empirical
evidence-based
efficiency
accuracy
time
code-organizing
grokkability
protocol-metadata
form-design
grokkability-clarity
july 2019 by nhaliday
Browse the State-of-the-Art in Machine Learning
aggregator machine-learning deep-learning ai benchmarks state-of-art ranking top-n competition nibble dataset computer-vision comparison marginal nlp speedometer frontier open-problems list papers links info-foraging graphs time-series audio games medicine applications accuracy foreign-lang arrows questions
june 2019 by nhaliday
june 2019 by nhaliday
classification - ImageNet: what is top-1 and top-5 error rate? - Cross Validated
june 2019 by nhaliday
Now, in the case of top-1 score, you check if the top class (the one having the highest probability) is the same as the target label.
In the case of top-5 score, you check if the target label is one of your top 5 predictions (the 5 ones with the highest probabilities).
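A minimal sketch of the two metrics, assuming a NumPy score matrix `probs` of shape (N, num_classes) and an integer label vector `labels`; the names are illustrative, not from any particular framework:

```python
import numpy as np

def top_k_accuracy(probs: np.ndarray, labels: np.ndarray, k: int) -> float:
    # indices of the k highest-scoring classes for each example
    topk = np.argsort(probs, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)   # is the target among them?
    return hits.mean()

probs = np.random.rand(8, 1000)            # e.g. ImageNet-style 1000 classes
labels = np.random.randint(0, 1000, 8)
print(top_k_accuracy(probs, labels, 1), top_k_accuracy(probs, labels, 5))
```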
nibble
q-n-a
overflow
machine-learning
deep-learning
metrics
comparison
ranking
top-n
classification
computer-vision
benchmarks
dataset
accuracy
error
jargon
june 2019 by nhaliday
What every computer scientist should know about floating-point arithmetic
may 2019 by nhaliday
Floating-point arithmetic is considered as esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.
https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).
This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
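A tiny illustration (values chosen ad hoc) of the two problems mentioned above, neither of which a blanket epsilon fixes:

```python
big, small = 1e16, 1.0
print(big + small == big)   # True: absorption; small is lost entirely in the rounding
a, b = 1.0000001, 1.0
diff = a - b                # the subtraction itself is exact (operands are close),
print(diff - 1e-7)          # but a's representation error, negligible relative to a,
                            # dominates relative to the tiny difference: cancellation
```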
...
Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.
https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html
This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
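A textbook sketch of the compensated (Kahan) summation described above; kahan_sum is an illustrative name, not a library function:

```python
import math

def kahan_sum(xs):
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c        # apply the correction carried over from the previous step
        t = s + y        # s is large, y small: y's low-order bits are lost here
        c = (t - s) - y  # algebraically zero; recovers what was just lost
        s = t
    return s

xs = [0.1] * 10**7
print(sum(xs), kahan_sum(xs), math.fsum(xs))  # naive sum drifts with n; the compensated
                                              # sum should agree with fsum to about 1 ulp
```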
cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.
In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε√(log n)) for pairwise summation.[2]
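A recursive sketch of pairwise summation as described; real implementations (e.g. NumPy's sum) switch to a plain loop below some block size so the cost stays close to naive summation:

```python
def pairwise_sum(xs, block=128):
    if len(xs) <= block:
        total = 0.0
        for x in xs:      # naive base case over a small block
            total += x
        return total
    mid = len(xs) // 2    # divide and conquer: sum each half, then add the two sums
    return pairwise_sum(xs[:mid], block) + pairwise_sum(xs[mid:], block)
```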
A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n) or even O(n²), which lead to corresponding errors in the FFT.
...
There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually <3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).
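A sketch of that two-table idea (not FFTW's actual code; the index split here uses div/mod by a block of size about √n rather than literal high/low bits, and the names are mine):

```python
import cmath, math

def make_twiddle_tables(n):
    m = math.isqrt(n - 1) + 1                                   # block size ~ sqrt(n)
    low  = [cmath.exp(-2j * math.pi * b / n) for b in range(m)]
    high = [cmath.exp(-2j * math.pi * a * m / n) for a in range((n + m - 1) // m)]
    return m, low, high                                         # ~sqrt(n) entries each

def twiddle(k, m, low, high):
    # w_n^k = w_n^(m*(k//m)) * w_n^(k%m): one complex multiply of two table entries,
    # each of which was computed directly to roughly machine precision
    return high[k // m] * low[k % m]

n, k = 1024, 777
m, low, high = make_twiddle_tables(n)
print(twiddle(k, m, low, high), cmath.exp(-2j * math.pi * k / n))
```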
[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble
pdf
papers
programming
systems
numerics
nitty-gritty
intricacy
approximation
accuracy
types
sci-comp
multi
q-n-a
stackex
hmm
oly-programming
accretion
formal-methods
yak-shaving
wiki
reference
algorithms
yoga
ground-up
divide-and-conquer
fourier
books
tidbits
chart
caltech
nostalgia
may 2019 by nhaliday
[1803.00085] Chinese Text in the Wild
may 2019 by nhaliday
We introduce Chinese Text in the Wild, a very large dataset of Chinese text in street view images.
...
We give baseline results using several state-of-the-art networks, including AlexNet, OverFeat, Google Inception and ResNet for character recognition, and YOLOv2 for character detection in images. Overall Google Inception has the best performance on recognition with 80.5% top-1 accuracy, while YOLOv2 achieves an mAP of 71.0% on detection. Dataset, source code and trained models will all be publicly available on the website.
nibble
pdf
papers
preprint
machine-learning
deep-learning
deepgoog
state-of-art
china
asia
writing
language
dataset
error
accuracy
computer-vision
pic
ocr
org:mat
benchmarks
questions
may 2019 by nhaliday
Basic Error Rates
may 2019 by nhaliday
This page describes human error rates in a variety of contexts.
Most of the error rates are for mechanical errors. A good general figure for mechanical error rates appears to be about 0.5%.
Of course the denominator differs across studies. However only fairly simple actions are used in the denominator.
The Klemmer and Snyder study shows that much lower error rates are possible--in this case for people whose job consisted almost entirely of data entry.
The error rate for more complex logic errors is about 5%, based primarily on data on other pages, especially the program development page.
org:junk
list
links
objektbuch
data
database
error
accuracy
human-ml
machine-learning
ai
pro-rata
metrics
automation
benchmarks
marginal
nlp
language
density
writing
dataviz
meta:reading
speedometer
may 2019 by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
april 2019 by nhaliday
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art, goes over this area in some detail.
I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:
Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.
Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/
If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).
Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.
[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.
edit: cf https://pinboard.in/u:nhaliday/b:0a6eb68166e6]
q-n-a
stackex
programming
engineering
nitty-gritty
error
flux-stasis
books
recommendations
software
checking
debugging
pro-rata
pls
comparison
parsimony
measure
data
objektbuch
speculation
accuracy
density
correctness
estimate
street-fighting
multi
quality
stylized-facts
methodology
april 2019 by nhaliday
Which benchmark programs are faster? | Computer Language Benchmarks Game
december 2018 by nhaliday
old:
https://salsa.debian.org/benchmarksgame-team/archive-alioth-benchmarksgame
https://web.archive.org/web/20170331153459/http://benchmarksgame.alioth.debian.org/
includes Scala
very outdated but more languages: https://web.archive.org/web/20110401183159/http://shootout.alioth.debian.org:80/
OCaml seems to offer the best tradeoff of performance vs parsimony (Haskell not so much :/)
https://blog.chewxy.com/2019/02/20/go-is-average/
http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependability-of.html
old official: https://web.archive.org/web/20130731195711/http://benchmarksgame.alioth.debian.org/u64q/code-used-time-used-shapes.php
https://web.archive.org/web/20121125103010/http://shootout.alioth.debian.org/u64q/code-used-time-used-shapes.php
Haskell does better here
other PL benchmarks:
https://github.com/kostya/benchmarks
BF 2.0:
Kotlin, C++ (GCC), Rust < Nim, D (GDC,LDC), Go, MLton < Crystal, Go (GCC), C# (.NET Core), Scala, Java, OCaml < D (DMD) < C# Mono < Javascript V8 < F# Mono, Javascript Node, Haskell (MArray) << LuaJIT << Python PyPy < Haskell < Racket <<< Python << Python3
mandel.b:
C++ (GCC) << Crystal < Rust, D (GDC), Go (GCC) < Nim, D (LDC) << C# (.NET Core) < MLton << Kotlin << OCaml << Scala, Java << D (DMD) << Go << C# Mono << Javascript Node << Haskell (MArray) << LuaJIT < Python PyPy << F# Mono <<< Racket
https://github.com/famzah/langs-performance
C++, Rust, Java w/ custom non-stdlib code < Python PyPy < C# (.NET Core) < Javascript Node < Go, unoptimized C++ (no -O2) << PHP << Java << Python3 << Python
comparison
pls
programming
performance
benchmarks
list
top-n
ranking
systems
time
multi
🖥
cost-benefit
tradeoffs
data
analysis
plots
visualization
measure
intricacy
parsimony
ocaml-sml
golang
rust
jvm
javascript
c(pp)
functional
haskell
backup
scala
realness
generalization
accuracy
techtariat
crosstab
database
repo
objektbuch
static-dynamic
gnu
mobile
december 2018 by nhaliday
Lateralization of brain function - Wikipedia
september 2018 by nhaliday
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]
Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69
Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]
...
Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".
Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.
These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.
The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.
The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.
The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.
...
Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.
Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.
The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.
...
RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.
The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.
Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.
Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.
...
Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.
The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.
...
We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.
If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.
...
Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.
Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon
reflection
books
summary
review
neuro
neuro-nitgrit
things
thinking
metabuch
order-disorder
apollonian-dionysian
bio
examples
near-far
symmetry
homo-hetero
logic
inference
intuition
problem-solving
analytical-holistic
n-factor
europe
the-great-west-whale
occident
alien-character
detail-architecture
art
theory-practice
philosophy
being-becoming
essence-existence
language
psychology
cog-psych
egalitarianism-hierarchy
direction
reason
learning
novelty
science
anglo
anglosphere
coarse-fine
neurons
truth
contradiction
matching
empirical
volo-avolo
curiosity
uncertainty
theos
axioms
intricacy
computation
analogy
essay
rhetoric
deep-materialism
new-religion
knowledge
expert-experience
confidence
biases
optimism
pessimism
realness
whole-partial-many
theory-of-mind
values
competition
reduction
subjective-objective
communication
telos-atelos
ends-means
turing
fiction
increase-decrease
innovation
creative
thick-thin
spengler
multi
ratty
hanson
complex-systems
structure
concrete
abstraction
network-s
september 2018 by nhaliday
Philosophy and Predictive Processing
psychology cog-psych neuro neuro-nitgrit neurons philosophy interdisciplinary links list reading predictive-processing models dennett within-without accuracy meta:prediction database wire-guided illusion evopsych evolution yvain ssc sapiens sleep
june 2018 by nhaliday
june 2018 by nhaliday
Frontiers | The Predictive Processing Paradigm Has Roots in Kant | Frontiers in Systems Neuroscience
study article rhetoric essay critique psychology cog-psych yvain ssc accuracy meta:prediction predictive-processing neuro neuro-nitgrit neurons models thinking philosophy big-peeps history early-modern europe germanic enlightenment-renaissance-restoration-reformation duplication similarity novelty wire-guided
june 2018 by nhaliday
june 2018 by nhaliday
Commentary: Predictions and the brain: how musical sounds become rewarding
june 2018 by nhaliday
https://twitter.com/AOEUPL_PHE/status/1004807377076604928
https://archive.is/FgNHG
did i just learn something big?
Prerecorded music has ABSOLUTELY NO SURVIVAL reward. Zero. It does not help with procreation (well, unless you're the one making the music, then you get endless sex) and it does not help with individual survival.
As such, one must seriously self test (n=1) prerecorded music actually holds you back.
If you're reading this and you try no music for 2 weeks and fail, hit me up. I have some mind blowing stuff to show you in how you can control others with music.
study
psychology
cog-psych
yvain
ssc
models
speculation
music
art
aesthetics
evolution
evopsych
accuracy
meta:prediction
neuro
neuro-nitgrit
neurons
error
roots
intricacy
hmm
wire-guided
machiavelli
dark-arts
predictive-processing
reinforcement
multi
science-anxiety
june 2018 by nhaliday
Mind uploading - Wikipedia
concept wiki reference article hanson ratty ems futurism ai technology speedometer frontier simulation death prediction estimate time computation scale magnitude plots neuro neuro-nitgrit complexity coarse-fine brain-scan accuracy skunkworks bostrom enhancement ideas singularity eden-heaven speed risk ai-control paradox competition arms unintended-consequences offense-defense trust duty tribalism us-them volo-avolo strategy hardware software mystic religion theos hmm dennett within-without philosophy deep-materialism complex-systems structure reduction detail-architecture analytical-holistic approximation cs trends threat-modeling
march 2018 by nhaliday
march 2018 by nhaliday
Where is talent optimized? - Marginal REVOLUTION
january 2018 by nhaliday
http://marginalrevolution.com/marginalrevolution/2018/01/talent-optimization-weak-strong.html
http://marginalrevolution.com/marginalrevolution/2018/01/sectors-bad-finding-talent-comments.html
econotariat
marginal-rev
discussion
economics
arbitrage
questions
q-n-a
labor
career
progression
selection
recruiting
quality
human-capital
efficiency
markets
market-failure
supply-demand
list
analysis
sports
finance
management
elite
higher-ed
info-dynamics
society
social-structure
tech
subculture
housing
measurement
volo-avolo
accuracy
wire-guided
education
teaching
religion
theos
letters
academia
media
network-structure
discrimination
identity-politics
gender
race
politics
government
leadership
straussian
path-dependence
sequential
degrees-of-freedom
ranking
matching
science
objektbuch
speculation
error
biases
scholar
🎓
impro
quantitative-qualitative
thick-thin
scale
medicine
military
alt-inst
meta:medicine
ability-competence
criminal-justice
institutions
organizing
multi
chart
low-hanging
january 2018 by nhaliday
What's new in CPUs since the 80s and how does it affect programmers?
techtariat dan-luu engineering programming systems performance hardware list top-n frontier trivia accuracy summary graphics memory-management concurrency virtualization caching os computer-memory metal-to-virtual IEEE
november 2017 by nhaliday
Lessons From Bar Fight Litigation | Ordinary Times
reflection summary stories data analysis demographics gender class distribution race age-generation peace-violence embodied embodied-pack fighting law arms money track-record impetus chart ethanol sex street-fighting objektbuch estimate measurement accuracy gender-diff regularizer anthropology trivia cocktail
october 2017 by nhaliday
Gauging the Uncertainty of the Economic Outlook Using Historical Forecasting Errors: The Federal Reserve’s Approach
september 2017 by nhaliday
First, if past performance is a reasonable guide to future accuracy, considerable uncertainty surrounds all macroeconomic projections, including those of FOMC participants. Second, different forecasters have similar accuracy. Third, estimates of uncertainty about future real activity and interest rates are now considerably greater than prior to the financial crisis; in contrast, estimates of inflation accuracy have changed little.
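[ed.: the approach boils down to "use the RMSE of past forecast errors as the width of the uncertainty band around the current projection." A minimal sketch of that idea with made-up numbers, not the Board's actual data or code:
import numpy as np

# hypothetical one-year-ahead real-GDP-growth forecast errors (actual minus forecast), pct points
past_errors = np.array([-1.2, 0.4, 0.8, -0.3, -2.1, 0.6, 0.2, -0.5, 1.0, -0.7])

rmse = np.sqrt(np.mean(past_errors ** 2))  # historical root-mean-squared error
point_forecast = 2.0                       # this year's (hypothetical) point forecast, pct

# rough ~70% band: point forecast +/- 1 RMSE (normality of errors is an extra assumption)
print(f"RMSE = {rmse:.2f} pp; band = [{point_forecast - rmse:.2f}, {point_forecast + rmse:.2f}]")
]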
pdf
study
economics
macro
meta:prediction
tetlock
accuracy
org:gov
government
wonkish
moments
🎩
volo-avolo
september 2017 by nhaliday
How do archaeologists estimate the size of ancient populations?
august 2017 by nhaliday
Estimating The Population Sizes of Cities: http://irows.ucr.edu/research/citemp/estcit/estcit.htm
Estimating the population size of ancient settlements: http://www.parkdatabase.org/documents/download/2000_estimating_the_population_size_of_ancient_settlements.pdf
news
org:lite
archaeology
methodology
measurement
volo-avolo
history
antiquity
iron-age
medieval
demographics
population
density
explanation
sapiens
lens
multi
org:junk
org:edu
article
roots
accuracy
trivia
cocktail
estimate
approximation
magnitude
scale
evidence
august 2017 by nhaliday
All models are wrong - Wikipedia
august 2017 by nhaliday
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.
Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.
For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
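[ed.: a small numerical illustration of Box's point using his own example: the ideal gas law vs. a van der Waals correction. The CO2 constants are approximate textbook values and the scenario (1 mol in 1 L at 300 K) is made up; a sketch, not a chemistry reference:
R = 0.0831446  # gas constant, L·bar/(mol·K)

def p_ideal(n, V, T):
    """pressure from the ideal gas law PV = nRT"""
    return n * R * T / V

def p_vdw(n, V, T, a, b):
    """van der Waals: (P + a n^2/V^2)(V - n b) = n R T, solved for P"""
    return n * R * T / (V - n * b) - a * n ** 2 / V ** 2

a_co2, b_co2 = 3.64, 0.04267   # approximate CO2 constants, L^2·bar/mol^2 and L/mol
n, V, T = 1.0, 1.0, 300.0      # 1 mol of gas in 1 L at 300 K

print(p_ideal(n, V, T))              # ~24.9 bar (ideal gas)
print(p_vdw(n, V, T, a_co2, b_co2))  # ~22.4 bar (vdW) -- the "wrong" ideal-gas model is off ~10%, still useful
]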
thinking
metabuch
metameta
map-territory
models
accuracy
wire-guided
truth
philosophy
stats
data-science
methodology
lens
wiki
reference
complex-systems
occam
parsimony
science
nibble
hi-order-bits
info-dynamics
the-trenches
meta:science
physics
fluid
thermo
stat-mech
applicability-prereqs
theory-practice
elegance
simplification-normalization
august 2017 by nhaliday
Diophantine approximation - Wikipedia
august 2017 by nhaliday
- rationals perfectly approximated by themselves, badly approximated (eps>1/bq) by other rationals
- irrationals well-approximated (eps~1/q^2) by rationals:
https://en.wikipedia.org/wiki/Dirichlet%27s_approximation_theorem
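[ed.: a quick numerical check of the Dirichlet-style bound by brute force over denominators; the choice of pi and the cutoff q <= 1000 are arbitrary:
from math import pi

def good_approximations(x, qmax):
    """rationals p/q with q <= qmax satisfying |x - p/q| < 1/q^2 (brute force)"""
    hits = []
    for q in range(1, qmax + 1):
        p = round(x * q)                    # best numerator for this denominator
        if abs(x - p / q) < 1.0 / q ** 2:
            hits.append((p, q))
    return hits

for p, q in good_approximations(pi, 1000)[:6]:
    print(f"{p}/{q}: |pi - p/q| = {abs(pi - p / q):.2e} < 1/q^2 = {1 / q ** 2:.2e}")
]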
nibble
wiki
reference
math
math.NT
approximation
accuracy
levers
pigeonhole-markov
multi
tidbits
discrete
rounding
estimate
tightness
algebra
august 2017 by nhaliday
[1708.05070] Data-driven Advice for Applying Machine Learning to Bioinformatics Problems
august 2017 by nhaliday
Fig. 2
models are from sklearn
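[ed.: not the paper's benchmark, but a minimal sketch of the same kind of comparison using scikit-learn's cross_val_score; the dataset and the three model classes here are stand-ins:
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset, not the paper's bioinformatics suite
models = {
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=5000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name:20s} {scores.mean():.3f} +/- {scores.std():.3f}")
]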
papers
preprint
machine-learning
acm
model-class
best-practices
bioinformatics
benchmarks
nibble
checklists
objektbuch
data
top-n
ranking
engineering
data-science
methodology
accuracy
🖥
applications
python
libraries
ensembles
org:mat
august 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
july 2017 by nhaliday
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study
org:nat
papers
machine-learning
chemistry
measurement
volo-avolo
lower-bounds
analysis
realness
speedometer
nibble
🔬
applications
frontier
state-of-art
no-go
accuracy
interdisciplinary
july 2017 by nhaliday
How accurate are population forecasts?
july 2017 by nhaliday
2 The Accuracy of Past Projections: https://www.nap.edu/read/9828/chapter/4
good ebook:
Beyond Six Billion: Forecasting the World's Population (2000)
https://www.nap.edu/read/9828/chapter/2
Appendix A: Computer Software Packages for Projecting Population
https://www.nap.edu/read/9828/chapter/12
PDE Population Projections looks most relevant for my interests but it's also *ancient*
https://applieddemogtoolbox.github.io/Toolbox/
This Applied Demography Toolbox is a collection of applied demography computer programs, scripts, spreadsheets, databases and texts.
How Accurate Are the United Nations World Population Projections?: http://pages.stern.nyu.edu/~dbackus/BCH/demography/Keilman_JDR_98.pdf
cf. Razib on this: https://pinboard.in/u:nhaliday/b:d63e6df859e8
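[ed.: for flavor, a toy cohort-component (Leslie-matrix) projection; this is not how PDE or the UN software works in detail, and every rate and starting number below is a made-up placeholder:
import numpy as np

# three broad age groups (0-19, 20-39, 40+), projected in 20-year steps
fertility = [0.0, 1.1, 0.1]   # expected births per person per step, by age group (made up)
survival = [0.97, 0.95]       # fraction surviving into the next age group (made up)

L = np.array([
    fertility,                  # newborns produced by each group
    [survival[0], 0.0, 0.0],    # 0-19 -> 20-39
    [0.0, survival[1], 0.0],    # 20-39 -> 40+
])

pop = np.array([2.0, 2.5, 1.5])   # millions, hypothetical starting population
for step in range(1, 4):
    pop = L @ pop
    print(f"after {20 * step:3d} years: total {pop.sum():.2f}m, by group {np.round(pop, 2)}")
]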
news
org:lite
prediction
meta:prediction
tetlock
demographics
population
demographic-transition
fertility
islam
world
developing-world
africa
europe
multi
track-record
accuracy
org:ngo
pdf
study
sociology
measurement
volo-avolo
methodology
estimate
data-science
error
wire-guided
priors-posteriors
books
guide
howto
software
tools
recommendations
libraries
gnxp
scitariat
july 2017 by nhaliday
Econometric Modeling as Junk Science
june 2017 by nhaliday
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3
On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has just to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.
Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.
https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……
https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. This I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.
What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)
HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?∗: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
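[ed.: a compressed sketch of the Bertrand–Duflo–Mullainathan placebo exercise, using simulated AR(1) state panels instead of CPS wage data; the state/year counts and the AR coefficient are arbitrary, but the over-rejection with conventional (iid) standard errors shows up all the same:
import numpy as np

rng = np.random.default_rng(0)
S, T, reps, rho = 20, 30, 200, 0.8   # states, years, placebo draws, serial correlation

def demean_two_way(a):
    """within transformation for two-way (state + year) fixed effects on a balanced S x T panel"""
    return a - a.mean(axis=1, keepdims=True) - a.mean(axis=0, keepdims=True) + a.mean()

false_positives = 0
for _ in range(reps):
    # serially correlated outcomes, independent across states
    y = np.zeros((S, T))
    for t in range(1, T):
        y[:, t] = rho * y[:, t - 1] + rng.normal(size=S)

    # placebo "law": half the states treated from a random year onward
    treated = rng.choice(S, S // 2, replace=False)
    start = int(rng.integers(T // 4, 3 * T // 4))
    D = np.zeros((S, T))
    D[treated, start:] = 1.0

    yd, Dd = demean_two_way(y).ravel(), demean_two_way(D).ravel()
    beta = (Dd @ yd) / (Dd @ Dd)                                   # DD estimate
    resid = yd - beta * Dd
    se = np.sqrt(resid @ resid / (yd.size - S - T) / (Dd @ Dd))    # conventional (iid) std. error
    false_positives += int(abs(beta / se) > 1.96)

print(f"placebo 'effects' significant at 5%: {false_positives / reps:.0%}")   # well above 5%
]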
‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing simply one outlying cluster or observation, almost half of 2SLS results become insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI excludes the OLS estimate.
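[ed.: points 4 and 6 are easy to reproduce in a toy simulation: with a weak instrument, the just-identified 2SLS estimate is far noisier than the (biased) OLS estimate. All parameters below are made up:
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta_true = 500, 2000, 1.0
ols, iv = [], []

for _ in range(reps):
    z = rng.normal(size=n)                   # instrument
    u = rng.normal(size=n)                   # unobserved confounder
    x = 0.1 * z + u + rng.normal(size=n)     # weak first stage (coef 0.1), endogenous via u
    y = beta_true * x + u + rng.normal(size=n)

    ols.append((x @ y) / (x @ x))            # OLS slope (no intercept; everything has mean ~0)
    iv.append((z @ y) / (z @ x))             # just-identified IV / 2SLS estimate

ols, iv = np.array(ols), np.array(iv)
# OLS converges to ~1.5 (biased) but with low variance; with an instrument this weak, 2SLS
# is pulled toward OLS and occasionally explodes, so its MSE dwarfs OLS's
print(f"OLS: median {np.median(ols):.2f}, MSE {np.mean((ols - beta_true) ** 2):.2f}")
print(f"IV : median {np.median(iv):.2f}, MSE {np.mean((iv - beta_true) ** 2):.2f}")
]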
Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.
https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.
https://twitter.com/wwwojtekk/status/1190731344336293889
https://archive.is/EZu0h
Great (not completely new but still good to have it in one place) discussion of RCTs and inference in economics by Deaton, my favorite sentences (more general than just about RCT) below
Randomization in the tropics revisited: a theme and eleven variations: https://scholar.princeton.edu/sites/default/files/deaton/files/deaton_randomization_revisited_v3_2019.pdf
org:junk
org:edu
economics
econometrics
methodology
realness
truth
science
social-science
accuracy
generalization
essay
article
hmm
multi
study
🎩
empirical
causation
error
critique
sociology
criminology
hypothesis-testing
econotariat
broad-econ
cliometrics
endo-exo
replication
incentives
academia
measurement
wire-guided
intricacy
twitter
social
discussion
pseudoE
effect-size
reflection
field-study
stat-power
piketty
marginal-rev
commentary
data-science
expert-experience
regression
gotchas
rant
map-territory
pdf
simulation
moments
confidence
bias-variance
stats
endogenous-exogenous
control
meta:science
meta-analysis
outliers
summary
sampling
ensembles
monte-carlo
theory-practice
applicability-prereqs
chart
comparison
shift
ratty
unaffiliated
garett-jones
june 2017 by nhaliday
Historicity of the Bible - Wikipedia
june 2017 by nhaliday
Archaeological discoveries since the 19th century are open to interpretation, but broadly speaking they lend support to few of the Old Testament's historical narratives and offer evidence to challenge others.[a][3][4][b][c][d][8]
Pentateuch: http://www.newadvent.org/cathen/11646c.htm
Biblical Chronology: http://www.newadvent.org/cathen/03731a.htm
cf this guy's blog:
https://pinboard.in/u:nhaliday/b:d05248eef74a
https://biblicalsausage.wordpress.com
and Greg's twitter comment here (on unrelated subject):
https://pinboard.in/u:nhaliday/b:716812a8cd90
Most wars known to have happened in historical times haven't left much of an archaeological record.
history
antiquity
canon
literature
religion
judaism
christianity
theos
letters
realness
article
wiki
reference
accuracy
archaeology
exegesis-hermeneutics
multi
org:theos
protestant-catholic
truth
law
bible
june 2017 by nhaliday
Kate O'Beirne: Women Less Informed about Politics -- Abolish 19th Amendment? | National Review
june 2017 by nhaliday
https://ropercenter.cornell.edu/public-perspective/ppscan/35/35023.pdf
http://journals.sagepub.com.sci-hub.cc/doi/full/10.1177/1065912916642867
https://www.theguardian.com/news/datablog/2013/jul/11/women-know-less-politics-than-men-worldwide
https://ropercenter.cornell.edu/public-perspective/ppscan/35/35023.pdf
http://web.pdx.edu/~mev/pdf/PS471_Readings_2012/Lizotte_Sidman.pdf
https://www.washingtonpost.com/news/monkey-cage/wp/2016/06/27/women-vote-at-higher-rates-than-men-that-might-help-clinton-in-november/
https://www.washingtonpost.com/opinions/catherine-rampell-why-women-are-far-more-likely-to-vote-then-men/2014/07/17/b4658192-0de8-11e4-8c9a-923ecc0c7d23_story.html
https://en.wikipedia.org/wiki/Voting_gender_gap_in_the_United_States
http://anepigone.blogspot.com/2017/09/why-nineteenth-was-not-in-original.html
news
org:mag
right-wing
data
poll
lol
gender
gender-diff
antidemos
descriptive
knowledge
government
politics
multi
study
polisci
world
org:lite
pdf
outcome-risk
piracy
elections
org:rec
wiki
reference
history
mostly-modern
usa
accuracy
wonkish
org:anglo
gnon
june 2017 by nhaliday
Logic | West Hunter
may 2017 by nhaliday
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.
No, we don’t.
http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html
compare: https://pinboard.in/u:nhaliday/b:190b299cf04a
Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.
For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.
...
If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.
But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?
First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.
Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.
...
Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.
Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.
We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.
And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.
Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.
Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.
This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.
For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.
Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.
https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is the gradual shifts of in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.
But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter
scitariat
discussion
rant
thinking
rationality
metabuch
critique
systematic-ad-hoc
analytical-holistic
metameta
ideology
philosophy
info-dynamics
aphorism
darwinian
prudence
pragmatic
insight
tradition
s:*
2016
multi
gnon
right-wing
formal-values
values
slippery-slope
axioms
alt-inst
heuristic
anglosphere
optimate
flux-stasis
flexibility
paleocon
polisci
universalism-particularism
ratty
hanson
list
examples
migration
fertility
intervention
demographics
population
biotech
enhancement
energy-resources
biophysical-econ
nature
military
inequality
age-generation
time
ideas
debate
meta:rhetoric
local-global
long-short-run
gnosis-logos
gavisti
stochastic-processes
eden-heaven
politics
equilibrium
hive-mind
genetics
defense
competition
arms
peace-violence
walter-scheidel
speed
marginal
optimization
search
time-preference
patience
futurism
meta:prediction
accuracy
institutions
tetlock
theory-practice
wire-guided
priors-posteriors
distribution
moments
biases
epistemic
nea
may 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
may 2017 by nhaliday
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
I guess it's a metric
https://en.wikipedia.org/wiki/Explained_variation
http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have on average +0.4 SAT.
this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)
https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
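[ed.: a quick numerical check of Hsu's "dY = R dX" reading: for standardized variables the OLS slope is the correlation, so conditioning on x near +1 SD gives a mean y of about r. Simulated data; r = 0.4 is chosen only to echo the SAT/GPA example:
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.4, 200_000

x = rng.normal(size=n)
y = r * x + np.sqrt(1 - r ** 2) * rng.normal(size=n)   # standardized bivariate normal, corr = r

print(f"corr(x, y)          ~ {np.corrcoef(x, y)[0, 1]:.3f}")
print(f"OLS slope of y on x ~ {np.polyfit(x, y, 1)[0]:.3f}")             # equals r for standardized vars
print(f"mean y | x ~ +1 SD  ~ {y[np.abs(x - 1.0) < 0.05].mean():.3f}")   # ~0.4, the SAT/GPA reading
]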
stats
science
hypothesis-testing
correlation
metrics
plots
regression
wiki
reference
nibble
methodology
multi
twitter
social
discussion
best-practices
econotariat
garett-jones
concept
conceptual-vocab
accuracy
causation
acm
matrix-factorization
todo
explanation
yoga
hsu
street-fighting
levers
🌞
2014
scitariat
variance-components
meta:prediction
biodet
s:**
mental-math
reddit
commentary
ssc
poast
gwern
data-science
metric-space
similarity
measure
dependence-independence
may 2017 by nhaliday
The Future of the Global Muslim Population | Pew Research Center
april 2017 by nhaliday
http://www.pewforum.org/2011/01/27/future-of-the-global-muslim-population-regional-europe/
http://www.pewforum.org/2011/01/27/the-future-of-the-global-muslim-population/#the-americas
Europe’s Growing Muslim Population: http://www.pewforum.org/2017/11/29/europes-growing-muslim-population/
https://www.gnxp.com/WordPress/2017/11/30/crescent-over-the-north-sea/
Pew has a nice new report up, Europe’s Growing Muslim Population. Though it is important to read the whole thing, including the methods.
I laugh when people take projections of the year 2100 seriously. That’s because we don’t have a good sense of what might occur over 70+ years (read social and demographic projections from the 1940s and you’ll understand what I mean). Thirty years though is different. In the year 2050 children born today, such as my youngest son, will be entering the peak of their powers.
[cf.: http://blogs.discovermagazine.com/gnxp/2012/12/population-projects-50-years-into-the-future-fantasy/]
...
The problem with this is that there is a wide range of religious commitment and identification across Europe’s Muslim communities. On the whole, they are more religiously observant than non-Muslims in their nations of residence, but, for example, British Muslims are consistently more religious than French Muslims on surveys (or express views consistent with greater religious conservatism).
People in Western countries are violent (yes): 29 / 52 / 34
lmao that's just ridiculous from the UK
https://www.gnxp.com/WordPress/2006/03/03/poll-of-british-muslims/
In short, read the poll closely, this isn’t a black & white community. It seems clear that some people simultaneously support Western society on principle while leaning toward separatism, while a subset, perhaps as large as 10%, are violently and radically hostile to the surrounding society.
news
org:data
data
analysis
database
religion
islam
population
demographics
fertility
world
developing-world
europe
usa
MENA
prediction
trends
migration
migrant-crisis
asia
africa
chart
multi
the-bones
white-paper
EU
gnxp
scitariat
poll
values
descriptive
hypocrisy
britain
gallic
germanic
pro-rata
maps
visualization
counterfactual
assimilation
iraq-syria
india
distribution
us-them
tribalism
peace-violence
order-disorder
terrorism
events
scale
meta:prediction
accuracy
time
org:sci
april 2017 by nhaliday
FiveThirtyEight's Pollster Ratings | FiveThirtyEight
news org:data objektbuch info-foraging data poll ranking media list comparison sampling-bias epistemic wire-guided descriptive info-dynamics database top-n accuracy track-record let-me-see chart wonkish regression-to-mean politics polisci
february 2017 by nhaliday
Embryo editing for intelligence - Gwern.net
february 2017 by nhaliday
https://twitter.com/pnin1957/status/917693229608337408
My hunch is CRISPR/Cas9 will not play a big role in intelligence enhancement. You'd have to edit so many loci b/c of small effect sizes, increasing errors. Embryo selection is much more promising. Peoples with high avg genetic values, of course, have an in-built advantage there.
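[ed.: a toy calculation to make the "so many loci, increasing errors" point concrete — the per-variant effect size and per-edit error rate below are assumptions for illustration, not numbers from the tweet:]
effect_per_edit_sd = 0.02      # assumed average effect of one edited variant, in SD
per_edit_error = 0.01          # assumed chance that any single edit goes wrong
k = round(1.0 / effect_per_edit_sd)            # ~50 edits needed for +1 SD
p_any_error = 1 - (1 - per_edit_error) ** k    # ~0.39: ~39% chance of >=1 bad edit
print(k, p_any_error)
(Embryo selection sidesteps that per-edit compounding, hence the tweet's preference for it.)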
ratty
gwern
enhancement
scaling-up
genetics
genomics
iq
🌞
CRISPR
futurism
biodet
new-religion
nibble
intervention
🔬
behavioral-gen
faq
chart
ideas
article
multi
twitter
social
commentary
gnon
unaffiliated
prediction
accuracy
technology
QTL
biotech
selection
comparison
scale
magnitude
hard-tech
skunkworks
speedometer
abortion-contraception-embryo
estimate
february 2017 by nhaliday
Performance Trends in AI | Otium
january 2017 by nhaliday
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?
In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.
In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.
In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.
In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.
In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.
...
The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.
Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?
http://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/
ratty
core-rats
summary
prediction
trends
analysis
spock
ai
deep-learning
state-of-art
🤖
deepgoog
games
nlp
computer-vision
nibble
reinforcement
model-class
faq
org:bleg
shift
chart
technology
language
audio
accuracy
speaking
foreign-lang
definite-planning
china
asia
microsoft
google
ideas
article
speedometer
whiggish-hegelian
yvain
ssc
smoothness
data
hsu
scitariat
genetics
iq
enhancement
genetic-load
neuro
neuro-nitgrit
brain-scan
time-series
multiplicative
iteration-recursion
additive
multi
arrows
january 2017 by nhaliday
Best Practices for ML Engineering from Google [pdf] | Hacker News
hn commentary google data-science machine-learning best-practices engineering pragmatic shipping knowledge multi pdf guide list checklists reference system-design code-organizing minimum-viable heuristic metrics features interpretability debugging reflection optimization ensembles accuracy quantitative-qualitative marginal ubiquity network-structure social tradeoffs grokkability grokkability-clarity methodology
january 2017 by nhaliday
Assessment of alternative genotyping strategies to maximize imputation accuracy at minimal cost | Genetics Selection Evolution | Full Text
study biotech genetics genomics bio scaling-up bioinformatics cost-benefit money comparison efficiency frontier accuracy measurement methodology
november 2016 by nhaliday
Faster than Fisher | West Hunter
november 2016 by nhaliday
There’s a simple model of the spread of an advantageous allele: You take σ, the typical distance people move in one generation, and s, the selective advantage: the advantageous allele spreads as a nonlinear wave at speed σ * √(2s). The problem is, that’s slow. Suppose that s = 0.10 (a large advantage), σ = 10 kilometers, and a generation time of 30 years: the allele would take almost 7,000 years to expand out 1000 kilometers.
...
This big expansion didn’t just happen from peasants marrying the girl next door: it required migrations and conquests. This one looks as if it rode with the Indo-European expansion: I’ll bet it started out in a group that had domesticated only horses.
The same processes, migration and conquest, must explain the wide distribution of many geographically widespread selective sweeps and partial sweeps. They were adaptive, all right, but expanded much faster than possible from purely local diffusion. We already have reason to think that SLC24A5 was carried to Europe by Middle Eastern farmers; the same is probably true for the haplotype that carries the high-activity ergothioniene transporter and the 35delG connexin-26/GJB2 deafness mutation. The Indo-Europeans probably introduced the T-13910 LCT mutation and the delta-F508 cystic fibrosis mutation, so we should see delta-F508 in northwest India and Pakistan – and we do !
https://westhunt.wordpress.com/2014/11/22/faster-than-fisher/#comment-63067
To entertain a (possibly mistaken) physical analogy, it sounds like you’re suggested a sort genetic convection through space, as opposed to conduction. I.e. Entire masses of folks, carrying a new selected variant, are displacing others – as opposed to the slow gene flow process of “girl-next-door.” Is that about right? (Hopefully I haven’t revealed my ignorance of basic thermodynamics here…)
Has there been any attempt to estimate sigma from these time periods?
Genetic Convection: https://westhunt.wordpress.com/2015/02/22/genetic-convection/
People are sometimes interested in estimating the point of origin of a sweeping allele: this is probably effectively impossible even if diffusion were the only spread mechanism, since the selective advantage might well vary in both time and space. But that’s ok, since population movements – genetic convection – are real and very important. This means that the difficulties in estimating the origin of a Fisher wave are totally insignificant, compared to the difficulties of estimating the effects of past colonizations, conquests and Völkerwanderungs. So when Yuval Itan and Mark Thomas estimated that 13,910 T LCT allele originated in central Europe, in the early Neolithic, they didn’t just go wrong because of failing to notice that the same allele is fairly common in northern India: no, their whole notion was unsound in the first place. We’re talking turbulence on steroids. Hari Seldon couldn’t figure this one out from the existing geographic distribution.
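[ed.: a quick check of the arithmetic in the opening paragraph, using the numbers given there (s = 0.10, σ = 10 km per generation, 30-year generations):]
import math

s, sigma_km, gen_years = 0.10, 10.0, 30.0
speed = sigma_km * math.sqrt(2 * s)     # ~4.47 km per generation
generations = 1000.0 / speed            # ~224 generations to cover 1000 km
years = generations * gen_years         # ~6,700 years, i.e. "almost 7,000 years"
print(speed, generations, years)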
west-hunter
genetics
population-genetics
street-fighting
levers
evolution
gavisti
🌞
selection
giants
nibble
fisher
speed
gene-flow
scitariat
stylized-facts
methodology
archaeology
waves
frontier
agri-mindset
analogy
visual-understanding
physics
thermo
interdisciplinary
spreading
spatial
geography
poast
multi
volo-avolo
accuracy
estimate
order-disorder
time
homo-hetero
branches
trees
distribution
data
hari-seldon
aphorism
cliometrics
aDNA
mutation
lexical
november 2016 by nhaliday
Practical advice for analysis of large, complex data sets
best-practices data-science engineering google expert advice pragmatic 🖥 techtariat intricacy huge-data-the-biggest street-fighting nitty-gritty outliers confidence replication reference expert-experience apollonian-dionysian distribution accuracy signal-noise examples universalism-particularism homo-hetero time time-series flux-stasis metrics hypothesis-testing system-design methodology
november 2016 by nhaliday
Information Processing: Expert Prediction: hard and soft
hsu tetlock street-fighting intelligence science len:short comparison thick-thin scitariat analytical-holistic meta:prediction the-world-is-just-atoms signaling complex-systems bio interdisciplinary physics genetics genomics GWAS candidate-gene bounded-cognition crooked realness being-right info-dynamics kumbaya-kult accuracy education status vampire-squid
november 2016 by nhaliday
Non-Shared Environment Doesn’t Just Mean Schools And Peers | Slate Star Codex
thinking yvain len:short methodology critique insight ssc ratty models biodet roots behavioral-gen variance-components environmental-effects regularizer education measurement volo-avolo accuracy random signal-noise immune mutation genetics genomics 🌞 epigenetics speculation ideas composition-decomposition systematic-ad-hoc effect-size data developmental intricacy confounding causation obesity epidemiology interpretation
september 2016 by nhaliday
natural language processing blog: Debugging machine learning
september 2016 by nhaliday
I've been thinking, mostly in the context of teaching, about how to specifically teach debugging of machine learning. Personally I find it very helpful to break things down in terms of the usual error terms: Bayes error (how much error is there in the best possible classifier), approximation error (how much do you pay for restricting to some hypothesis class), estimation error (how much do you pay because you only have finite samples), optimization error (how much do you pay because you didn't find a global optimum to your optimization problem). I've generally found that trying to isolate errors to one of these pieces, and then debugging that piece in particular (eg., pick a better optimizer versus pick a better hypothesis class) has been useful.
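[ed.: a minimal sklearn sketch of how two of those terms can be probed in practice — my illustration, not the post's; the train/test gap proxies estimation error, and the gain from a richer hypothesis class proxies the simpler class's approximation error:]
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)                 # restricted class
rich = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("simple: train", simple.score(X_tr, y_tr), "test", simple.score(X_te, y_te))
print("rich:   train", rich.score(X_tr, y_tr), "test", rich.score(X_te, y_te))
# large train-test gap -> estimation error dominates (more data / regularization);
# rich beating simple on test -> the simple class carried real approximation error;
# Bayes and optimization error need other probes (label-noise estimates, optimizer swaps).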
machine-learning
debugging
checklists
best-practices
pragmatic
expert
init
system-design
data-science
acmtariat
error
engineering
clarity
intricacy
model-selection
org:bleg
nibble
noise-structure
signal-noise
knowledge
accuracy
expert-experience
checking
grokkability-clarity
methodology
september 2016 by nhaliday
The Elephant in the Brain: Hidden Motives in Everday Life
august 2016 by nhaliday
https://www.youtube.com/watch?v=V84_F1QWdeU
A Book Response Prediction: https://www.overcomingbias.com/2017/03/a-book-response-prediction.html
I predict that one of the most common responses will be something like “extraordinary claims require extraordinary evidence.” While the evidence we offer is suggestive, for claims as counterintuitive as ours on topics as important as these, evidence should be held to a higher standard than the one our book meets. We should shut up until we can prove our claims.
I predict that another of the most common responses will be something like “this is all well known.” Wise observers have known and mentioned such things for centuries. Perhaps foolish technocrats who only read in their narrow literatures are ignorant of such things, but our book doesn’t add much to what true scholars and thinkers have long known.
https://nintil.com/2018/01/16/this-review-is-not-about-reviewing-the-elephant-in-the-brain/
http://www.overcomingbias.com/2018/01/a-long-review-of-elephant-in-the-brain.html
https://nintil.com/2018/01/28/ad-hoc-explanations-a-rejoinder-to-hanson/
Elephant in the Brain on Religious Hypocrisy:
http://econlog.econlib.org/archives/2018/01/elephant_in_the.html
http://www.overcomingbias.com/2018/01/caplan-critiques-our-religion-chapter.html
books
postrat
simler
hanson
impro
anthropology
insight
todo
X-not-about-Y
signaling
🦀
new-religion
psychology
contrarianism
👽
ratty
rationality
hidden-motives
2017
s:**
p:null
ideas
impetus
multi
video
presentation
unaffiliated
review
summary
education
higher-ed
human-capital
propaganda
nationalism-globalism
civic
domestication
medicine
meta:medicine
healthcare
economics
behavioral-econ
supply-demand
roots
questions
charity
hypocrisy
peter-singer
big-peeps
philosophy
morality
ethics
formal-values
cog-psych
evopsych
thinking
conceptual-vocab
intricacy
clarity
accuracy
truth
is-ought
realness
religion
theos
christianity
islam
cultural-dynamics
within-without
neurons
EEA
analysis
article
links
meta-analysis
survey
judaism
compensation
labor
correlation
endogenous-exogenous
causation
critique
politics
government
polisci
political-econ
emotion
health
study
list
class
art
status
effective-altruism
evidence-based
epistemic
error
contradiction
prediction
culture
aphorism
quotes
discovery
no
august 2016 by nhaliday
The Future of Genetic Enhancement is Not in the West | Quillette
august 2016 by nhaliday
https://qz.com/750908/the-future-of-genetic-enhancement-is-in-china-and-india/
If it becomes possible to safely genetically increase babies’ IQ, it will become inevitable: https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/07/14/if-it-becomes-possible-to-safely-genetically-increase-babies-iq-it-will-become-inevitable/
Baby Genome Sequencing for Sale in China: https://www.technologyreview.com/s/608086/baby-genome-sequencing-for-sale-in-china/
Chinese parents can now decode the genomes of their healthy newborns, revealing disease risks as well as the likelihood of physical traits like male-pattern baldness.
https://gnxp.nofe.me/2017/06/16/the-cultural-revolution-that-will-happen-in-china/
https://gnxp.nofe.me/2017/07/26/the-future-will-be-genetically-engineered/
http://www.nature.com/news/china-s-embrace-of-embryo-selection-raises-thorny-questions-1.22468
http://infoproc.blogspot.com/2017/08/embryo-selection-in-china-nature.html
China launches massive genome research initiative: https://news.cgtn.com/news/7767544e34637a6333566d54/share_p.html
research ethics:
First results of CRISPR gene editing of normal embryos released: https://www.newscientist.com/article/2123973-first-results-of-crispr-gene-editing-of-normal-embryos-released/
http://www.bbc.com/news/health-32530334
https://www.technologyreview.com/s/608350/first-human-embryos-edited-in-us/
http://infoproc.blogspot.com/2017/07/first-human-embryos-edited-in-us-mit.html
https://news.ycombinator.com/item?id=14912382
http://www.nature.com/nature/journal/vaop/ncurrent/full/nature23305.html
caveats: https://ipscell.com/2017/08/4-reasons-mitalipov-paper-doesnt-herald-safe-crispr-human-genetic-modification/
https://www.scientificamerican.com/article/first-crispr-human-clinical-trial-gets-a-green-light-from-the-u-s/
http://www.nature.com/news/crispr-gene-editing-tested-in-a-person-for-the-first-time-1.20988
https://news.ycombinator.com/item?id=12960844
So this title is a bit misleading; something like, "cells edited with CRISPR injected into a person for the first time" would be better. While CRISPR is promising for topological treatments, that's not what happened here.
https://www.scientificamerican.com/article/chinese-scientists-to-pioneer-first-human-crispr-trial/
China sprints ahead in CRISPR therapy race: http://science.sciencemag.org/content/358/6359/20
China, Unhampered by Rules, Races Ahead in Gene-Editing Trials: https://www.wsj.com/articles/china-unhampered-by-rules-races-ahead-in-gene-editing-trials-1516562360
U.S. scientists helped devise the Crispr biotechnology tool. First to test it in humans are Chinese doctors
https://twitter.com/mr_scientism/status/955207026333929472
https://archive.is/lJ761
https://www.npr.org/sections/health-shots/2018/01/24/579925801/chinese-scientists-clone-monkeys-using-method-that-created-dolly-the-sheep
https://twitter.com/0xa59a2d/status/956344998626242560
https://archive.is/azH4S
https://twitter.com/AngloRemnant/status/956348983303114753
https://archive.is/RclJG
https://twitter.com/AngloRemnant/status/956352891287228416
https://archive.is/BfHuV
http://www.acsh.org/news/2017/03/07/did-gene-therapy-cure-sickle-cell-disease-10950
lol: http://www.theonion.com/infographic/pros-and-cons-gene-editing-56740
Japan set to allow gene editing in human embryos [ed.: (for research)]: https://www.nature.com/articles/d41586-018-06847-7
Draft guidelines permit gene-editing tools for research into early human development.
futurism
prediction
enhancement
biotech
essay
china
asia
culture
poll
len:short
new-religion
accelerationism
letters
news
org:mag
org:popup
🌞
sinosphere
🔬
sanctity-degradation
morality
values
democracy
authoritarianism
genetics
CRISPR
scaling-up
orient
multi
org:lite
india
competition
speedometer
org:rec
right-wing
rhetoric
slippery-slope
iq
usa
incentives
technology
org:nat
org:sci
org:biz
trends
current-events
genomics
gnxp
scitariat
commentary
hsu
org:foreign
volo-avolo
regulation
coordination
cooperate-defect
moloch
popsci
announcement
politics
government
policy
science
ethics
:/
org:anglo
cancer
medicine
hn
tech
immune
sapiens
study
summary
bio
disease
critique
regularizer
accuracy
lol
comedy
hard-tech
skunkworks
twitter
social
backup
gnon
🐸
randy-ayndy
civil-liberty
FDA
duplication
left-wing
chart
abortion-contraception-embryo
august 2016 by nhaliday
Latency Numbers Every Programmer Should Know
may 2016 by nhaliday
by year: https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html
https://softwareengineering.stackexchange.com/questions/312485/how-can-jeff-deans-latency-numbers-every-programmer-should-know-be-accurate-i
http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/
this isn't terribly helpful (I'm looking more for a rule of thumb to estimate throughput from distance)
http://ipnetwork.bgtmo.ip.att.net/pws/network_delay.html
https://serverfault.com/questions/137348/how-much-network-latency-is-typical-for-east-west-coast-usa
https://serverfault.com/questions/61719/how-does-geography-affect-network-latency
https://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly
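[ed.: for reference, the usual back-of-the-envelope I was after — a sketch under assumed numbers (light in fiber ≈ 200,000 km/s, a single flow limited by its TCP window, ~10 ms of routing/queueing overhead): RTT ≈ 2·distance / 200,000 km/s + overhead, and throughput ≤ window / RTT (the bandwidth-delay-product argument):]
FIBER_KM_PER_S = 200_000            # roughly 2/3 the speed of light in vacuum

def tcp_throughput_mbps(distance_km, window_bytes=64 * 1024, overhead_s=0.010):
    rtt_s = 2 * distance_km / FIBER_KM_PER_S + overhead_s
    return window_bytes * 8 / rtt_s / 1e6

print(tcp_throughput_mbps(4_000))   # coast-to-coast USA, 64 KB window: ~10 Mbps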
systems
networking
performance
programming
os
engineering
tech
paste
cheatsheet
objektbuch
street-fighting
🖥
techtariat
big-picture
caching
magnitude
nitty-gritty
scaling-tech
let-me-see
quantitative-qualitative
chart
reference
nibble
career
interview-prep
time
scale
measure
comparison
metal-to-virtual
multi
sequential
visualization
trends
multiplicative
speed
web
dynamic
q-n-a
stackex
estimate
accuracy
org:edu
org:junk
visual-understanding
benchmarks
latency-throughput
client-server
thinking
howto
explanation
crosstab
within-group
usa
geography
maps
urban-rural
correlation
may 2016 by nhaliday
Don’t invert that matrix (2010) | Hacker News
may 2016 by nhaliday
However, one of the reasons he's given is not correct: Druinsky and Toledo have shown (http://arxiv.org/abs/1201.6035) that -- despite the very widespread belief to the contrary -- solving a linear system by calculating the inverse can be as accurate (though not nearly as efficient) as solving it directly.
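[ed.: a tiny numpy sketch of the comparison (illustrative only; the system is just a random well-conditioned one): solve A x = b with an LU-based solve vs. forming the explicit inverse, and compare forward errors:]
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x_solve = np.linalg.solve(A, b)   # LU factorization + triangular solves
x_inv = np.linalg.inv(A) @ b      # explicitly form A^{-1}, then multiply

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("forward error, solve:  ", err(x_solve))
print("forward error, inverse:", err(x_inv))
(solve() needs roughly a third of the flops of inv() and is the usual advice; per the Druinsky-Toledo result above, the accuracy gap is often small in practice.)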
tutorial
programming
commentary
hn
numerics
multi
hmm
techtariat
org:mat
preprint
papers
regularizer
best-practices
accuracy
may 2016 by nhaliday
Word Embeddings: Explaining their properties – Off the convex path
machine-learning nlp tcs academia research probability off-convex linear-algebra acmtariat embeddings linearity sanjeev-arora isotropy org:bleg nibble direction features regularization dimensionality convexity-curvature nonlinearity accuracy correlation explanans roots latent-variables matrix-factorization generative boltzmann org:popup liner-notes papers summary acm stochastic-processes
february 2016 by nhaliday
How Old Are Fairy Tales? - The Atlantic
january 2016 by nhaliday
Many folklorists disagreed. Some have claimed that many classic fairy tales are recent inventions that followed the advent of mass-printed literature. Others noted that human stories, unlike human genes, aren't just passed down vertically through generations, but horizontally within generations. “They’re passed across societies through trade, exchange, migration, and conquest,” says Tehrani. “The consensus was that these processes would have destroyed any deep signatures of descent from ancient ancestral populations.”
Not so. Tehrani and da Silva found that although neighboring cultures can easily exchange stories, they also often reject the tales of their neighbors. Several stories were less likely to appear in one population if they were told within an adjacent one.
Meanwhile, a quarter of the Tales of Magic showed clear signatures of shared descent from ancient ancestors. “Most people would assume that folktales are rapidly changing and easily exchanged between social groups,” says Simon Greenhill from the Australian National University. “But this shows that many tales are actually surprisingly stable over time and seem to track population history well.” Similarly, a recent study found that flood “myths” among Aboriginal Australians can be traced back to real sea level rises 7,000 years ago.
Many of the Tales of Magic were similarly ancient, as the Grimms suggested. Beauty and the Beast and Rumpelstiltskin were first written down in the 17th and 18th centuries respectively, but they are actually between 2,500 and 6,000 years old—not quite tales as old as time, but perhaps as old as wheels and writing.
The Smith and the Devil is probably 6,000 years old, too. In this story, a crafty blacksmith sells his soul to an evil supernatural entity in exchange for awesome smithing powers, which he then uses to leash the entity to an immovable object. The basic tale has been adapted in everything from Faust to blues lore, but the most ancient version, involving the blacksmith, comes from the Bronze Age! It predates the last common ancestor of all Indo-European languages. “It's constantly being updated and recycled, but it's older than Christianity,” says Tehrani.
This result might help to settle a debate about the origins of Indo-European languages. It rules out the idea that these tongues originated among Neolithic farmers, who lived 9,000 years ago in what is now modern Turkey. After all, how could these people, who hadn’t invented metallurgy, have concocted a story where the hero is a blacksmith? A rival hypothesis becomes far more likely: Indo-European languages emerged 5,000 to 6,000 years ago among pastoralists from the Russian steppes, who knew how to work metal.
The Smith and the Devil: https://en.wikipedia.org/wiki/The_Smith_and_the_Devil
The Smith and the Devil is a European fairy tale. The story is of a smith who makes a pact with a malevolent being—commonly the Devil (in later times), Death or a genie—selling his soul for some power, then tricks the devil out of his prize. In one version, the smith gains the power to weld any materials, then uses this power to stick the devil to an immovable object, allowing the smith to renege on the bargain.[1]
...
According to George Monbiot, the blacksmith is a motif of folklore throughout (and beyond) Europe associated with malevolence (the medieval vision of Hell may draw upon the image the smith at his forge), and several variant tales tell of smiths entering into a pact with the devil to obtain fire and the means of smelting metal.[6]
According to research applying phylogenetic techniques to linguistics by folklorist Sara Graça da Silva and anthropologist Jamie Tehrani,[7] "The Smith and the Devil" may be one of the oldest European folk tales, with the basic plot stable throughout the Indo-European speaking world from India to Scandinavia, possibly being first told in Indo-European 6,000 years ago in the Bronze Age.[1][8][9] Folklorist John Lindow, however, notes that a word for "smith" may not have existed in Indo-European, and if so the tale may not be that old.[9]
Revealed: how Indigenous Australian storytelling accurately records sea level rises 7,000 years ago: http://www.theguardian.com/australia-news/2015/sep/16/indigenous-australian-storytelling-records-sea-level-rises-over-millenia
https://en.wikipedia.org/wiki/Geomythology
https://westhunt.wordpress.com/2017/07/26/legends/
I wonder how long oral history lasts. What’s the oldest legend that has some clear fragment of truth in it?
https://westhunt.wordpress.com/2017/07/26/legends/#comment-93821
The Black Sea deluge hypothesis, being the origin of the different deluge myths around the Middle East?
--
People have lived in river valleys for a long time now, and they flood. I mean, deluge myths could also go back to the end of the Ice Age, when many lands went underwater as sea level rose. But how can you tell? Now if there was a one-time thing that had a special identifying trait, say purple rain, that might be convincing.
https://westhunt.wordpress.com/2017/07/26/legends/#comment-93883
RE: untangling actual historical events and personages from myth and legend,
Obviously, it’s pretty damn tough. In most cases (THE ILIAD, the Pentateuch, etc), we simply lack the proper controls (literary sources written down at a time reasonably close to the events in question). Hence, we have to rely on a combination of archaeology plus intuition. Was a city sacked at roughly the proper time? Does a given individual appear to be based on a real person?
https://westhunt.wordpress.com/2017/07/26/legends/#comment-93867
I’m partial to the notion that the “forbidden fruit” was wheat, making the Garden of Eden a story about the dawn of agriculture, and the story of Cain and Abel the first conflict between settled farmer and semi-nomadic pastoralist. That would make it perhaps 6 millennia old when first written down.
--
The story of Cain and Abel is indeed the conflict between the agricultural and pastoral ways of life
same conclusion as me: https://pinboard.in/u:nhaliday/b:9130f5f3c17b
great blog: https://biblicalsausage.wordpress.com/
https://en.wikipedia.org/wiki/Euhemerus
Euhemerus (also spelled Euemeros or Evemerus; Ancient Greek: Εὐήμερος Euhēmeros, "happy; prosperous"; late fourth century BC), was a Greek mythographer at the court of Cassander, the king of Macedon. Euhemerus' birthplace is disputed, with Messina in Sicily as the most probable location, while others suggest Chios or Tegea.[citation needed]
The philosophy attributed to and named for Euhemerus, euhemerism, holds that many mythological tales can be attributed to historical persons and events, the accounts of which have become altered and exaggerated over time.
Euhemerus's work combined elements of fiction and political utopianism. In the ancient world he was considered an atheist. Early Christian writers, such as Lactantius, used Euhemerus's belief that the ancient gods were originally human to confirm their inferiority regarding the Christian God.
https://en.wikipedia.org/wiki/Euhemerism
In the ancient skeptic philosophical tradition of Theodorus of Cyrene and the Cyrenaics, Euhemerus forged a new method of interpretation for the contemporary religious beliefs. Though his work is lost, the reputation of Euhemerus was that he believed that much of Greek mythology could be interpreted as natural or historical events subsequently given supernatural characteristics through retelling. Subsequently Euhemerus was considered to be an atheist by his opponents, most notably Callimachus.[7]
...
Euhemerus' views were rooted in the deification of men, usually kings, into gods through apotheosis. In numerous cultures, kings were exalted or venerated into the status of divine beings and worshipped after their death, or sometimes even while they ruled. Dion, the tyrant ruler of Syracuse, was deified while he was alive and modern scholars consider his apotheosis to have influenced Euhemerus' views on the origin of all gods.[8] Euhemerus was also living during the contemporaneous deification of the Seleucids and "pharaoization" of the Ptolemies in a fusion of Hellenic and Egyptian traditions.
...
Hostile to paganism, the early Christians, such as the Church Fathers, embraced euhemerism in attempt to undermine the validity of pagan gods.[13] The usefulness of euhemerist views to early Christian apologists may be summed up in Clement of Alexandria's triumphant cry in Cohortatio ad gentes: "Those to whom you bow were once men like yourselves."[14]
https://en.wikipedia.org/wiki/Sacred_king
https://en.wikipedia.org/wiki/Imperial_cult
culture
history
cocktail
anthropology
news
myth
org:mag
narrative
roots
spreading
theos
archaeology
tradition
multi
climate-change
environment
oceans
h2o
org:lite
anglo
org:anglo
west-hunter
scitariat
accuracy
truth
trees
ed-yong
sapiens
farmers-and-foragers
fluid
trivia
nihil
flux-stasis
time
antiquity
retention
age-generation
estimate
epidemiology
evolution
migration
cultural-dynamics
language
gavisti
foreign-lang
wormholes
religion
christianity
interdisciplinary
fiction
speculation
poast
discussion
writing
speaking
communication
thick-thin
whole-partial-many
literature
analysis
nitty-gritty
blog
stream
deep-materialism
new-religion
apollonian-dionysian
subjective-objective
absolute-relative
hmm
big-peeps
iron-age
the-classics
mediterranean
antidemos
leviathan
sanctity-degradation
signal-noise
stylized-facts
conquest-empire
the-devil
god-man-beast-victim
ideology
illusion
intricacy
tip-of-tongue
exegesis-hermeneutics
interpretation
linguistics
traces
bible
judaism
realness
paganism
sequential
january 2016 by nhaliday
Overcoming Bias : This is the Dream Time
evolution history culture philosophy hanson signaling contrarianism insight essay values thinking civilization 🤖 new-religion moloch ems ratty technology pre-2013 analogy frisson mostly-modern legacy zeitgeist modernity malthus fertility farmers-and-foragers similarity uniqueness other-xtian long-short-run emotion class multiplicative iteration-recursion innovation novelty local-global communication hidden-motives X-not-about-Y realness duty comparison cultural-dynamics anthropology metabuch stylized-facts homo-hetero near-far drugs art status truth is-ought meta:rhetoric impro demographic-transition extrema coordination cooperate-defect axelrod anthropic risk uncertainty chart illusion economics growth-econ broad-econ big-picture futurism selection competition darwinian roots branches medicine healthcare meta:medicine fashun myth fiction summary accuracy psychology cog-psych social-psych evopsych within-without EEA neurons info-dynamics hari-seldon formal-values flux-stasis sing
october 2014 by nhaliday
patience ⊕ paying-rent ⊕ pdf ⊕ peace-violence ⊕ people ⊕ performance ⊕ pessimism ⊕ peter-singer ⊕ philosophy ⊕ physics ⊕ pic ⊕ pigeonhole-markov ⊕ piketty ⊕ piracy ⊕ plots ⊕ pls ⊕ plt ⊕ poast ⊕ policy ⊕ polisci ⊕ political-econ ⊕ politics ⊕ poll ⊕ popsci ⊕ population ⊕ population-genetics ⊕ postrat ⊕ pragmatic ⊕ pre-2013 ⊕ prediction ⊕ predictive-processing ⊕ preprint ⊕ presentation ⊕ priors-posteriors ⊕ privacy ⊕ pro-rata ⊕ probability ⊕ problem-solving ⊕ programming ⊕ progression ⊕ project ⊕ propaganda ⊕ protestant-catholic ⊕ protocol-metadata ⊕ prudence ⊕ pseudoE ⊕ psychiatry ⊕ psychology ⊕ python ⊕ q-n-a ⊕ qra ⊕ QTL ⊕ quality ⊕ quantified-self ⊕ quantitative-qualitative ⊕ questions ⊕ quotes ⊕ race ⊕ random ⊕ randy-ayndy ⊕ ranking ⊕ rant ⊕ rationality ⊕ ratty ⊕ reading ⊕ realness ⊕ reason ⊕ recommendations ⊕ recruiting ⊕ reddit ⊕ reduction ⊕ reference ⊕ reflection ⊕ regression ⊕ regression-to-mean ⊕ regularization ⊕ regularizer ⊕ regulation ⊕ reinforcement ⊕ religion ⊕ replication ⊕ repo ⊕ research ⊕ research-program ⊕ responsibility ⊕ retention ⊕ review ⊕ rhetoric ⊕ right-wing ⊕ rigor ⊕ risk ⊕ roots ⊕ rot ⊕ rounding ⊕ rust ⊕ s:* ⊕ s:** ⊕ sampling ⊕ sampling-bias ⊕ sanctity-degradation ⊕ sanjeev-arora ⊕ sapiens ⊕ scala ⊕ scale ⊕ scaling-tech ⊕ scaling-up ⊕ scholar ⊕ sci-comp ⊕ science ⊕ science-anxiety ⊕ scitariat ⊕ search ⊕ selection ⊕ sequential ⊕ sex ⊕ shift ⊕ shipping ⊕ signal-noise ⊕ signaling ⊕ similarity ⊕ simler ⊕ simplification-normalization ⊕ simulation ⊕ singularity ⊕ sinosphere ⊕ skunkworks ⊕ sleep ⊕ slippery-slope ⊕ smoothness ⊕ social ⊕ social-psych ⊕ social-science ⊕ social-structure ⊕ society ⊕ sociology ⊕ software ⊕ sparsity ⊕ spatial ⊕ speaking ⊕ speculation ⊕ speed ⊕ speedometer ⊕ spengler ⊕ spock ⊕ sports ⊕ spreading ⊕ ssc ⊕ stackex ⊕ stat-mech ⊕ stat-power ⊕ state-of-art ⊕ static-dynamic ⊕ stats ⊕ status ⊕ stochastic-processes ⊕ stories ⊕ strategy ⊕ straussian ⊕ stream ⊕ street-fighting ⊕ stress ⊕ structure ⊕ study ⊕ stylized-facts ⊕ subculture ⊕ subjective-objective ⊕ summary ⊕ supply-demand ⊕ survey ⊕ symmetry ⊕ system-design ⊕ systematic-ad-hoc ⊕ systems ⊕ tcs ⊕ teaching ⊕ tech ⊕ technology ⊕ techtariat ⊕ telos-atelos ⊕ temperature ⊕ terrorism ⊕ tetlock ⊕ the-bones ⊕ the-classics ⊕ the-devil ⊕ the-great-west-whale ⊕ the-self ⊕ the-trenches ⊕ the-world-is-just-atoms ⊕ theory-of-mind ⊕ theory-practice ⊕ theos ⊕ thermo ⊕ thick-thin ⊕ things ⊕ thinking ⊕ threat-modeling ⊕ tidbits ⊕ tightness ⊕ time ⊕ time-preference ⊕ time-series ⊕ tip-of-tongue ⊕ todo ⊕ tools ⊕ top-n ⊕ traces ⊕ track-record ⊕ tradeoffs ⊕ tradition ⊕ trees ⊕ trends ⊕ tribalism ⊕ trivia ⊕ trust ⊕ truth ⊕ turing ⊕ tutorial ⊕ twitter ⊕ types ⊕ ubiquity ⊕ unaffiliated ⊕ uncertainty ⊕ unintended-consequences ⊕ uniqueness ⊕ universalism-particularism ⊕ unix ⊕ urban-rural ⊕ us-them ⊕ usa ⊕ utopia-dystopia ⊕ values ⊕ vampire-squid ⊕ variance-components ⊕ video ⊕ virtualization ⊕ visual-understanding ⊕ visualization ⊕ visuo ⊕ volo-avolo ⊕ walter-scheidel ⊕ war ⊕ waves ⊕ web ⊕ west-hunter ⊕ whiggish-hegelian ⊕ white-paper ⊕ whole-partial-many ⊕ wiki ⊕ wire-guided ⊕ within-group ⊕ within-without ⊕ wonkish ⊕ working-stiff ⊕ world ⊕ wormholes ⊕ writing ⊕ X-not-about-Y ⊕ yak-shaving ⊕ yoga ⊕ yvain ⊕ zeitgeist ⊕ 🌞 ⊕ 🎓 ⊕ 🎩 ⊕ 🐸 ⊕ 👽 ⊕ 🔬 ⊕ 🖥 ⊕ 🤖 ⊕ 🦀 ⊕Copy this bookmark: