exponential function - Feynman's Trick for Approximating $e^x$ - Mathematics Stack Exchange

6 weeks ago by nhaliday

1. e^2.3 ~ 10

2. e^.7 ~ 2

3. e^x ~ 1+x

e = 2.71828...

errors (absolute, relative):

1. +0.0258, 0.26%

2. -0.0138, -0.68%

3. 1 + x approximates e^x on [-.3, .3] with absolute error < .05, and relative error < 5.6% (3.7% for [0, .3]).
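The three rules combine into a mental-math procedure: pull out as many factors of e^2.3 ≈ 10 and e^0.7 ≈ 2 as you can, then apply 1 + x to the small remainder. A rough Python sketch of this (the function name and decomposition via `divmod` are my own framing, not Feynman's):

```python
import math

def feynman_exp(x):
    """Mentally approximate e^x: factor out e^2.3 ~ 10 and e^0.7 ~ 2,
    then use e^r ~ 1 + r on the small leftover exponent."""
    tens, r = divmod(x, 2.3)   # each 2.3 in the exponent is a factor of ~10
    twos, r = divmod(r, 0.7)   # each 0.7 is a factor of ~2
    return 10**tens * 2**twos * (1 + r)

for x in [0.2, 1.0, 2.3, 5.0]:
    print(f"e^{x} ~ {feynman_exp(x):.3f} (exact {math.exp(x):.3f})")
```

The leftover r lands in [0, 0.7), so the 1 + r step dominates the error; accuracy is a few percent, in line with the error figures above.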

nibble
q-n-a
overflow
math
feynman
giants
mental-math
calculation
multiplicative
AMT
identity
objektbuch
explanation
howto
estimate
street-fighting
stories
approximation
data
trivia
nitty-gritty

CakeML

august 2019 by nhaliday

some interesting job openings in Sydney listed here

programming
pls
plt
functional
ocaml-sml
formal-methods
rigor
compilers
types
numerics
accuracy
estimate
research-program
homepage
anglo
jobs
tech
cool

Anti-hash test. - Codeforces

august 2019 by nhaliday

- Thue-Morse sequence

- nice paper: http://www.mii.lt/olympiads_in_informatics/pdf/INFOL119.pdf

In general, polynomial string hashing is a useful technique in construction of efficient string algorithms. One simply needs to remember to carefully select the modulus M and the variable of the polynomial p depending on the application. A good rule of thumb is to pick both values as prime numbers with M as large as possible so that no integer overflow occurs and p being at least the size of the alphabet.

2.2. Upper Bound on M

[stuff about 32- and 64-bit integers]

2.3. Lower Bound on M

On the other side M is bounded due to the well-known birthday paradox: if we consider a collection of m keys with m ≥ 1.2√M then the chance of a collision to occur within this collection is at least 50% (assuming that the distribution of fingerprints is close to uniform on the set of all strings). Thus if the birthday paradox applies then one needs to choose M = ω(m^2) to have a fair chance to avoid a collision. However, one should note that the birthday paradox does not always apply. As a benchmark consider the following two problems.
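The 1.2√M threshold is easy to sanity-check with the standard birthday approximation P(collision) ≈ 1 − exp(−m(m−1)/2M). For a typical competitive-programming modulus:

```python
import math

M = 10**9 + 7                  # a common hashing modulus
m = int(1.2 * math.sqrt(M))    # ~38,000 keys
p = 1 - math.exp(-m * (m - 1) / (2 * M))
print(f"{m} keys -> collision probability ~ {p:.3f}")  # ~0.51
```

So with a single ~10^9 modulus, a few tens of thousands of strings already make a collision more likely than not, which is exactly the adversarial angle the anti-hash tests exploit.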

I generally prefer to use Schwartz-Zippel to reason about collision probabilities w/ this kind of thing, eg, https://people.eecs.berkeley.edu/~sinclair/cs271/n3.pdf.

A good way to get more accurate results: just use multiple primes and the Chinese remainder theorem to get as large an M as you need w/o going beyond 64-bit arithmetic.
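A minimal sketch of that advice: hash the string under two different prime moduli and treat the pair as a single fingerprint, which by CRT behaves like one hash modulo M1·M2 ≈ 10^18. The particular primes and base here are illustrative choices, not prescribed by the post:

```python
M1, M2 = 10**9 + 7, 998_244_353   # two primes; the pair ~ one modulus M1*M2
P = 131                            # base: prime, at least the alphabet size

def poly_hash(s, p, mod):
    """Polynomial hash of s: s[0]*p^(n-1) + ... + s[n-1], reduced mod `mod`."""
    h = 0
    for ch in s:
        h = (h * p + ord(ch)) % mod
    return h

def fingerprint(s):
    # equal pairs <=> equal hashes mod M1*M2, by the Chinese remainder theorem
    return (poly_hash(s, P, M1), poly_hash(s, P, M2))
```

All intermediate values stay below 64 bits per component, so the same idea ports directly to C++ with `unsigned long long`.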

more on this: https://codeforces.com/blog/entry/60442

oly
oly-programming
gotchas
howto
hashing
algorithms
strings
random
best-practices
counterexample
multi
pdf
papers
nibble
examples
fields
polynomials
lecture-notes
yoga
probability
estimate
magnitude
hacker
adversarial
CAS
lattice
discrete

galois theory - Existence of irreducible polynomial of arbitrary degree over finite field without use of primitive element theorem? - Mathematics Stack Exchange

nibble q-n-a overflow math math.CA algebra multiplicative tidbits proofs existence pigeonhole-markov estimate fields identity measure

july 2019 by nhaliday

Infographics: Operation Costs in CPU Clock Cycles - IT Hare on Soft.ware

july 2019 by nhaliday

covers arithmetic, branches, memory reads/writes, function calls and dynamic polymorphism (virtual function calls), memory allocation, concurrency/OS ops, exceptions

division is very expensive (moreso than multiplication), exceptions are crazy expensive

https://latkin.org/blog/2014/11/09/a-simple-benchmark-of-various-math-operations/

https://stackoverflow.com/questions/15745819/why-is-division-more-expensive-than-multiplication

https://stackoverflow.com/questions/2550281/floating-point-vs-integer-calculations-on-modern-hardware

https://stackoverflow.com/questions/1146455/whats-the-relative-speed-of-floating-point-add-vs-floating-point-multiply

some nice tricks for avoiding division when calculating modulo p:

https://www.nayuki.io/page/barrett-reduction-algorithm

https://www.nayuki.io/page/montgomery-reduction-algorithm
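The Barrett idea in miniature: replace `x % n` with a multiply and a shift against a precomputed reciprocal, plus a cheap correction. This Python sketch follows the structure described on the linked pages but is a simplified illustration, not the exact algorithm from either:

```python
class Barrett:
    """Compute x mod n using only multiplication and shifts at reduction
    time; the single true division happens once, at setup."""

    def __init__(self, n):
        self.n = n
        self.k = 2 * n.bit_length()
        self.m = (1 << self.k) // n          # precomputed floor(2^k / n)

    def reduce(self, x):                     # valid for 0 <= x < n*n
        q = (x * self.m) >> self.k           # estimate of x // n, never too big
        r = x - q * self.n
        while r >= self.n:                   # at most a couple of corrections
            r -= self.n
        return r
```

Since m ≤ 2^k / n, the estimate q never exceeds x // n, so r is non-negative and the loop runs at most a time or two; in a compiled language with fixed-width integers this is where the real speedup over hardware division shows up.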

techtariat
nitty-gritty
objektbuch
data
comparison
list
cost-benefit
performance
time
programming
hardware
IEEE
analysis
links
sci-comp
numerics
types
pls
additive
multiplicative
systems
c(pp)
os
oop
computer-memory
error
error-handling
visualization
concurrency
caching
multi
chart
pro-rata
street-fighting
estimate
tricks
hacker
exposition
yoga
levers
static-dynamic
metal-to-virtual
latency-throughput

Laurence Tratt: What Challenges and Trade-Offs do Optimising Compilers Face?

july 2019 by nhaliday

Summary

It's important to be realistic: most people don't care about program performance most of the time. Modern computers are so fast that most programs run fast enough even with very slow language implementations. In that sense, I agree with Daniel's premise: optimising compilers are often unimportant. But “often” is often unsatisfying, as it is here. Users find themselves transitioning from not caring at all about performance to suddenly really caring, often in the space of a single day.

This, to me, is where optimising compilers come into their own: they mean that even fewer people need care about program performance. And I don't mean that they get us from, say, 98 to 99 people out of 100 not needing to care: it's probably more like going from 80 to 99 people out of 100 not needing to care. This is, I suspect, more significant than it seems: it means that many people can go through an entire career without worrying about performance. Martin Berger reminded me of A N Whitehead’s wonderful line that “civilization advances by extending the number of important operations which we can perform without thinking about them” and this seems a classic example of that at work. Even better, optimising compilers are widely tested and thus generally much more reliable than the equivalent optimisations performed manually.

But I think that those of us who work on optimising compilers need to be honest with ourselves, and with users, about what performance improvement one can expect to see on a typical program. We have a tendency to pick the maximum possible improvement and talk about it as if it's the mean, when there's often a huge difference between the two. There are many good reasons for that gap, and I hope in this blog post I've at least made you think about some of the challenges and trade-offs that optimising compilers are subject to.

[1]

Most readers will be familiar with Knuth’s quip that “premature optimisation is the root of all evil.” However, I doubt that any of us have any real idea what proportion of time is spent in the average part of the average program. In such cases, I tend to assume that Pareto’s principle won't be too far wrong (i.e. that 80% of execution time is spent in 20% of code). In 1971 a study by Knuth and others of Fortran programs found that 50% of execution time was spent in 4% of code. I don't know of modern equivalents of this study, and for them to be truly useful, they'd have to be rather big. If anyone knows of something along these lines, please let me know!

techtariat
programming
compilers
performance
tradeoffs
cost-benefit
engineering
yak-shaving
pareto
plt
c(pp)
rust
golang
trivia
data
objektbuch
street-fighting
estimate
distribution
pro-rata

The Existential Risk of Math Errors - Gwern.net

july 2019 by nhaliday

How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. But we can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)

2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof totally wrong and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?

“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”

- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":

https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs

https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs

]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:

https://mathoverflow.net/questions/11517/computer-algebra-errors

I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc x = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ sinc x dx = π/2

They then found the following results:

...

So of course when they got:

∫_0^∞ sinc x · sinc(x/3) · sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469 / 935615849440640907310521750000) π

hmm:

Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
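The punchline fedja alludes to is the Borwein-integral phenomenon: the pattern ∫_0^∞ ∏ sinc(x/(2k+1)) dx = π/2 holds exactly as long as 1/3 + 1/5 + … stays below 1, and 1/15 is the first term that pushes the sum over. Both that, and the fact that the scary coefficient really is a hair under 1/2, can be checked with exact rational arithmetic:

```python
from fractions import Fraction

# the pattern survives while 1/3 + 1/5 + ... + 1/(2k+1) < 1
partial = sum(Fraction(1, d) for d in range(3, 14, 2))
print(partial < 1)                      # True: exact pi/2 through sinc(x/13)
print(partial + Fraction(1, 15) > 1)    # True: the pattern breaks at sinc(x/15)

# the "bug": the reported coefficient is just barely less than 1/2
r = Fraction(467807924713440738696537864469,
             935615849440640907310521750000)
print(r < Fraction(1, 2))               # True -- the deviation is ~1e-11
```

So the algebra package was right; it is the human pattern-matching from the first few integrals that was wrong.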

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb. 16th, 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:

https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/

https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods

Update: measured effort

In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/

You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]

ratty
gwern
analysis
essay
realness
truth
correctness
reason
philosophy
math
proofs
formal-methods
cs
programming
engineering
worse-is-better/the-right-thing
intuition
giants
old-anglo
error
street-fighting
heuristic
zooming
risk
threat-modeling
software
lens
logic
inference
physics
differential
geometry
estimate
distribution
robust
speculation
nonlinearity
cost-benefit
convexity-curvature
measure
scale
trivia
cocktail
history
early-modern
europe
math.CA
rigor
news
org:mag
org:sci
miri-cfar
pdf
thesis
comparison
examples
org:junk
q-n-a
stackex
pragmatic
tradeoffs
cracker-prog
techtariat
invariance
DSL
chart
ecosystem
grokkability
heavyweights
CAS
static-dynamic
lower-bounds
complexity
tcs
open-problems
big-surf
ideas
certificates-recognition
proof-systems
PCP
mediterranean
SDP
meta:prediction
epistemic
questions
guessing
distributed
overflow
nibble
soft-question
track-record
big-list
hmm
frontier
state-of-art
move-fast-(and-break-things)
grokkability-clarity
technical-writing
trust

Why is Software Engineering so difficult? - James Miller

may 2019 by nhaliday

basic message: No silver bullet!

most interesting nuggets:

Scale and Complexity

- Windows 7 > 50 million LOC

Expect a staggering number of bugs.

Bugs?

- Well-written C and C++ code contains some 5 to 10 errors per 100 LOC after a clean compile, but before inspection and testing.

- At a 5% rate any 50 MLOC program will start off with some 2.5 million bugs.
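The slide's pro-rata arithmetic is easy to sanity-check; a trivial sketch using only the figures quoted above (5-10 errors per 100 LOC, 50 MLOC):

```python
# Scale the quoted defect rate (5-10 errors per 100 LOC after a clean
# compile) up to a Windows-7-sized codebase of 50 million lines.
loc = 50_000_000
low = loc * 5 / 100    # lower bound: 5 errors per 100 LOC
high = loc * 10 / 100  # upper bound: 10 errors per 100 LOC
print(f"{low:,.0f} to {high:,.0f} initial bugs")  # 2,500,000 to 5,000,000
```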

Bug removal

- Testing typically exercises only half the code.

Better bug removal?

- There are better ways to do testing that do produce fantastic programs.

- Are we sure about this fact?

* No, it's only an opinion!

* In general Software Engineering has ....

NO FACTS!

So why not do this?

- The costs are unbelievable.

- It’s not unusual for the qualification process to produce a half page of documentation for each line of code.

pdf
slides
engineering
nitty-gritty
programming
best-practices
roots
comparison
cost-benefit
software
systematic-ad-hoc
structure
error
frontier
debugging
checking
formal-methods
context
detail-architecture
intricacy
big-picture
system-design
correctness
scale
scaling-tech
shipping
money
data
stylized-facts
street-fighting
objektbuch
pro-rata
estimate
pessimism
degrees-of-freedom
volo-avolo
no-go
things
thinking
summary
quality
density
methodology

quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange

april 2019 by nhaliday

Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art, goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."

(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.

edit: cf https://pinboard.in/u:nhaliday/b:0a6eb68166e6]

q-n-a
stackex
programming
engineering
nitty-gritty
error
flux-stasis
books
recommendations
software
checking
debugging
pro-rata
pls
comparison
parsimony
measure
data
objektbuch
speculation
accuracy
density
correctness
estimate
street-fighting
multi
quality
stylized-facts
methodology

An adaptability limit to climate change due to heat stress

august 2018 by nhaliday

Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.
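The paper's threshold is stated in terms of the wet-bulb temperature TW; for intuition, here is a rough way to compute TW from air temperature and relative humidity using Stull's (2011) empirical fit. This formula is not from the paper itself - it is an illustrative sketch, valid roughly for RH between 5% and 99% at sea-level pressure:

```python
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature TW (deg C) from air temperature t_c
    (deg C) and relative humidity rh_pct (%), via Stull's (2011) empirical fit."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Even a very humid 35 C day today stays well under the fatal TW = 35 C
# threshold, consistent with the abstract's claim that TW never exceeds
# ~31 C in the current climate.
print(round(wet_bulb_stull(35.0, 60.0), 1))
```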

Trajectories of the Earth System in the Anthropocene: http://www.pnas.org/content/early/2018/07/31/1810141115

We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be.

study
org:nat
environment
climate-change
humanity
existence
risk
futurism
estimate
physics
thermo
prediction
temperature
nature
walls
civilization
flexibility
rigidity
embodied
multi
manifolds
plots
equilibrium
phase-transition
oscillation
comparison
complex-systems
earth

Theory of Self-Reproducing Automata - John von Neumann

april 2018 by nhaliday

Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time

- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing

pdf
article
papers
essay
nibble
math
cs
computation
bio
neuro
neuro-nitgrit
scale
magnitude
comparison
acm
von-neumann
giants
thermo
phys-energy
speed
performance
time
density
frequency
hardware
ems
efficiency
dirty-hands
street-fighting
fermi
estimate
retention
physics
interdisciplinary
multi
wiki
links
people
🔬
atoms
duplication
iteration-recursion
turing
complexity
measure
nature
technology
complex-systems
bits
information-theory
circuits
robust
structure
composition-decomposition
evolution
mutation
axioms
analogy
thinking
input-output
hi-order-bits
coding-theory
flexibility
rigidity
automata-languages

Mind uploading - Wikipedia

concept wiki reference article hanson ratty ems futurism ai technology speedometer frontier simulation death prediction estimate time computation scale magnitude plots neuro neuro-nitgrit complexity coarse-fine brain-scan accuracy skunkworks bostrom enhancement ideas singularity eden-heaven speed risk ai-control paradox competition arms unintended-consequences offense-defense trust duty tribalism us-them volo-avolo strategy hardware software mystic religion theos hmm dennett within-without philosophy deep-materialism complex-systems structure reduction detail-architecture analytical-holistic approximation cs trends threat-modeling

march 2018 by nhaliday


Existential Risks: Analyzing Human Extinction Scenarios

march 2018 by nhaliday

https://twitter.com/robinhanson/status/981291048965087232

https://archive.is/dUTD5

Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?

Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408

https://archive.is/RpygO

How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/

An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. We know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
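Bostrom's fish-net example is easy to simulate; a minimal Monte Carlo sketch (all numbers are illustrative, not from the interview):

```python
import random

random.seed(0)
# A pond of 10,000 fish with exponentially distributed lengths (mean 4 inches).
pond = [random.expovariate(1 / 4.0) for _ in range(10_000)]
true_max = max(pond)

# The instrument introduces the selection effect: the net only holds fish
# up to 3 inches, so the catch is drawn from a truncated subset of the pond.
net_limit = 3.0
catch = [f for f in random.sample(pond, 1_000) if f <= net_limit]
observed_max = max(catch)

# The observed maximum is capped by the net, so it badly understates
# the true maximum - inferring "no fish much over 3 inches" would be wrong.
print(observed_max <= net_limit < true_max)
```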

bostrom
ratty
miri-cfar
skunkworks
philosophy
org:junk
list
top-n
frontier
speedometer
risk
futurism
local-global
scale
death
nihil
technology
simulation
anthropic
nuclear
deterrence
environment
climate-change
arms
competition
ai
ai-control
genetics
genomics
biotech
parasites-microbiome
disease
offense-defense
physics
tails
network-structure
epidemiology
space
geoengineering
dysgenics
ems
authoritarianism
government
values
formal-values
moloch
enhancement
property-rights
coordination
cooperate-defect
flux-stasis
ideas
prediction
speculation
humanity
singularity
existence
cybernetics
study
article
letters
eden-heaven
gedanken
multi
twitter
social
discussion
backup
hanson
metrics
optimization
time
long-short-run
janus
telos-atelos
poll
forms-instances
threat-modeling
selection
interview
expert-experience
malthus
volo-avolo
intel
leviathan
drugs
pharma
data
estimate
nature
longevity
expansionism
homo-hetero
utopia-dystopia

Stein's example - Wikipedia

february 2018 by nhaliday

Stein's example (or phenomenon or paradox), in decision theory and estimation theory, is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955.[1]

An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in channel estimation in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.

...

Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the James–Stein estimator, which works by starting at X and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of X from that point.
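The dominance is easy to demonstrate numerically; a minimal simulation of the plain James–Stein estimator (shrinking toward the origin, unit-variance Gaussian noise; the true means here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 10, 20_000
theta = rng.normal(size=d)                 # true parameters, fixed across trials
x = theta + rng.normal(size=(trials, d))   # one unit-variance observation per trial

# James-Stein: shrink each observation X toward the origin by 1 - (d-2)/||X||^2.
shrink = 1 - (d - 2) / np.sum(x ** 2, axis=1, keepdims=True)
js = shrink * x

# Average squared error of the ordinary estimator (X itself) vs. James-Stein.
mse_ordinary = np.mean(np.sum((x - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
print(mse_js < mse_ordinary)  # True: JS has lower combined risk for d >= 3
```

In practice the positive-part variant (clamping the shrinkage factor at zero) is preferred, since the plain factor can go negative when ||X||² is small.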

nibble
concept
levers
wiki
reference
acm
stats
probability
decision-theory
estimate
distribution
atoms
An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in channel estimation in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.

...

Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the James–Stein estimator, which works by starting at X and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of X from that point.

february 2018 by nhaliday

Information Processing: US Needs a National AI Strategy: A Sputnik Moment?

february 2018 by nhaliday

FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/

Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/

https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf

Deciphering China’s AI Dream

The context, components, capabilities, and consequences of

China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/

Brussels is failing to grasp threats and opportunities of artificial intelligence.

By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104

https://archive.is/m3Njh

US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha

https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html

https://twitter.com/mr_scientism/status/981685030417326080

https://archive.is/3wbHm

AI-risk was a mistake.

hsu
scitariat
commentary
video
presentation
comparison
usa
china
asia
sinosphere
frontier
technology
science
ai
speedometer
innovation
google
barons
deepgoog
stories
white-paper
strategy
migration
iran
human-capital
corporation
creative
alien-character
military
human-ml
nationalism-globalism
security
investing
government
games
deterrence
defense
nuclear
arms
competition
risk
ai-control
musk
optimism
multi
news
org:mag
europe
EU
80000-hours
effective-altruism
proposal
article
realness
offense-defense
war
biotech
altruism
language
foreign-lang
philosophy
the-great-west-whale
enhancement
foreign-policy
geopolitics
anglo
jobs
career
planning
hmm
travel
charity
tech
intel
media
teaching
tutoring
russia
india
miri-cfar
pdf
automation
class
labor
polisci
society
trust
n-factor
corruption
leviathan
ethics
authoritarianism
individualism-collectivism
revolution
economics
inequality
civic
law
regulation
data
scale
pro-rata
capital
zero-positive-sum
cooperate-defect
distribution
time-series
tre

https://archive.is/m3Njh

US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha

https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html

https://twitter.com/mr_scientism/status/981685030417326080

https://archive.is/3wbHm

AI-risk was a mistake.

february 2018 by nhaliday

Sex, Drugs, and Bitcoin: How Much Illegal Activity Is Financed Through Cryptocurrencies? by Sean Foley, Jonathan R. Karlsen, Tālis J. Putniņš :: SSRN

february 2018 by nhaliday

Cryptocurrencies are among the largest unregulated markets in the world. We find that approximately one-quarter of bitcoin users and one-half of bitcoin transactions are associated with illegal activity. Around $72 billion of illegal activity per year involves bitcoin, which is close to the scale of the US and European markets for illegal drugs. The illegal share of bitcoin activity declines with mainstream interest in bitcoin and with the emergence of more opaque cryptocurrencies. The techniques developed in this paper have applications in cryptocurrency surveillance. Our findings suggest that cryptocurrencies are transforming the way black markets operate by enabling “black e-commerce.”

study
economics
law
leviathan
bitcoin
cryptocurrency
crypto
impetus
scale
markets
civil-liberty
randy-ayndy
crime
criminology
measurement
estimate
pro-rata
money
monetary-fiscal
crypto-anarchy
drugs
internet
tradecraft
opsec
security
intel
february 2018 by nhaliday

Team *Decorations Until Epiphany* on Twitter: "@RoundSqrCupola maybe just C https://t.co/SFPXb3qrAE"

december 2017 by nhaliday

https://archive.is/k0fsS

Remember ‘BRICs’? Now it’s just ICs.

--

maybe just C

Solow predicts that if 2 countries have the same TFP, then the poorer nation should grow faster. But poorer India grows more slowly than China.

Solow thinking leads one to suspect India has substantially lower TFP.

Recent growth is great news, but alas 5 years isn't the long run!

FWIW under Solow conditional convergence assumptions--historically robust--the fact that a country as poor as India grows only a few % faster than the world average is a sign they'll end up poorer than S Europe.

see his spreadsheet here: http://mason.gmu.edu/~gjonesb/SolowForecast.xlsx
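
The conditional-convergence arithmetic behind this kind of forecast can be sketched in a few lines. The 2%/yr convergence speed and the income/growth numbers below are illustrative assumptions, not figures from Jones's spreadsheet:

```python
import math

# Conditional convergence (Solow): growth gap vs. the world ~ lam * ln(y_star / y),
# where y is income relative to the frontier and y_star is the country's
# own steady-state level. Invert to back out the implied steady state.
lam = 0.02          # canonical ~2%/yr convergence speed (assumption)
y_now = 0.10        # country at ~10% of frontier income (assumption)
growth_gap = 0.03   # grows ~3 pp/yr faster than the world average (assumption)

y_star = y_now * math.exp(growth_gap / lam)
print(y_star)  # ~0.45: a few pp of extra growth implies a steady state
               # well below frontier income
```

Plugging in a larger gap (say 6 pp/yr) gives y_star above 2.0, i.e. convergence toward richer-than-frontier levels; that asymmetry is why only modest outperformance by a very poor country is bad news under Solow.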

spearhead
econotariat
garett-jones
unaffiliated
twitter
social
discussion
india
asia
china
economics
macro
growth-econ
econ-metrics
wealth
wealth-of-nations
convergence
world
developing-world
trends
time-series
cjones-like
prediction
multi
backup
the-bones
long-short-run
europe
mediterranean
comparison
simulation
econ-productivity
great-powers
thucydides
broad-econ
pop-diff
microfoundations
🎩
marginal
hive-mind
rindermann-thompson
hari-seldon
tools
calculator
estimate

december 2017 by nhaliday

Reflections on Random Kitchen Sinks – arg min blog

acmtariat ben-recht org:bleg nibble talks video reflection success ranking machine-learning acm papers liner-notes research stories random kernels approximation frontier rigor michael-jordan estimate summary tightness linear-algebra replication science the-trenches realness deep-learning model-class concept exposition tricks gradient-descent optimization composition-decomposition parsimony examples reduction systematic-ad-hoc numerics intricacy robust perturbation empirical rounding

december 2017 by nhaliday


galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies? - Astronomy Stack Exchange

december 2017 by nhaliday

Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.

The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).
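
A minimal sketch of the fitting step described above, with synthetic fluxes standing in for real observations. The band wavelengths, dust temperature, and scale factor are all made up; a real estimate would also fold in a frequency-dependent emissivity and a dust opacity to turn the fitted amplitude into a mass:

```python
import numpy as np
from scipy.optimize import curve_fit

H, K, C = 6.626e-34, 1.381e-23, 3.0e8  # Planck, Boltzmann, speed of light (SI)

def planck(nu, T):
    # Planck function B_nu(T)
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def model(nu, scale, T):
    # 'scale' absorbs dust mass, opacity, and distance; fitting it plus T
    # to several IR points is the procedure the answer sketches
    return scale * planck(nu, T)

# synthetic "observations" in three far-IR bands (e.g. 160/250/350 um)
nu = C / np.array([160e-6, 250e-6, 350e-6])
true_T = 20.0
flux = model(nu, 1e12, true_T)

(scale_fit, T_fit), _ = curve_fit(model, nu, flux, p0=[1e12, 15.0])
print(T_fit)  # recovers the dust temperature; mass then follows from scale_fit
```

With a single band there is one equation and two unknowns, which is exactly the amplitude/temperature degeneracy the answer mentions.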

nibble
q-n-a
overflow
space
measurement
measure
estimate
physics
electromag
visuo
methodology

december 2017 by nhaliday

Negative Results in Empirical Soft Eng - Journal Special Issue

techtariat programming engineering pragmatic software tech list links study summary commentary carmack empirical evidence-based shipping null-result replication expert-experience ability-competence metrics correlation degrees-of-freedom devtools formal-methods best-practices 🖥 working-stiff measure estimate

november 2017 by nhaliday


Lessons From Bar Fight Litigation | Ordinary Times

reflection summary stories data analysis demographics gender class distribution race age-generation peace-violence embodied embodied-pack fighting law arms money track-record impetus chart ethanol sex street-fighting objektbuch estimate measurement accuracy gender-diff regularizer anthropology trivia cocktail

october 2017 by nhaliday


Tax Evasion and Inequality

october 2017 by nhaliday

This paper attempts to estimate the size and distribution of tax evasion in rich countries. We combine stratified random audits—the key source used to study tax evasion so far—with new micro-data leaked from two large offshore financial institutions, HSBC Switzerland (“Swiss leaks”) and Mossack Fonseca (“Panama Papers”). We match these data to population-wide wealth records in Norway, Sweden, and Denmark. We find that tax evasion rises sharply with wealth, a phenomenon that random audits fail to capture. On average about 3% of personal taxes are evaded in Scandinavia, but this figure rises to about 30% in the top 0.01% of the wealth distribution, a group that includes households with more than $40 million in net wealth. A simple model of the supply of tax evasion services can explain why evasion rises steeply with wealth. Taking tax evasion into account increases the rise in inequality seen in tax data since the 1970s markedly, highlighting the need to move beyond tax data to capture income and wealth at the top, even in countries where tax compliance is generally high. We also find that after reducing tax evasion—by using tax amnesties—tax evaders do not legally avoid taxes more. This result suggests that fighting tax evasion can be an effective way to collect more tax revenue from the ultra-wealthy.

Figure 1

America’s unreported economy: measuring the size, growth and determinants of income tax evasion in the U.S.: https://link.springer.com/article/10.1007/s10611-011-9346-x

This study empirically investigates the extent of noncompliance with the tax code and examines the determinants of federal income tax evasion in the U.S. Employing a refined version of Feige’s (Staff Papers, International Monetary Fund 33(4):768–881, 1986, 1989) General Currency Ratio (GCR) model to estimate a time series of unreported income as our measure of tax evasion, we find that 18–23% of total reportable income may not properly be reported to the IRS. This gives rise to a 2009 “tax gap” in the range of $390–$540 billion. As regards the determinants of tax noncompliance, we find that federal income tax evasion is an increasing function of the average effective federal income tax rate, the unemployment rate, the nominal interest rate, and per capita real GDP, and a decreasing function of the IRS audit rate. Despite important refinements of the traditional currency ratio approach for estimating the aggregate size and growth of unreported economies, we conclude that the sensitivity of the results to different benchmarks, imperfect data sources and alternative specifying assumptions precludes obtaining results of sufficient accuracy and reliability to serve as effective policy guides.

pdf
study
economics
micro
evidence-based
data
europe
nordic
scale
class
compensation
money
monetary-fiscal
political-econ
redistribution
taxes
madisonian
inequality
history
mostly-modern
natural-experiment
empirical
🎩
cocktail
correlation
models
supply-demand
GT-101
crooked
elite
vampire-squid
nationalism-globalism
multi
pro-rata
usa
time-series
trends
world-war
cold-war
government
todo
planning
long-term
trivia
law
crime
criminology
estimate
speculation
measurement
labor
macro
econ-metrics
wealth
stock-flow
time
density
criminal-justice
frequency
dark-arts
traces
evidence

october 2017 by nhaliday

Caught in the act | West Hunter

september 2017 by nhaliday

The fossil record is sparse. Let me try to explain that. We have at most a few hundred Neanderthal skeletons, most in pretty poor shape. How many Neanderthals ever lived? I think their population varied in size quite a bit – lowest during glacial maxima, probably highest in interglacials. Their degree of genetic diversity suggests an effective population size of ~1000, but that would be dominated by the low points (harmonic average). So let’s say 50,000 on average, over their whole range (Europe, central Asia, the Levant, perhaps more). Say they were around for 300,000 years, with a generation time of 30 years – 10,000 generations, for a total of five hundred million Neanderthals over all time. So one in a million Neanderthals ends up in a museum: one every 20 generations. Low time resolution!

So if anatomically modern humans rapidly wiped out Neanderthals, we probably couldn’t tell. In much the same way, you don’t expect to find the remains of many dinosaurs killed by the Cretaceous meteor impact (at most one millionth of one generation, right?), or of Columbian mammoths killed by a wave of Amerindian hunters. Sometimes invaders leave a bigger footprint: a bunch of cities burning down with no rebuilding tells you something. But even when you know that population A completely replaced population B, it can be hard to prove just how it happened. After all, population A could have all committed suicide just before B showed up. Stranger things have happened – but not often.
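
The arithmetic in the first paragraph, spelled out (numbers straight from the post; "a few hundred" skeletons rounded to 500):

```python
avg_pop = 50_000               # average census size over the whole range
years = 300_000
gen_time = 30
generations = years // gen_time         # 10,000 generations
total_ever = avg_pop * generations      # Neanderthals who ever lived
skeletons = 500                         # "a few hundred" in museums, roughly

print(total_ever)                       # 500,000,000
print(total_ever // skeletons)          # one in a million ends up in a museum
print(generations / skeletons)          # ~one preserved individual per 20 generations
```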

west-hunter
scitariat
discussion
ideas
data
objektbuch
scale
magnitude
estimate
population
sapiens
archaics
archaeology
pro-rata
history
antiquity
methodology
volo-avolo
measurement
pop-structure
density
time
frequency
apollonian-dionysian
traces
evidence

september 2017 by nhaliday

Atrocity statistics from the Roman Era

september 2017 by nhaliday

Christian Martyrs

Gibbon, Decline & Fall v.2 ch.XVI: < 2,000 k. under Roman persecution.

Ludwig Hertling ("Die Zahl der Märtyrer bis 313", 1944) estimated 100,000 Christians killed between 30 and 313 CE. (cited -- unfavorably -- by David Henige, Numbers From Nowhere, 1998)

Catholic Encyclopedia, "Martyr": number of Christian martyrs under the Romans unknown, unknowable. Origen says not many. Eusebius says thousands.

...

General population decline during The Fall of Rome: 7,000,000

- Colin McEvedy, The New Penguin Atlas of Medieval History (1992)

- From 2nd Century CE to 4th Century CE: Empire's population declined from 45M to 36M [i.e. 9M]

- From 400 CE to 600 CE: Empire's population declined by 20% [i.e. 7.2M]

- Paul Bairoch, Cities and economic development: from the dawn of history to the present, p.111

- "The population of Europe except Russia, then, having apparently reached a high point of some 40-55 million people by the start of the third century [ca.200 C.E.], seems to have fallen by the year 500 to about 30-40 million, bottoming out at about 20-35 million around 600." [i.e. ca.20M]

- Francois Crouzet, A History of the European Economy, 1000-2000 (University Press of Virginia: 2001) p.1.

- "The population of Europe (west of the Urals) in c. AD 200 has been estimated at 36 million; by 600, it had fallen to 26 million; another estimate (excluding ‘Russia’) gives a more drastic fall, from 44 to 22 million." [i.e. 10M or 22M]

also:

The geometric mean of these two extremes would come to 4½ per day, which is a credible daily rate for the really bad years.

why geometric mean? can you get it as the MLE given min{X1, ..., Xn} and max{X1, ..., Xn} for {X_i} iid Poissons? some kinda limit? think it might just be a rule of thumb.

yeah, it's a rule of thumb. found it in his book (epub).
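
The rule of thumb is just the midpoint on a log scale. The bounds below are hypothetical (the post's actual extremes aren't in this excerpt), picked only to show how a wide multiplicative range collapses to a single rate:

```python
import math

def geometric_mean(lo, hi):
    # midpoint on a log scale: the usual point estimate for a quantity
    # known only to within a wide multiplicative range
    return math.sqrt(lo * hi)

# hypothetical extremes: somewhere between 0.5 and 40 deaths per day
print(geometric_mean(0.5, 40))   # ~4.47, vs. an arithmetic mean of 20.25
```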

org:junk
data
let-me-see
scale
history
iron-age
mediterranean
the-classics
death
nihil
conquest-empire
war
peace-violence
gibbon
trivia
multi
todo
AMT
expectancy
heuristic
stats
ML-MAP-E
data-science
estimate
magnitude
population
demographics
database
list
religion
christianity
leviathan

september 2017 by nhaliday

Medicine as a pseudoscience | West Hunter

august 2017 by nhaliday

The idea that venesection was a good thing, or at least not so bad, on the grounds that one in a few hundred people have hemochromatosis (in Northern Europe) reminds me of the people who don’t wear a seatbelt, since it would keep them from being thrown out of their convertible into a waiting haystack, complete with nubile farmer’s daughter. Daughters. It could happen. But it’s not the way to bet.

Back in the good old days, Charles II, age 53, had a fit one Sunday evening, while fondling two of his mistresses.

Monday they bled him (cupping and scarifying) of eight ounces of blood. Followed by an antimony emetic, vitriol in peony water, purgative pills, and a clyster. Followed by another clyster after two hours. Then syrup of blackthorn, more antimony, and rock salt. Next, more laxatives, white hellebore root up the nostrils. Powdered cowslip flowers. More purgatives. Then Spanish Fly. They shaved his head and stuck blistering plasters all over it, plastered the soles of his feet with tar and pigeon-dung, then said good-night.

...

Friday. The king was worse. He tells them not to let poor Nelly starve. They try the Oriental Bezoar Stone, and more bleeding. Dies at noon.

Most people didn’t suffer this kind of problem with doctors, since they never saw one. Charles had six. Now Bach and Handel saw the same eye surgeon, John Taylor – who blinded both of them. Not everyone can put that on his resume!

You may wonder how medicine continued to exist, if it had a negative effect, on the whole. There’s always the placebo effect – at least there would be, if it existed. Any real placebo effect is very small: I’d guess exactly zero. But there is regression to the mean. You see the doctor when you’re feeling worse than average – and afterwards, if he doesn’t kill you outright, you’re likely to feel better. Which would have happened whether you’d seen him or not, but they didn’t often do RCTs back in the day – I think James Lind was the first (1747).
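
The selection effect he's describing is easy to simulate: give everyone a stable health component plus day-to-day noise, "send to the doctor" only those feeling much worse than average, and do nothing to them. (All parameters below are arbitrary.)

```python
import random

random.seed(0)
before, after = [], []
for _ in range(100_000):
    base = random.gauss(0, 1)             # stable component of health
    day1 = base + random.gauss(0, 1)      # health on the day you might see a doctor
    if day1 < -1.5:                       # feels much worse than average -> "sees the doctor"
        day2 = base + random.gauss(0, 1)  # next observation; the "doctor" did nothing
        before.append(day1)
        after.append(day2)

print(sum(before) / len(before))  # well below average
print(sum(after) / len(after))    # much closer to average: apparent "improvement"
                                  # with zero treatment effect
```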

Back in the late 19th century, Christian Scientists did better than others when sick, because they didn’t believe in medicine. For reasons I think mistaken, because Mary Baker Eddy rejected the reality of the entire material world, but hey, it worked. Parenthetically, what triggered all that New Age nonsense in 19th century New England? Hash?

This did not change until fairly recently. Sometime in the early 20th medicine, clinical medicine, what doctors do, hit break-even. Now we can’t do without it. I wonder if there are, or will be, other examples of such a pile of crap turning (mostly) into a real science.

good tweet: https://twitter.com/bowmanthebard/status/897146294191390720

The brilliant GP I've had for 35+ years has retired. How can I find another one who meets my requirements?

1 is overweight

2 drinks more than officially recommended amounts

3 has an amused, tolerant attitude to human failings

4 is well aware that we're all going to die anyway, & there are better or worse ways to die

5 has a healthy skeptical attitude to mainstream medical science

6 is wholly dismissive of “alternative” medicine

7 believes in evolution

8 thinks most diseases get better without intervention, & knows the dangers of false positives

9 understands the base rate fallacy

EconPapers: Was Civil War Surgery Effective?: http://econpapers.repec.org/paper/htrhcecon/444.htm

contra Greg Cochran:

To shed light on the subject, I analyze a data set created by Dr. Edmund Andrews, a Civil war surgeon with the 1st Illinois Light Artillery. Dr. Andrews’s data can be rendered into an observational data set on surgical intervention and recovery, with controls for wound location and severity. The data also admits instruments for the surgical decision. My analysis suggests that Civil War surgery was effective, and increased the probability of survival of the typical wounded soldier, with average treatment effect of 0.25-0.28.

Medical Prehistory: https://westhunt.wordpress.com/2016/03/14/medical-prehistory/

What ancient medical treatments worked?

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76878

In some very, very limited conditions, bleeding?

--

Bad for you 99% of the time.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76947

Colchicine – used to treat gout – discovered by the Ancient Greeks.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76973

Dracunculiasis (Guinea worm)

Wrap the emerging end of the worm around a stick and slowly pull it out.

(3,500 years later, this remains the standard treatment.)

https://en.wikipedia.org/wiki/Ebers_Papyrus

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76971

Some of the progress is from formal medicine, most is from civil engineering, better nutrition ( ag science and physical chemistry), less crowded housing.

Nurses vs doctors: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/

Medicine, the things that doctors do, was an ineffective pseudoscience until fairly recently. Until 1800 or so, they were wrong about almost everything. Bleeding, cupping, purging, the four humors – useless. In the 1800s, some began to realize that they were wrong, and became medical nihilists that improved outcomes by doing less. Some patients themselves came to this realization, as when Civil War casualties hid from the surgeons and had better outcomes. Sometime in the early 20th century, MDs reached break-even, and became an increasingly positive influence on human health. As Lewis Thomas said, medicine is the youngest science.

Nursing, on the other hand, has always been useful. Just making sure that a patient is warm and nourished when too sick to take care of himself has helped many survive. In fact, some of the truly crushing epidemics have been greatly exacerbated when there were too few healthy people to take care of the sick.

Nursing must be old, but it can’t have existed forever. Whenever it came into existence, it must have changed the selective forces acting on the human immune system. Before nursing, being sufficiently incapacitated would have been uniformly fatal – afterwards, immune responses that involved a period of incapacitation (with eventual recovery) could have been selectively favored.

when MDs broke even: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/#comment-58981

I’d guess the 1930s. Lewis Thomas thought that he was living through big changes. They had a working serum therapy for lobar pneumonia (antibody-based). They had many new vaccines (diphtheria in 1923, whooping cough in 1926, BCG and tetanus in 1927, yellow fever in 1935, typhus in 1937.) Vitamins had been mostly worked out. Insulin was discovered in 1921. Blood transfusions. The sulfa drugs, first broad-spectrum antibiotics, showed up in 1935.

DALYs per doctor: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/

The disability-adjusted life year (DALY) is a measure of overall disease burden – the number of years lost. I’m wondering just how much harm premodern medicine did, per doctor. How many healthy years of life did a typical doctor destroy (net) in past times?

...

It looks as if the average doctor (in Western medicine) killed a bunch of people over his career ( when contrasted with doing nothing). In the Charles Manson class.

Eventually the market saw through this illusion. Only took a couple of thousand years.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100741

That a very large part of healthcare spending is done for non-health reasons. He has a chapter on this in his new book, also check out his paper “Showing That You Care: The Evolution of Health Altruism” http://mason.gmu.edu/~rhanson/showcare.pdf

--

I ran into too much stupidity to finish the article. Hanson’s a loon. For example when he talks about the paradox of blacks being more sentenced on drug offenses than whites although they use drugs at similar rate. No paradox: guys go to the big house for dealing, not for using. Where does he live – Mars?

I had the same reaction when Hanson parroted some dipshit anthropologist arguing that the stupid things people do while drunk are due to social expectations, not really the alcohol.

Horseshit.

I don’t think that being totally unable to understand everybody around you necessarily leads to deep insights.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100744

What I’ve wondered is if there was anything that doctors did that actually was helpful and if perhaps that little bit of success helped them fool people into thinking the rest of it helped.

--

Setting bones. extracting arrows: spoon of Diocles. Colchicine for gout. Extracting the Guinea worm. Sometimes they got away with removing the stone. There must be others.

--

Quinine is relatively recent: post-1500. Obstetrical forceps also. Caesarean deliveries were almost always fatal to the mother until fairly recently.

Opium has been around for a long while : it works.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100839

If pre-modern medicine was indeed worse than useless – how do you explain no one noticing that patients who get expensive treatments are worse off than those who didn’t?

--

were worse off. People are kinda dumb – you’ve noticed?

--

My impression is that while people may be “kinda dumb”, ancient customs typically aren’t.

Even if we assume that all people who lived prior to the 19th century were too dumb to make the rational observation, wouldn’t you expect this ancient practice to be subject to selective pressure?

--

Your impression is wrong. Do you think that there is some slick reason for Carthaginians incinerating their first-born?

Theodoric of York, bloodletting: https://www.youtube.com/watch?v=yvff3TViXmY

details on blood-letting and hemochromatosis: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100746

Starting Over: https://westhunt.wordpress.com/2018/01/23/starting-over/

Looking back on it, human health would have … [more]

west-hunter
scitariat
discussion
ideas
medicine
meta:medicine
science
realness
cost-benefit
the-trenches
info-dynamics
europe
the-great-west-whale
history
iron-age
the-classics
mediterranean
medieval
early-modern
mostly-modern
🌞
harvard
aphorism
rant
healthcare
regression-to-mean
illusion
public-health
multi
usa
northeast
pre-ww2
checklists
twitter
social
albion
ability-competence
study
cliometrics
war
trivia
evidence-based
data
intervention
effect-size
revolution
speculation
sapiens
drugs
antiquity
lived-experience
list
survey
questions
housing
population
density
nutrition
wiki
embodied
immune
evolution
poast
chart
markets
civil-liberty
randy-ayndy
market-failure
impact
scale
pro-rata
estimate
street-fighting
fermi
marginal
truth
recruiting
alt-inst
academia
social-science
space
physics
interdisciplinary
ratty
lesswrong
autism
👽
subculture
hanson
people
track-record
crime
criminal-justice
criminology
race
ethanol
error
video
lol
comedy
tradition
institutions
iq
intelligence
MENA
impetus
legacy
Back in the good old days, Charles II, age 53, had a fit one Sunday evening, while fondling two of his mistresses.

Monday they bled him (cupping and scarifying) of eight ounces of blood. Followed by an antimony emetic, vitriol in peony water, purgative pills, and a clyster. Followed by another clyster after two hours. Then syrup of blackthorn, more antimony, and rock salt. Next, more laxatives, white hellebore root up the nostrils. Powdered cowslip flowers. More purgatives. Then Spanish Fly. They shaved his head and stuck blistering plasters all over it, plastered the soles of his feet with tar and pigeon-dung, then said good-night.

...

Friday. The king was worse. He tells them not to let poor Nelly starve. They try the Oriental Bezoar Stone, and more bleeding. Dies at noon.

Most people didn’t suffer this kind of problem with doctors, since they never saw one. Charles had six. Now Bach and Handel saw the same eye surgeon, John Taylor – who blinded both of them. Not everyone can put that on his resume!

You may wonder how medicine continued to exist, if it had a negative effect, on the whole. There’s always the placebo effect – at least there would be, if it existed. Any real placebo effect is very small: I’d guess exactly zero. But there is regression to the mean. You see the doctor when you’re feeling worse than average – and afterwards, if he doesn’t kill you outright, you’re likely to feel better. Which would have happened whether you’d seen him or not, but they didn’t often do RCTs back in the day – I think James Lind was the first (1747).

Back in the late 19th century, Christian Scientists did better than others when sick, because they didn’t believe in medicine. For reasons I think mistaken, because Mary Baker Eddy rejected the reality of the entire material world, but hey, it worked. Parenthetically, what triggered all that New Age nonsense in 19th century New England? Hash?

This did not change until fairly recently. Sometime in the early 20th century, medicine – clinical medicine, what doctors do – hit break-even. Now we can’t do without it. I wonder if there are, or will be, other examples of such a pile of crap turning (mostly) into a real science.

good tweet: https://twitter.com/bowmanthebard/status/897146294191390720

The brilliant GP I've had for 35+ years has retired. How can I find another one who meets my requirements?

1 is overweight

2 drinks more than officially recommended amounts

3 has an amused, tolerant attitude to human failings

4 is well aware that we're all going to die anyway, & there are better or worse ways to die

5 has a healthy skeptical attitude to mainstream medical science

6 is wholly dismissive of "alternative" medicine

7 believes in evolution

8 thinks most diseases get better without intervention, & knows the dangers of false positives

9 understands the base rate fallacy

EconPapers: Was Civil War Surgery Effective?: http://econpapers.repec.org/paper/htrhcecon/444.htm

contra Greg Cochran:

To shed light on the subject, I analyze a data set created by Dr. Edmund Andrews, a Civil war surgeon with the 1st Illinois Light Artillery. Dr. Andrews’s data can be rendered into an observational data set on surgical intervention and recovery, with controls for wound location and severity. The data also admits instruments for the surgical decision. My analysis suggests that Civil War surgery was effective, and increased the probability of survival of the typical wounded soldier, with average treatment effect of 0.25-0.28.

Medical Prehistory: https://westhunt.wordpress.com/2016/03/14/medical-prehistory/

What ancient medical treatments worked?

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76878

In some very, very limited conditions, bleeding?

--

Bad for you 99% of the time.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76947

Colchicine – used to treat gout – discovered by the Ancient Greeks.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76973

Dracunculiasis (Guinea worm)

Wrap the emerging end of the worm around a stick and slowly pull it out.

(3,500 years later, this remains the standard treatment.)

https://en.wikipedia.org/wiki/Ebers_Papyrus

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76971

Some of the progress is from formal medicine, most is from civil engineering, better nutrition (ag science and physical chemistry), less crowded housing.

Nurses vs doctors: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/

Medicine, the things that doctors do, was an ineffective pseudoscience until fairly recently. Until 1800 or so, they were wrong about almost everything. Bleeding, cupping, purging, the four humors – useless. In the 1800s, some began to realize that they were wrong, and became medical nihilists that improved outcomes by doing less. Some patients themselves came to this realization, as when Civil War casualties hid from the surgeons and had better outcomes. Sometime in the early 20th century, MDs reached break-even, and became an increasingly positive influence on human health. As Lewis Thomas said, medicine is the youngest science.

Nursing, on the other hand, has always been useful. Just making sure that a patient is warm and nourished when too sick to take care of himself has helped many survive. In fact, some of the truly crushing epidemics have been greatly exacerbated when there were too few healthy people to take care of the sick.

Nursing must be old, but it can’t have existed forever. Whenever it came into existence, it must have changed the selective forces acting on the human immune system. Before nursing, being sufficiently incapacitated would have been uniformly fatal – afterwards, immune responses that involved a period of incapacitation (with eventual recovery) could have been selectively favored.

when MDs broke even: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/#comment-58981

I’d guess the 1930s. Lewis Thomas thought that he was living through big changes. They had a working serum therapy for lobar pneumonia (antibody-based). They had many new vaccines (diphtheria in 1923, whooping cough in 1926, BCG and tetanus in 1927, yellow fever in 1935, typhus in 1937). Vitamins had been mostly worked out. Insulin was discovered in 1929. Blood transfusions. The sulfa drugs, first broad-spectrum antibiotics, showed up in 1935.

DALYs per doctor: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/

The disability-adjusted life year (DALY) is a measure of overall disease burden – the number of years lost. I’m wondering just how much harm premodern medicine did, per doctor. How many healthy years of life did a typical doctor destroy (net) in past times?

...

It looks as if the average doctor (in Western medicine) killed a bunch of people over his career (when contrasted with doing nothing). In the Charles Manson class.

Eventually the market saw through this illusion. Only took a couple of thousand years.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100741

That a very large part of healthcare spending is done for non-health reasons. He has a chapter on this in his new book, also check out his paper “Showing That You Care: The Evolution of Health Altruism” http://mason.gmu.edu/~rhanson/showcare.pdf

--

I ran into too much stupidity to finish the article. Hanson’s a loon. For example when he talks about the paradox of blacks being more sentenced on drug offenses than whites although they use drugs at similar rate. No paradox: guys go to the big house for dealing, not for using. Where does he live – Mars?

I had the same reaction when Hanson parroted some dipshit anthropologist arguing that the stupid things people do while drunk are due to social expectations, not really the alcohol.

Horseshit.

I don’t think that being totally unable to understand everybody around you necessarily leads to deep insights.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100744

What I’ve wondered is if there was anything that doctors did that actually was helpful and if perhaps that little bit of success helped them fool people into thinking the rest of it helped.

--

Setting bones. Extracting arrows: spoon of Diocles. Colchicine for gout. Extracting the Guinea worm. Sometimes they got away with removing the stone. There must be others.

--

Quinine is relatively recent: post-1500. Obstetrical forceps also. Caesarean deliveries were almost always fatal to the mother until fairly recently.

Opium has been around for a long while : it works.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100839

If pre-modern medicine was indeed worse than useless – how do you explain no one noticing that patients who get expensive treatments are worse off than those who didn’t?

--

were worse off. People are kinda dumb – you’ve noticed?

--

My impression is that while people may be “kinda dumb”, ancient customs typically aren’t.

Even if we assume that all people who lived prior to the 19th century were too dumb to make the rational observation, wouldn’t you expect this ancient practice to be subject to selective pressure?

--

Your impression is wrong. Do you think that there is some slick reason for Carthaginians incinerating their first-born?

Theodoric of York, bloodletting: https://www.youtube.com/watch?v=yvff3TViXmY

details on blood-letting and hemochromatosis: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100746

Starting Over: https://westhunt.wordpress.com/2018/01/23/starting-over/

Looking back on it, human health would have … [more]

august 2017 by nhaliday

Garett Jones on Twitter: "For each university outside the top 4, School Rank X School Endowment ~ $100B https://t.co/52vCzCJnF8"

august 2017 by nhaliday

Zipf law
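The tweet's rule reads as a Zipf law: rank × endowment ≈ constant, so endowment ≈ $100B / rank outside the top 4. A minimal sketch of the heuristic (illustrative figures, not from the tweet):

```python
# Zipf-style heuristic: rank * endowment ~ $100B, so a school's
# endowment is roughly $100B / rank (outside the top 4).
def zipf_endowment_billions(rank: int, constant: float = 100.0) -> float:
    """Rough endowment estimate in $B from university rank."""
    return constant / rank

for rank in (5, 10, 20, 50):
    print(rank, zipf_endowment_billions(rank))  # 20.0, 10.0, 5.0, 2.0 ($B)
```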

econotariat
garett-jones
twitter
social
discussion
heuristic
street-fighting
distribution
money
wealth
higher-ed
data
objektbuch
identity
stylized-facts
scale
magnitude
estimate
power-law
august 2017 by nhaliday

Constitutive equation - Wikipedia

august 2017 by nhaliday

In physics and engineering, a constitutive equation or constitutive relation is a relation between two physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance, and approximates the response of that material to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection of applied stresses or forces to strains or deformations.

Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior.[1] See the article Linear response function.
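The "simple proportionality" case is easy to make concrete; a minimal sketch (Hooke's and Ohm's laws, chosen by me to match the article's spring-constant and conductivity examples):

```python
# Constitutive relations of the "simple proportionality" kind: one
# material-specific scalar parameter links stimulus to response.
def hooke_force(k: float, displacement: float) -> float:
    """Hooke's law F = -k x; the spring constant k is the material parameter."""
    return -k * displacement

def ohm_current_density(sigma: float, e_field: float) -> float:
    """Ohm's law J = sigma E; the conductivity sigma is the material parameter."""
    return sigma * e_field

print(hooke_force(2.0, 0.5))        # -1.0 (N, for k = 2 N/m stretched 0.5 m)
print(ohm_current_density(2.0, 3.0))  # 6.0
```

Generalizing sigma or k to a tensor, as the article notes, handles directional dependence.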

nibble
wiki
reference
article
physics
mechanics
electromag
identity
estimate
approximation
empirical
stylized-facts
list
dirty-hands
fluid
logos

august 2017 by nhaliday

How do archaeologists estimate the size of ancient populations?

august 2017 by nhaliday

Estimating The Population Sizes of Cities: http://irows.ucr.edu/research/citemp/estcit/estcit.htm

Estimating the population size of ancient settlements: http://www.parkdatabase.org/documents/download/2000_estimating_the_population_size_of_ancient_settlements.pdf

news
org:lite
archaeology
methodology
measurement
volo-avolo
history
antiquity
iron-age
medieval
demographics
population
density
explanation
sapiens
lens
multi
org:junk
org:edu
article
roots
accuracy
trivia
cocktail
estimate
approximation
magnitude
scale
evidence

august 2017 by nhaliday

Demography of the Roman Empire - Wikipedia

august 2017 by nhaliday

There are few recorded population numbers for the whole of antiquity, and those that exist are often rhetorical or symbolic. Unlike the contemporaneous Han Dynasty, no general census survives for the Roman Empire. The late period of the Roman Republic provides a small exception to this general rule: serial statistics for Roman citizen numbers, taken from census returns, survive for the early Republic through the 1st century CE.[41] Only the figures for periods after the mid-3rd century BCE are reliable, however. Fourteen figures are available for the 2nd century BCE (from 258,318 to 394,736). Only four figures are available for the 1st century BCE, and feature a large break between 70/69 BCE (910,000) and 28 BCE (4,063,000). The interpretation of the later figures—the Augustan censuses of 28 BCE, 8 BCE, and 14 CE—is therefore controversial.[42] Alternate interpretations of the Augustan censuses (such as those of E. Lo Cascio[43]) produce divergent population histories across the whole imperial period.[44]

Roman population size: the logic of the debate: https://www.princeton.edu/~pswpc/pdfs/scheidel/070706.pdf

- Walter Scheidel (cited in book by Vaclav Smil, "Why America is Not a New Rome")

Our ignorance of ancient population numbers is one of the biggest obstacles to our understanding of Roman history. After generations of prolific scholarship, we still do not know how many people inhabited Roman Italy and the Mediterranean at any given point in time. When I say ‘we do not know’ I do not simply mean that we lack numbers that are both precise and safely known to be accurate: that would surely be an unreasonably high standard to apply to any pre-modern society. What I mean is that even the appropriate order of magnitude remains a matter of intense dispute.

Historical urban community sizes: https://en.wikipedia.org/wiki/Historical_urban_community_sizes

World population estimates: https://en.wikipedia.org/wiki/World_population_estimates

As a general rule, the confidence of estimates on historical world population decreases for the more distant past. Robust population data only exists for the last two or three centuries. Until the late 18th century, few governments had ever performed an accurate census. In many early attempts, such as in Ancient Egypt and the Persian Empire, the focus was on counting merely a subset of the population for purposes of taxation or military service.[3] Published estimates for the 1st century ("AD 1") suggest an uncertainty of the order of 50% (estimates range between 150 and 330 million). Some estimates extend their timeline into deep prehistory, to "10,000 BC", i.e. the early Holocene, when world population estimates range roughly between one and ten million (with an uncertainty of up to an order of magnitude).[4][5]

Estimates for yet deeper prehistory, into the Paleolithic, are of a different nature. At this time human populations consisted entirely of non-sedentary hunter-gatherer populations, with anatomically modern humans existing alongside archaic human varieties, some of which are still ancestral to the modern human population due to interbreeding with modern humans during the Upper Paleolithic. Estimates of the size of these populations are a topic of paleoanthropology. A late human population bottleneck is postulated by some scholars at approximately 70,000 years ago, during the Toba catastrophe, when Homo sapiens population may have dropped to as low as between 1,000 and 10,000 individuals.[6][7] For the time of speciation of Homo sapiens, some 200,000 years ago, an effective population size of the order of 10,000 to 30,000 individuals has been estimated, with an actual "census population" of early Homo sapiens of roughly 100,000 to 300,000 individuals.[8]

history
iron-age
mediterranean
the-classics
demographics
fertility
data
europe
population
measurement
volo-avolo
estimate
wiki
reference
article
conquest-empire
migration
canon
scale
archaeology
multi
broad-econ
pdf
study
survey
debate
uncertainty
walter-scheidel
vaclav-smil
urban
military
economics
labor
time-series
embodied
health
density
malthus
letters
urban-rural
database
list
antiquity
medieval
early-modern
mostly-modern
time
sequential
MENA
the-great-west-whale
china
asia
sinosphere
occident
orient
japan
britain
germanic
gallic
summary
big-picture
objektbuch
confidence
sapiens
anthropology
methodology
farmers-and-foragers
genetics
genomics
chart

august 2017 by nhaliday

Introduction to Scaling Laws

august 2017 by nhaliday

https://betadecay.wordpress.com/2009/10/02/the-physics-of-scaling-laws-and-dimensional-analysis/

http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf

Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube.5 Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
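Galileo's square-cube point in a few lines (my sketch of the scaling, not from the notes):

```python
# Square-cube law: scaling linear size by k multiplies surface area by k^2
# and volume (hence weight) by k^3, so surface/volume falls as 1/k.
# Small objects are dominated by surface forces (drag), large ones by bulk
# forces (weight).
def surface_to_volume(side: float) -> float:
    """Surface-to-volume ratio of a cube: 6 side^2 / side^3 = 6 / side."""
    return 6 * side ** 2 / side ** 3

for side in (1.0, 2.0, 10.0):
    print(side, surface_to_volume(side))  # 6.0, 3.0, 0.6
```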

nibble
org:junk
exposition
lecture-notes
physics
mechanics
street-fighting
problem-solving
scale
magnitude
estimate
fermi
mental-math
calculation
nitty-gritty
multi
scitariat
org:bleg
lens
tutorial
guide
ground-up
tricki
skeleton
list
cheatsheet
identity
levers
hi-order-bits
yoga
metabuch
pdf
article
essay
history
early-modern
europe
the-great-west-whale
science
the-trenches
discovery
fluid
architecture
oceans
giants
tidbits
elegance

august 2017 by nhaliday

Diophantine approximation - Wikipedia

august 2017 by nhaliday

- rationals perfectly approximated by themselves, badly approximated (eps ≥ 1/(bq)) by any other rational p/q

- irrationals well-approximated (eps~1/q^2) by rationals:

https://en.wikipedia.org/wiki/Dirichlet%27s_approximation_theorem
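The 1/q^2 bound is easy to check numerically for the continued-fraction convergents of pi (a sketch; `limit_denominator` finds the best approximation with a bounded denominator):

```python
from fractions import Fraction
from math import pi

# Dirichlet: any irrational x admits infinitely many rationals p/q with
# |x - p/q| < 1/q^2. The convergents of pi all meet the bound.
for p, q in [(22, 7), (333, 106), (355, 113)]:
    err = abs(pi - p / q)
    assert err < 1 / q**2
    print(f"{p}/{q}: error {err:.2e} < 1/{q}^2 = {1 / q**2:.2e}")

# Fraction(pi).limit_denominator(1000) recovers the convergent 355/113.
print(Fraction(pi).limit_denominator(1000))
```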

nibble
wiki
reference
math
math.NT
approximation
accuracy
levers
pigeonhole-markov
multi
tidbits
discrete
rounding
estimate
tightness
algebra

august 2017 by nhaliday

Subgradients - S. Boyd and L. Vandenberghe

august 2017 by nhaliday

If f is convex and x ∈ int dom f, then ∂f(x) is nonempty and bounded. To establish that ∂f(x) ≠ ∅, we apply the supporting hyperplane theorem to the convex set epi f at the boundary point (x, f(x)), ...
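A numeric check of the defining subgradient inequality, f(y) ≥ f(x) + g(y − x) for all y, using f(x) = |x|, where ∂f(0) = [-1, 1] (my sketch, not from the notes):

```python
# g is a subgradient of f at x iff f(y) >= f(x) + g*(y - x) for all y.
# For f = |.| at x = 0, exactly the slopes in [-1, 1] qualify.
def is_subgradient(f, x, g, test_points):
    return all(f(y) >= f(x) + g * (y - x) - 1e-12 for y in test_points)

ys = [i / 10 for i in range(-50, 51)]
assert is_subgradient(abs, 0.0, 0.5, ys)       # 0.5 in [-1, 1]: valid
assert is_subgradient(abs, 0.0, -1.0, ys)      # endpoint of the interval
assert not is_subgradient(abs, 0.0, 1.5, ys)   # outside [-1, 1]: fails
```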

pdf
nibble
lecture-notes
acm
optimization
curvature
math.CA
estimate
linearity
differential
existence
proofs
exposition
atoms
math
marginal
convexity-curvature
august 2017 by nhaliday

How to estimate distance using your finger | Outdoor Herbivore Blog

august 2017 by nhaliday

1. Hold your right arm out directly in front of you, elbow straight, thumb upright.

2. Align your thumb with one eye closed so that it covers (or aligns) the distant object. Point marked X in the drawing.

3. Do not move your head, arm or thumb, but switch eyes, so that your open eye is now closed and the other eye is open. Observe closely where the object now appears with the other open eye. Your thumb should appear to have moved to some other point: no longer in front of the object. This new point is marked as Y in the drawing.

4. Estimate this displacement XY, by equating it to the estimated size of something you are familiar with (height of tree, building width, length of a car, power line poles, distance between nearby objects). In this case, the distant barn is estimated to be 100′ wide. It appears 5 barn widths could fit this displacement, or 500 feet. Now multiply that figure by 10 (the ratio of the length of your arm to the distance between your eyes), and you get the distance between you and the thicket of blueberry bushes — 5000 feet away (about 1 mile).

- Basically uses parallax (similar triangles) with each eye.

- When they say to compare apparent shift to known distance, won't that scale with the unknown distance? The example uses width of an object at the point whose distance is being estimated.

per here: https://www.trails.com/how_26316_estimate-distances-outdoors.html

Select a distant object whose width can be accurately determined. For example, use a large rock outcropping. Estimate the width of the rock. Use 200 feet wide as an example here.
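The arithmetic of the trick as a sketch (figures from the barn example; the 10× arm-to-eye ratio is the stated rule of thumb and varies by person):

```python
# Thumb-parallax range finding: distance ~ apparent thumb shift (in
# real-world units) times arm-length / eye-spacing, roughly 10.
ARM_TO_EYE_RATIO = 10

def estimate_distance_ft(apparent_shift_ft: float) -> float:
    return apparent_shift_ft * ARM_TO_EYE_RATIO

shift = 5 * 100  # thumb appeared to jump ~5 barn-widths of ~100 ft each
print(estimate_distance_ft(shift))  # 5000 ft, about a mile
```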

outdoors
human-bean
embodied
embodied-pack
visuo
spatial
measurement
lifehack
howto
navigation
prepping
survival
objektbuch
multi
measure
estimate

august 2017 by nhaliday

How accurate are population forecasts?

july 2017 by nhaliday

2 The Accuracy of Past Projections: https://www.nap.edu/read/9828/chapter/4

good ebook:

Beyond Six Billion: Forecasting the World's Population (2000)

https://www.nap.edu/read/9828/chapter/2

Appendix A: Computer Software Packages for Projecting Population

https://www.nap.edu/read/9828/chapter/12

PDE Population Projections looks most relevant for my interests but it's also *ancient*

https://applieddemogtoolbox.github.io/Toolbox/

This Applied Demography Toolbox is a collection of applied demography computer programs, scripts, spreadsheets, databases and texts.

How Accurate Are the United Nations World Population Projections?: http://pages.stern.nyu.edu/~dbackus/BCH/demography/Keilman_JDR_98.pdf

cf. Razib on this: https://pinboard.in/u:nhaliday/b:d63e6df859e8

news
org:lite
prediction
meta:prediction
tetlock
demographics
population
demographic-transition
fertility
islam
world
developing-world
africa
europe
multi
track-record
accuracy
org:ngo
pdf
study
sociology
measurement
volo-avolo
methodology
estimate
data-science
error
wire-guided
priors-posteriors
books
guide
howto
software
tools
recommendations
libraries
gnxp
scitariat

july 2017 by nhaliday

Harmonic mean - Wikipedia

july 2017 by nhaliday

The harmonic mean is a Schur-concave function, and dominated by the minimum of its arguments: for any positive arguments, min(x_1, …, x_n) ≤ H(x_1, …, x_n) ≤ n·min(x_1, …, x_n). Thus, the harmonic mean cannot be made arbitrarily large by changing some values to bigger ones (while having at least one value unchanged).

more generally, for the weighted mean w/ Pr(x_i)=t_i, H(x1,...,xn) <= x_i/t_i
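Both bounds, checked numerically (a sketch):

```python
# Unweighted: min(xs) <= H(xs) <= n * min(xs), for positive xs.
def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

xs = [1.0, 4.0, 4.0]
h = harmonic_mean(xs)  # 3 / (1 + 0.25 + 0.25) = 2.0
assert min(xs) <= h <= len(xs) * min(xs)

# Weighted (weights t_i summing to 1): H <= x_i / t_i for each i,
# since sum(t_j / x_j) >= t_i / x_i term by term.
def weighted_harmonic_mean(xs, ts):
    return 1 / sum(t / x for x, t in zip(xs, ts))

ts = [0.5, 0.25, 0.25]
wh = weighted_harmonic_mean(xs, ts)  # 1 / 0.625 = 1.6
assert all(wh <= x / t for x, t in zip(xs, ts))
```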

nibble
math
properties
estimate
concept
definition
wiki
reference
extrema
magnitude
expectancy
metrics
ground-up

july 2017 by nhaliday

Are the global benefits of open borders a fallacy of composition? - Three examples

june 2017 by nhaliday

- Garett Jones (preprint to go with at some pt)

- The migrant may benefit while the planet gains nothing.

- Jensen’s inequality is a nudge toward smaller nations.

- If lower-skilled migration weakens OECD R&D, any benefits may be temporary.
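The Jensen's-inequality bullet can be made concrete in a few lines: for convex f, f of the mean is at most the mean of f (a hypothetical quadratic stands in for whatever returns-to-skill curve the slides use; my sketch, not from the paper):

```python
# Jensen's inequality for convex f: f(E[X]) <= E[f(X)].
# Pooling populations averages the inputs and so forfeits the convexity
# bonus that separate, smaller units would keep.
def f(x):
    return x * x  # a convex stand-in for returns to national skill

xs = [90, 100, 110]
mean_x = sum(xs) / len(xs)
mean_f = sum(f(x) for x in xs) / len(xs)
assert f(mean_x) <= mean_f
print(f(mean_x), mean_f)  # 10000.0 vs ~10066.7
```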

https://www.dropbox.com/s/xb18weuk6v2wam5/FallacyOfCompositionGarettJonesDraft.pdf?dl=0

https://www.dropbox.com/s/41io6539y09c4ns/MeasuringTheSacrificeOfOpenBordersJones.pdf?dl=0

https://marginalrevolution.com/marginalrevolution/2019/11/garett-jones-on-open-borders.html

pdf
slides
spearhead
econotariat
garett-jones
economics
growth-econ
migration
policy
wonkish
article
critique
debate
intricacy
econ-productivity
labor
hive-mind
institutions
human-capital
nationalism-globalism
models
wealth
wealth-of-nations
contrarianism
rhetoric
🎩
s:*
technology
frontier
usa
china
japan
asia
europe
EU
korea
unintended-consequences
innovation
long-short-run
econ-metrics
curvature
polis
world
developing-world
zero-positive-sum
cracker-econ
multi
dropbox
study
estimate
street-fighting
methodology
path-dependence
individualism-collectivism
magnitude
flux-stasis
public-goodish
convexity-curvature
preprint
stagnation
cost-benefit
branches
money
compensation
data
externalities
marginal-rev
commentary

june 2017 by nhaliday

Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer?

pdf ratty bostrom article ideas enhancement iq futurism technology frontier 🔬 genetics genomics biotech scaling-up extrema tails distribution estimate magnitude selection behavioral-gen iteration-recursion society prediction impact human-capital elite scale policy morality ethics curvature speedometer white-paper convexity-curvature nonlinearity biodet hard-tech skunkworks data singularity abortion-contraception-embryo

may 2017 by nhaliday


Chapter 2: Asymptotic Expansions

april 2017 by nhaliday

includes complementary error function

pdf
nibble
exposition
math
acm
math.CA
approximation
limits
integral
magnitude
AMT
yoga
estimate
lecture-notes
april 2017 by nhaliday

Educational Romanticism & Economic Development | pseudoerasmus

april 2017 by nhaliday

https://twitter.com/GarettJones/status/852339296358940672

deleeted

https://twitter.com/GarettJones/status/943238170312929280

https://archive.is/p5hRA

Did Nations that Boosted Education Grow Faster?: http://econlog.econlib.org/archives/2012/10/did_nations_tha.html

On average, no relationship. The trendline points down slightly, but for the time being let's just call it a draw. It's a well-known fact that countries that started the 1960's with high education levels grew faster (example), but this graph is about something different. This graph shows that countries that increased their education levels did not grow faster.

Where has all the education gone?: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1016.2704&rep=rep1&type=pdf

https://twitter.com/GarettJones/status/948052794681966593

https://archive.is/kjxqp

https://twitter.com/GarettJones/status/950952412503822337

https://archive.is/3YPic

https://twitter.com/pseudoerasmus/status/862961420065001472

http://hanushek.stanford.edu/publications/schooling-educational-achievement-and-latin-american-growth-puzzle

The Case Against Education: What's Taking So Long, Bryan Caplan: http://econlog.econlib.org/archives/2015/03/the_case_agains_9.html

The World Might Be Better Off Without College for Everyone: https://www.theatlantic.com/magazine/archive/2018/01/whats-college-good-for/546590/

Students don't seem to be getting much out of higher education.

- Bryan Caplan

College: Capital or Signal?: http://www.economicmanblog.com/2017/02/25/college-capital-or-signal/

After his review of the literature, Caplan concludes that roughly 80% of the earnings effect from college comes from signalling, with only 20% the result of skill building. Put this together with his earlier observations about the private returns to college education, along with its exploding cost, and Caplan thinks that the social returns are negative. The policy implications of this will come as very bitter medicine for friends of Bernie Sanders.

Doubting the Null Hypothesis: http://www.arnoldkling.com/blog/doubting-the-null-hypothesis/

Is higher education/college in the US more about skill-building or about signaling?: https://www.quora.com/Is-higher-education-college-in-the-US-more-about-skill-building-or-about-signaling

ballpark: 50% signaling, 30% selection, 20% addition to human capital

more signaling in art history, more human capital in engineering, more selection in philosophy

Econ Duel! Is Education Signaling or Skill Building?: http://marginalrevolution.com/marginalrevolution/2016/03/econ-duel-is-education-signaling-or-skill-building.html

Marginal Revolution University has a brand new feature, Econ Duel! Our first Econ Duel features Tyler and me debating the question, Is education more about signaling or skill building?

Against Tulip Subsidies: https://slatestarcodex.com/2015/06/06/against-tulip-subsidies/

https://www.overcomingbias.com/2018/01/read-the-case-against-education.html

https://nintil.com/2018/02/05/notes-on-the-case-against-education/

https://www.nationalreview.com/magazine/2018-02-19-0000/bryan-caplan-case-against-education-review

https://spottedtoad.wordpress.com/2018/02/12/the-case-against-education/

Most American public school kids are low-income; about half are non-white; most are fairly low skilled academically. For most American kids, the majority of the waking hours they spend not engaged with electronic media are at school; the majority of their in-person relationships are at school; the most important relationships they have with an adult who is not their parent is with their teacher. For their parents, the most important in-person source of community is also their kids’ school. Young people need adult mirrors, models, mentors, and in an earlier era these might have been provided by extended families, but in our own era this all falls upon schools.

Caplan gestures towards work and earlier labor force participation as alternatives to school for many if not all kids. And I empathize: the years that I would point to as making me who I am were ones where I was working, not studying. But they were years spent working in schools, as a teacher or assistant. If schools did not exist, is there an alternative that we genuinely believe would arise to draw young people into the life of their community?

...

It is not an accident that the state that spends the least on education is Utah, where the LDS church can take up some of the slack for schools, while next door Wyoming spends almost the most of any state at $16,000 per student. Education is now the one surviving binding principle of the society as a whole, the one black box everyone will agree to, and so while you can press for less subsidization of education by government, and for privatization of costs, as Caplan does, there’s really nothing people can substitute for it. This is partially about signaling, sure, but it’s also because outside of schools and a few religious enclaves our society is but a darkling plain beset by winds.

This doesn’t mean that we should leave Caplan’s critique on the shelf. Much of education is focused on an insane, zero-sum race for finite rewards. Much of schooling does push kids, parents, schools, and school systems towards a solution ad absurdum, where anything less than 100 percent of kids headed to a doctorate and the big coding job in the sky is a sign of failure of everyone concerned.

But let’s approach this with an eye towards the limits of the possible and the reality of diminishing returns.

https://westhunt.wordpress.com/2018/01/27/poison-ivy-halls/

https://westhunt.wordpress.com/2018/01/27/poison-ivy-halls/#comment-101293

The real reason the left would support Moander: the usual reason. because he’s an enemy.

https://westhunt.wordpress.com/2018/02/01/bright-college-days-part-i/

I have a problem in thinking about education, since my preferences and personal educational experience are atypical, so I can’t just gut it out. On the other hand, knowing that puts me ahead of a lot of people that seem convinced that all real people, including all Arab cabdrivers, think and feel just as they do.

One important fact, relevant to this review. I don’t like Caplan. I think he doesn’t understand – can’t understand – human nature, and although that sometimes confers a different and interesting perspective, it’s not a royal road to truth. Nor would I want to share a foxhole with him: I don’t trust him. So if I say that I agree with some parts of this book, you should believe me.

...

Caplan doesn’t talk about possible ways of improving knowledge acquisition and retention. Maybe he thinks that’s impossible, and he may be right, at least within a conventional universe of possibilities. That’s a bit outside of his thesis, anyhow. Me it interests.

He dismisses objections from educational psychologists who claim that studying a subject improves you in subtle ways even after you forget all of it. I too find that hard to believe. On the other hand, it looks to me as if poorly-digested fragments of information picked up in college have some effect on public policy later in life: it is no coincidence that most prominent people in public life (at a given moment) share a lot of the same ideas. People are vaguely remembering the same crap from the same sources, or related sources. It’s correlated crap, which has a much stronger effect than random crap.

These widespread new ideas are usually wrong. They come from somewhere – in part, from higher education. Along this line, Caplan thinks that college has only a weak ideological effect on students. I don’t believe he is correct. In part, this is because most people use a shifting standard: what’s liberal or conservative gets redefined over time. At any given time a population is roughly half left and half right – but the content of those labels changes a lot. There’s a shift.

https://westhunt.wordpress.com/2018/02/01/bright-college-days-part-i/#comment-101492

I put it this way, a while ago: “When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”

--

You just explained the Credo quia absurdum doctrine. I always wondered if it was nonsense. It is not.

--

Someone on twitter caught it first – got all the way to “sliding down the razor blade of life”. Which I explained is now called “transitioning”

What Catholics believe: https://theweek.com/articles/781925/what-catholics-believe

We believe all of these things, fantastical as they may sound, and we believe them for what we consider good reasons, well attested by history, consistent with the most exacting standards of logic. We will profess them in this place of wrath and tears until the extraordinary event referenced above, for which men and women have hoped and prayed for nearly 2,000 years, comes to pass.

https://westhunt.wordpress.com/2018/02/05/bright-college-days-part-ii/

According to Caplan, employers are looking for conformity, conscientiousness, and intelligence. They use completion of high school, or completion of college as a sign of conformity and conscientiousness. College certainly looks as if it’s mostly signaling, and it’s hugely expensive signaling, in terms of college costs and foregone earnings.

But inserting conformity into the merit function is tricky: things become important signals… because they’re important signals. Otherwise useful actions are contraindicated because they’re “not done”. For example, test scores convey useful information. They could help show that an applicant is smart even though he attended a mediocre school – the same role they play in college admissions. But employers seldom request test scores, and although applicants may provide them, few do. Caplan says ” The word on the street… [more]

econotariat
pseudoE
broad-econ
economics
econometrics
growth-econ
education
human-capital
labor
correlation
null-result
world
developing-world
commentary
spearhead
garett-jones
twitter
social
pic
discussion
econ-metrics
rindermann-thompson
causation
endo-exo
biodet
data
chart
knowledge
article
wealth-of-nations
latin-america
study
path-dependence
divergence
🎩
curvature
microfoundations
multi
convexity-curvature
nonlinearity
hanushek
volo-avolo
endogenous-exogenous
backup
pdf
people
policy
monetary-fiscal
wonkish
cracker-econ
news
org:mag
local-global
higher-ed
impetus
signaling
rhetoric
contrarianism
domestication
propaganda
ratty
hanson
books
review
recommendations
distribution
externalities
cost-benefit
summary
natural-experiment
critique
rent-seeking
mobility
supply-demand
intervention
shift
social-choice
government
incentives
interests
q-n-a
street-fighting
objektbuch
X-not-about-Y
marginal-rev
c:***
qra
info-econ
info-dynamics
org:econlib
yvain
ssc
politics
medicine
stories

april 2017 by nhaliday

Malthus in the Bedroom: Birth Spacing as Birth Control in Pre-Transition England | SpringerLink

april 2017 by nhaliday

Randomness in the Bedroom: There Is No Evidence for Fertility Control in Pre-Industrial England: https://link.springer.com/article/10.1007/s13524-019-00786-2

- Gregory Clark et al.

https://twitter.com/Schmidt_Erwin/status/1142740263569448961

https://archive.is/HUYPf

both cause and effect of England not being France, which lowered fertility significantly already in the 18th century, I believe largely through anal sex and coitus interruptus

- Spotted Toad

--

Is there a source I can check on that? That's almost too French to be true. Lol.

study
anthropology
sociology
britain
history
early-modern
demographics
fertility
demographic-transition
sex
class
s-factor
spearhead
gregory-clark
malthus
broad-econ
multi
critique
methodology
gotchas
intricacy
stats
estimate
ratty
unaffiliated
twitter
social
commentary
backup
europe
gallic
idk
sexuality

april 2017 by nhaliday

Statistician Proves Gaussian Correlation Inequality | Quanta Magazine

news org:mag org:sci popsci math probability stats geometry math.MG research multi pdf papers preprint nibble profile stories AMT intersection estimate measure curvature convexity-curvature org:inst org:mat intersection-connectedness

march 2017 by nhaliday


Sets with Small Intersection | Academically Interesting

march 2017 by nhaliday

- nice application of LLL lemma

- reference for the "not hard to show" claim: https://homes.cs.washington.edu/~anuprao/pubs/CSE599sExtremal/lecture3.pdf

I think there are some typos, eg, inequality in last line should be reversed (we already have an upper bound on sum d(x), want a lower bound)

cf also Exercise 2.14 and Section 13.6 in Jukna's Extremal Combinatorics, and the Erdös–Ko–Rado Theorem in Alon-Spencer

more:

https://arxiv.org/abs/1404.4622

https://mathoverflow.net/questions/64596/minimal-intersecting-subsets

https://mathoverflow.net/questions/175969/an-upper-bound-on-families-of-subsets-with-a-small-pairwise-intersection

https://mathoverflow.net/questions/21245/pairwise-intersecting-sets-of-fixed-size

ratty
clever-rats
acmtariat
org:bleg
nibble
math
math.CO
tidbits
probability
probabilistic-method
intersection
rigidity
magnitude
combo-optimization
intersection-connectedness
multi
pdf
proofs
extrema
estimate
bare-hands
q-n-a
overflow
preprint
papers
pseudorandomness
tcs
complexity

march 2017 by nhaliday

Mars is Hard - Casey Handmer

people speculation prediction engineering physics space papers analysis electromag gravity technology frontier links reading caltech spock nitty-gritty 2016 the-world-is-just-atoms new-religion fermi applications wild-ideas 🔬 definite-planning ideas article mechanics white-paper dirty-hands expansionism intricacy advanced allodium heavy-industry gedanken geoengineering books magnitude estimate population scale nordic energy-resources food agriculture scitariat

february 2017 by nhaliday


Hoeffding’s Inequality

february 2017 by nhaliday

basic idea of the standard proof: bound e^{tX} by a line segment (convexity), then use the Taylor expansion of the logarithm (in p = b/(b-a), the fraction of the range to the right of 0)
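quick Monte Carlo sanity check of the bound itself (my own sketch, not from the notes), for Bernoulli(p) summands so that b - a = 1:

```python
import math
import random

random.seed(0)

n, p, t, trials = 100, 0.3, 10.0, 20000

# empirical tail P(S - E[S] >= t) for S = sum of n Bernoulli(p) draws
hits = sum(
    1
    for _ in range(trials)
    if sum(random.random() < p for _ in range(n)) - n * p >= t
)
empirical = hits / trials

# Hoeffding: P(S - E[S] >= t) <= exp(-2 t^2 / (n (b-a)^2)), here b - a = 1
bound = math.exp(-2 * t * t / n)

assert empirical <= bound
```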

pdf
lecture-notes
exposition
nibble
concentration-of-measure
estimate
proofs
ground-up
acm
probability
series
s:null
february 2017 by nhaliday

CS229 Supplemental Lecture notes: Hoeffding’s inequality

february 2017 by nhaliday

- weaker by a constant factor

- uses symmetrization instead of Taylor series

pdf
lecture-notes
exposition
nibble
proofs
concentration-of-measure
estimate
machine-learning
acm
probability
math
moments
ground-up
stanford
symmetry
curvature
convexity-curvature

february 2017 by nhaliday

An Introduction to Measure Theory - Terence Tao

books draft unit math gowers mathtariat measure math.CA probability yoga problem-solving pdf tricki local-global counterexample visual-understanding lifts-projections oscillation limits estimate quantifiers-sums synthesis coarse-fine p:someday s:** heavyweights

february 2017 by nhaliday


st.statistics - Lower bound for sum of binomial coefficients? - MathOverflow

february 2017 by nhaliday

- basically approximate w/ geometric sum (which scales as final term) and you can get it up to O(1) factor

- not good enough for many applications (want 1+o(1) approx.)

- Stirling can also give bound to constant factor precision w/ more calculation I believe

- tighter bound at Section 7.3 here: http://webbuild.knu.ac.kr/~trj/Combin/matousek-vondrak-prob-ln.pdf
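the geometric-sum bound from the first bullet, made concrete (my own sketch): the term ratio C(n,i-1)/C(n,i) = i/(n-i+1) is at most r := k/(n-k+1) for i <= k < n/2, so the sum is bounded by C(n,k)/(1-r), within an O(1) factor of the truth.

```python
from math import comb

def binom_tail(n, k):
    """Exact sum_{i=0}^{k} C(n, i)."""
    return sum(comb(n, i) for i in range(k + 1))

def geometric_bound(n, k):
    """Bound the sum by a geometric series: for i <= k < n/2,
    C(n, i-1)/C(n, i) = i/(n-i+1) <= r := k/(n-k+1),
    so the sum is at most C(n, k) / (1 - r)."""
    r = k / (n - k + 1)
    return comb(n, k) / (1 - r)

for n, k in [(100, 10), (100, 40), (1000, 300)]:
    exact, bound = binom_tail(n, k), geometric_bound(n, k)
    assert exact <= bound       # it really is an upper bound...
    assert bound <= 3 * exact   # ...and only off by an O(1) factor here
```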

q-n-a
overflow
nibble
math
math.CO
estimate
tidbits
magnitude
concentration-of-measure
stirling
binomial
metabuch
tricki
multi
tightness
pdf
lecture-notes
exposition
probability
probabilistic-method
yoga

february 2017 by nhaliday

The tensor power trick | Tricki

february 2017 by nhaliday

- Fubini's for integrals of tensored extension

- entropy digression is interesting

nibble
tricki
exposition
problem-solving
yoga
estimate
magnitude
tensors
levers
algebraic-complexity
wiki
reference
metabuch
hi-order-bits
synthesis
tidbits
tightness
quantifiers-sums
integral
information-theory
entropy-like
stirling
binomial
concentration-of-measure
limits
stat-mech
additive-combo
math.CV
math.CA
math.FA
fourier
s:*
better-explained
org:mat
elegance

february 2017 by nhaliday

pr.probability - Identities and inequalities in analysis and probability - MathOverflow

february 2017 by nhaliday

interesting approach to proving Cauchy-Schwarz (symmetry+sum of squares)
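the sum-of-squares identity behind that proof is Lagrange's identity; a numeric spot check (my sketch, not from the thread):

```python
import random

random.seed(1)

def lagrange_identity(a, b):
    """Lagrange's identity:
    (sum a_i^2)(sum b_i^2) - (sum a_i b_i)^2
      = (1/2) sum_{i,j} (a_i b_j - a_j b_i)^2.
    The right side is a sum of squares, hence Cauchy-Schwarz."""
    n = len(a)
    lhs = (
        sum(x * x for x in a) * sum(y * y for y in b)
        - sum(x * y for x, y in zip(a, b)) ** 2
    )
    rhs = 0.5 * sum(
        (a[i] * b[j] - a[j] * b[i]) ** 2 for i in range(n) for j in range(n)
    )
    return lhs, rhs

a = [random.gauss(0, 1) for _ in range(8)]
b = [random.gauss(0, 1) for _ in range(8)]
lhs, rhs = lagrange_identity(a, b)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
assert lhs >= -1e-12   # Cauchy-Schwarz: the gap is nonnegative
```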

q-n-a
overflow
math
math.CA
math.FA
probability
list
big-list
estimate
yoga
synthesis
structure
examples
identity
nibble
sum-of-squares
positivity
tricki
inner-product
wisdom
integral
quantifiers-sums
tidbits
p:whenever
s:null
signum
elegance
february 2017 by nhaliday

Prékopa–Leindler inequality | Academically Interesting

february 2017 by nhaliday

Consider the following statements:

1. The shape with the largest volume enclosed by a given surface area is the n-dimensional sphere.

2. A marginal or sum of log-concave distributions is log-concave.

3. Any Lipschitz function of a standard n-dimensional Gaussian distribution concentrates around its mean.

What do these all have in common? Despite being fairly non-trivial and deep results, they all can be proved in less than half of a page using the Prékopa–Leindler inequality.

ie, Brunn-Minkowski
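a discretized 1-D sanity check of the inequality itself (my own sketch, not from the post): take λ = 1/2, two Gaussian densities f, g, and h(z) the sup of f(x)^{1-λ} g(y)^λ over decompositions (1-λ)x + λy = z; up to grid error ∫h should dominate (∫f)^{1-λ}(∫g)^λ (for same-variance Gaussians it holds with near-equality).

```python
import math

def npdf(x, mu, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

lam = 0.5
f = lambda x: npdf(x, -1.0)  # log-concave
g = lambda x: npdf(x, 2.0)   # log-concave

lo, hi, m = -10.0, 10.0, 401
dx = (hi - lo) / (m - 1)
xs = [lo + i * dx for i in range(m)]

def h(z):
    # sup over the grid of f(x)^(1-lam) g(y)^lam with (1-lam) x + lam y = z
    return max(f(x) ** (1 - lam) * g((z - (1 - lam) * x) / lam) ** lam for x in xs)

int_f = sum(map(f, xs)) * dx
int_g = sum(map(g, xs)) * dx
int_h = sum(map(h, xs)) * dx

# Prekopa-Leindler: int h >= (int f)^(1-lam) (int g)^lam, up to discretization
assert int_h >= int_f ** (1 - lam) * int_g ** lam * (1 - 1e-3)
```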

acmtariat
clever-rats
ratty
math
acm
geometry
measure
math.MG
estimate
distribution
concentration-of-measure
smoothness
regularity
org:bleg
nibble
brunn-minkowski
curvature
convexity-curvature

february 2017 by nhaliday

bounds - What is the variance of the maximum of a sample? - Cross Validated

february 2017 by nhaliday

- sum of variances is always a bound

- can't do better even for iid Bernoulli

- looks like nice argument from well-known probabilist (using E[(X-Y)^2] = 2Var X), but not clear to me how he gets to sum_i instead of sum_{i,j} in the union bound?

edit: argument is that, for j = argmax_k Y_k, we have r < X_i - Y_j <= X_i - Y_i for all i, including i = argmax_k X_k

- different proof here (later pages): http://www.ism.ac.jp/editsec/aism/pdf/047_1_0185.pdf

Var(X_n:n) <= sum Var(X_k:n) + 2 sum_{i < j} Cov(X_i:n, X_j:n) = Var(sum X_k:n) = Var(sum X_k) = nσ^2

why are the covariances nonnegative? (are they?). intuitively seems true.

- for that, see https://pinboard.in/u:nhaliday/b:ed4466204bb1

- note that this proof shows more generally that sum Var(X_k:n) <= sum Var(X_k)

- apparently that holds for dependent X_k too? http://mathoverflow.net/a/96943/20644
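
a Monte Carlo sanity check of the nσ² bound (my sketch, not from the linked answer), for n = 10 iid standard normals, where the bound is 10 and the empirical variance of the max comes out far smaller:

```python
import random
import statistics

# Check Var(max of n iid X_k) <= sum_k Var(X_k) = n * sigma^2
# for n = 10 standard normals (sigma^2 = 1, so the bound is 10).
random.seed(0)
n, trials = 10, 20000
maxima = [max(random.gauss(0.0, 1.0) for _ in range(n)) for _ in range(trials)]
var_max = statistics.variance(maxima)
assert var_max <= n * 1.0  # the crude bound from the thread, very loose here
```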

q-n-a
overflow
stats
acm
distribution
tails
bias-variance
moments
estimate
magnitude
probability
iidness
tidbits
concentration-of-measure
multi
orders
levers
extrema
nibble
bonferroni
coarse-fine
expert
symmetry
s:*
expert-experience
proofs
february 2017 by nhaliday

Calculating The Expected Maximum of a Gaussian Sample using Order Statistics - Gwern.net

gwern magnitude calculation stats probability monte-carlo distribution tails estimate acm enhancement iidness tidbits orders concentration-of-measure extrema nibble tightness outliers expectancy faq ideas article scaling-up behavioral-gen selection biodet data analysis benchmarks

february 2017 by nhaliday

february 2017 by nhaliday

Energy of Seawater Desalination

february 2017 by nhaliday

0.66 kcal / liter is the minimum energy required to desalinate one liter of seawater, regardless of the technology applied to the process.
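
unit-conversion check (my sketch): restating the quoted 0.66 kcal/liter minimum in kWh per cubic meter, the unit desalination plants are usually benchmarked in.

```python
# 0.66 kcal/L thermodynamic minimum, converted to kWh/m^3
kcal_per_liter = 0.66
kj_per_liter = kcal_per_liter * 4.184          # 1 kcal = 4.184 kJ
kwh_per_m3 = kj_per_liter * 1000.0 / 3600.0    # 1 m^3 = 1000 L; 1 kWh = 3600 kJ
assert abs(kwh_per_m3 - 0.77) < 0.01           # roughly 0.77 kWh/m^3
```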

infrastructure
explanation
physics
thermo
objektbuch
data
lower-bounds
chemistry
the-world-is-just-atoms
geoengineering
phys-energy
nibble
oceans
h2o
applications
estimate
🔬
energy-resources
biophysical-econ
stylized-facts
ideas
fluid
volo-avolo
february 2017 by nhaliday

Embryo editing for intelligence - Gwern.net

february 2017 by nhaliday

https://twitter.com/pnin1957/status/917693229608337408

My hunch is CRISPR/Cas9 will not play a big role in intelligence enhancement. You'd have to edit so many loci b/c of small effect sizes, increasing errors. Embryo selection is much more promising. Peoples with high avg genetic values, of course, have an in-built advantage there.

ratty
gwern
enhancement
scaling-up
genetics
genomics
iq
🌞
CRISPR
futurism
biodet
new-religion
nibble
intervention
🔬
behavioral-gen
faq
chart
ideas
article
multi
twitter
social
commentary
gnon
unaffiliated
prediction
accuracy
technology
QTL
biotech
selection
comparison
scale
magnitude
hard-tech
skunkworks
speedometer
abortion-contraception-embryo
estimate
february 2017 by nhaliday

The Brunn-Minkowski Inequality | The n-Category Café

february 2017 by nhaliday

For instance, this happens in the plane when A is a horizontal line segment and B is a vertical line segment. There’s obviously no hope of getting an equation for Vol(A+B) in terms of Vol(A) and Vol(B). But this example suggests that we might be able to get an inequality, stating that Vol(A+B) is at least as big as some function of Vol(A) and Vol(B).

The Brunn-Minkowski inequality does this, but it’s really about linearized volume, Vol^{1/n}, rather than volume itself. If length is measured in metres then so is Vol^{1/n}.

...

Nice post, Tom. To readers whose background isn’t in certain areas of geometry and analysis, it’s not obvious that the Brunn–Minkowski inequality is more than a curiosity, the proof of the isoperimetric inequality notwithstanding. So let me add that Brunn–Minkowski is an absolutely vital tool in many parts of geometry, analysis, and probability theory, with extremely diverse applications. Gardner’s survey is a great place to start, but by no means exhaustive.

I’ll also add a couple remarks about regularity issues. You point out that Brunn–Minkowski holds “in the vast generality of measurable sets”, but it may not be initially obvious that this needs to be interpreted as “when A, B, and A+B are all Lebesgue measurable”, since A+B need not be measurable when A and B are (although you can modify the definition of A+B to work for arbitrary measurable A and B; this is discussed by Gardner).
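
quick check of the "linearized volume" form (my sketch, arbitrary side lengths): for axis-aligned boxes A = [0,a1]x[0,a2] and B = [0,b1]x[0,b2] in the plane, A+B is the box [0,a1+b1]x[0,a2+b2], so Brunn-Minkowski Vol(A+B)^(1/2) >= Vol(A)^(1/2) + Vol(B)^(1/2) reduces to a two-variable AM-GM.

```python
import random

# ((a1+b1)(a2+b2))^(1/2) >= (a1*a2)^(1/2) + (b1*b2)^(1/2),
# i.e. Brunn-Minkowski for axis-aligned boxes in R^2
random.seed(1)
for _ in range(1000):
    a1, a2, b1, b2 = (random.uniform(0.01, 10.0) for _ in range(4))
    lhs = ((a1 + b1) * (a2 + b2)) ** 0.5
    rhs = (a1 * a2) ** 0.5 + (b1 * b2) ** 0.5
    assert lhs >= rhs - 1e-9

# The segment example from the post: A horizontal and B vertical unit
# segments give Vol(A) = Vol(B) = 0 but Vol(A+B) = 1, consistent with
# 1^(1/2) >= 0 + 0.
```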

mathtariat
math
estimate
exposition
geometry
math.MG
measure
links
regularity
survey
papers
org:bleg
nibble
homogeneity
brunn-minkowski
curvature
convexity-curvature
february 2017 by nhaliday
