references - Mathematician wants the equivalent knowledge to a quality stats degree - Cross Validated

nibble q-n-a overflow lens acm stats hypothesis-testing limits confluence books recommendations list top-n accretion data-science roadmap p:whenever p:someday reading quixotic

november 2017 by nhaliday

Two-Sample Hypothesis Tests for Differences in ... - Data @ Quora - Quora

techtariat quora qra project data-science engineering methodology stats hypothesis-testing distribution expectancy limits concentration-of-measure probability orders acm comparison magnitude time-complexity performance parametric nonparametric

november 2017 by nhaliday

Expected Value of Random Walk - Mathematics Stack Exchange

october 2017 by nhaliday

cf Section 3.10 in Grimmett-Stirzaker or Section III.3 in Feller, Vol 1
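
If the quantity in question is the expected distance from the origin of a simple symmetric random walk (an assumption on my part; the thread itself isn't quoted here), the classical sqrt(2n/π) asymptotic is easy to check numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, walks = 10_000, 100_000

# Position after n fair +/-1 steps: S_n = 2*Binomial(n, 1/2) - n.
final = 2 * rng.binomial(n, 0.5, size=walks) - n

# Classical asymptotic for the expected distance from the origin:
print(np.abs(final).mean())    # empirical E|S_n|
print(np.sqrt(2 * n / np.pi))  # theory: sqrt(2n/pi), about 79.8 here
```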

nibble
q-n-a
overflow
math
probability
stochastic-processes
extrema
expectancy
limits
identity
tidbits
magnitude

multivariate analysis - Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian? - Cross Validated

october 2017 by nhaliday

The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals which are not bivariate normal are somehow "pathological" is a bit misguided.

Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

note: there is a multivariate central limit theorem, so such applications have no problem
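
A quick numerical sketch of a textbook counterexample (not necessarily the one in the answers): take X standard normal and Y = SX for an independent random sign S. Both marginals are exactly N(0,1), but the joint law cannot be bivariate normal, since any linear combination of jointly Gaussian variables must be Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.standard_normal(n)
s = rng.choice([-1.0, 1.0], size=n)  # independent random sign
y = s * x  # Y is also exactly N(0,1), by symmetry of the normal

# If (X, Y) were jointly Gaussian, X + Y would be Gaussian too.
# Instead X + Y = (1 + S) * X is exactly 0 whenever S = -1:
print(np.mean(x + y == 0))  # close to 0.5, a point mass
```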

nibble
q-n-a
overflow
stats
math
acm
probability
distribution
gotchas
intricacy
characterization
structure
composition-decomposition
counterexample
limits
concentration-of-measure

Karl Pearson and the Chi-squared Test

october 2017 by nhaliday

Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seem to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic

He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value*, and then he correctly gives the upper tail area of a χ² with 12 degrees of freedom beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
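
The quoted tail area is easy to reproduce today; for even degrees of freedom the chi-squared survival function has a closed form (a sketch, obviously not Pearson's own computation):

```python
import math

def chi2_sf_even(x, df):
    """Upper tail P(X > x) for a chi-squared r.v. with even df = 2m:
    exp(-x/2) * sum_{j=0}^{m-1} (x/2)^j / j!  (incomplete gamma, integer shape)."""
    m = df // 2
    t = x / 2.0
    return math.exp(-t) * sum(t**j / math.factorial(j) for j in range(m))

# Pearson's example: tail of chi-squared(12) beyond 43.87.
print(chi2_sf_even(43.87, 12))  # ~0.000016
```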

nibble
papers
acm
stats
hypothesis-testing
methodology
history
mostly-modern
pre-ww2
old-anglo
giants
science
the-trenches
stories
multi
q-n-a
overflow
explanation
summary
innovation
discovery
distribution
degrees-of-freedom
limits

Section 10 Chi-squared goodness-of-fit test.

october 2017 by nhaliday

- pf that chi-squared statistic for Pearson's test (multinomial goodness-of-fit) actually has chi-squared distribution asymptotically

- the gotcha: terms Z_j in sum aren't independent

- solution:

- compute the covariance matrix of the terms: E[Z_iZ_j] = -sqrt(p_i p_j) for i ≠ j

- note that an equivalent way of sampling the Z_j is to take a random standard Gaussian and project onto the plane orthogonal to (sqrt(p_1), sqrt(p_2), ..., sqrt(p_r))

- that is equivalent to just sampling a Gaussian w/ 1 less dimension (hence df=r-1)

QED
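
A simulation sketch of the statement being proved (arbitrary choice of p and sample sizes on my part): the Pearson statistic for an r-category multinomial should behave like χ² with r - 1 degrees of freedom, whose mean is r - 1 and variance 2(r - 1).

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])  # r = 3 categories (arbitrary)
n, trials = 1_000, 20_000

counts = rng.multinomial(n, p, size=trials)
# Pearson's statistic: sum over categories of (observed - expected)^2 / expected
stat = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)

# Should match chi-squared with r - 1 = 2 df: mean ~2, variance ~4.
print(stat.mean(), stat.var())
```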

pdf
nibble
lecture-notes
mit
stats
hypothesis-testing
acm
probability
methodology
proofs
iidness
distribution
limits
identity
direction
lifts-projections

Genetics: CHROMOSOMAL MAPS AND MAPPING FUNCTIONS

october 2017 by nhaliday

Any particular gene has a specific location (its "locus") on a particular chromosome. For any two genes (or loci) alpha and beta, we can ask "What is the recombination frequency between them?" If the genes are on different chromosomes, the answer is 50% (independent assortment). If the two genes are on the same chromosome, the recombination frequency will be somewhere in the range from 0 to 50%. The "map unit" (1 cM) is the genetic map distance that corresponds to a recombination frequency of 1%. In large chromosomes, the cumulative map distance may be much greater than 50cM, but the maximum recombination frequency is 50%. Why? In large chromosomes, there is enough length to allow for multiple cross-overs, so we have to ask what result we expect for random multiple cross-overs.

1. How is it that random multiple cross-overs give the same result as independent assortment?

Figure 5.12 shows how the various double cross-over possibilities add up, resulting in gamete genotype percentages that are indistinguishable from independent assortment (50% parental type, 50% non-parental type). This is a very important figure. It provides the explanation for why genes that are far apart on a very large chromosome sort out in crosses just as if they were on separate chromosomes.

2. Is there a way to measure how close together two crossovers can occur involving the same two chromatids? That is, how could we measure whether there is spatial "interference"?

Figure 5.13 shows how a measurement of the gamete frequencies resulting from a "three point cross" can answer this question. If we were to get a "lower than expected" occurrence of recombinant genotypes aCb and AcB, it would suggest that there is some hindrance to the two cross-overs occurring this close together. Crosses of this type in Drosophila have shown that, in this organism, double cross-overs do not occur at distances of less than about 10 cM between the two cross-over sites. (Textbook, page 196.)

3. How does all of this lead to the "mapping function", the mathematical (graphical) relation between the observed recombination frequency (percent non-parental gametes) and the cumulative genetic distance in map units?

Figure 5.14 shows the result for the two extremes of "complete interference" and "no interference". The situation for real chromosomes in real organisms is somewhere between these extremes, such as the curve labelled "interference decreasing with distance".
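
The "no interference" extreme in Figure 5.14 corresponds to what is usually called Haldane's mapping function (my identification; the page itself doesn't name it), under which recombination frequency approaches 50% as cumulative map distance grows:

```python
import math

def haldane_r(d_cM):
    """Recombination frequency (as a fraction) under the no-interference
    (Haldane) model, for a map distance given in centimorgans."""
    d_morgans = d_cM / 100.0
    return 0.5 * (1.0 - math.exp(-2.0 * d_morgans))

# r ~ d for small distances, saturating at 50% for large ones:
for d in (1, 10, 50, 100, 300):
    print(d, round(100 * haldane_r(d), 2))
```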

org:junk
org:edu
explanation
faq
nibble
genetics
genomics
bio
ground-up
magnitude
data
flux-stasis
homo-hetero
measure
orders
metric-space
limits
measurement

Lecture 14: When's that meteor arriving

september 2017 by nhaliday

- Meteors as a random process

- Limiting approximations

- Derivation of the Exponential distribution

- Derivation of the Poisson distribution

- A "Poisson process"
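
The limiting step in those derivations (Binomial(n, λ/n) → Poisson(λ) as the number of small time intervals n grows; my notation, the notes may differ) is easy to see numerically:

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Splitting time into n small intervals, each with hit probability lam/n:
lam, k = 3.0, 2
for n in (10, 100, 10_000):
    print(n, binom_pmf(n, lam / n, k), poisson_pmf(lam, k))
```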

nibble
org:junk
org:edu
exposition
lecture-notes
physics
mechanics
space
earth
probability
stats
distribution
stochastic-processes
closure
additive
limits
approximation
tidbits
acm
binomial
multiplicative

Is it possible to recover Classical Mechanics from Schrödinger's equation? - Physics Stack Exchange

august 2017 by nhaliday

Classical limit of quantum mechanics: https://physics.stackexchange.com/questions/32112/classical-limit-of-quantum-mechanics

https://physics.stackexchange.com/questions/108222/from-quantum-mechanics-to-classical-mechanics

Classical Limit of Quantum Mechanics: https://mathoverflow.net/questions/102313/classical-limit-of-quantum-mechanics

How/when does quantum mechanics become classical mechanics?: https://www.quora.com/How-when-does-quantum-mechanics-become-classical-mechanics

Remarks concerning the status & some ramifications of EHRENFEST’S THEOREM: http://www.reed.edu/physics/faculty/wheeler/documents/Quantum%20Mechanics/Miscellaneous%20Essays/Ehrenfest's%20Theorem.pdf

nibble
q-n-a
overflow
physics
mechanics
quantum
scale
approximation
lens
limits
multi
synthesis
hi-order-bits
big-picture
ground-up
qra
magnitude
pdf
essay
papers

Lucio Russo - Wikipedia

may 2017 by nhaliday

In The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn (Italian: La rivoluzione dimenticata), Russo promotes the belief that Hellenistic science in the period 320-144 BC reached heights not achieved by Classical age science, and proposes that it went further than ordinarily thought, in multiple fields not normally associated with ancient science.

La Rivoluzione Dimenticata (The Forgotten Revolution), Reviewed by Sandro Graffi: http://www.ams.org/notices/199805/review-graffi.pdf

Before turning to the question of the decline of Hellenistic science, I come back to the new light shed by the book on Euclid’s Elements and on pre-Ptolemaic astronomy. Euclid’s definitions of the elementary geometric entities—point, straight line, plane—at the beginning of the Elements have long presented a problem.7 Their nature is in sharp contrast with the approach taken in the rest of the book, and continued by mathematicians ever since, of refraining from defining the fundamental entities explicitly but limiting themselves to postulating the properties which they enjoy. Why should Euclid be so hopelessly obscure right at the beginning and so smooth just after? The answer is: the definitions are not Euclid’s. Toward the beginning of the second century A.D. Heron of Alexandria found it convenient to introduce definitions of the elementary objects (a sign of decadence!) in his commentary on Euclid’s Elements, which had been written at least 400 years before. All manuscripts of the Elements copied ever since included Heron’s definitions without mention, whence their attribution to Euclid himself. The philological evidence leading to this conclusion is quite convincing.8

...

What about the general and steady (on the average) impoverishment of Hellenistic science under the Roman empire? This is a major historical problem, strongly tied to the even bigger one of the decline and fall of the antique civilization itself. I would summarize the author’s argument by saying that it basically represents an application to science of a widely accepted general theory on decadence of antique civilization going back to Max Weber. Roman society, mainly based on slave labor, underwent an ultimately unrecoverable crisis as the traditional sources of that labor force, essentially wars, progressively dried up. To save basic farming, the remaining slaves were promoted to be serfs, and poor free peasants reduced to serfdom, but this made trade disappear. A society in which production is almost entirely based on serfdom and with no trade clearly has very little need of culture, including science and technology. As Max Weber pointed out, when trade vanished, so did the marble splendor of the ancient towns, as well as the spiritual assets that went with it: art, literature, science, and sophisticated commercial laws. The recovery of Hellenistic science then had to wait until the disappearance of serfdom at the end of the Middle Ages. To quote Max Weber: “Only then with renewed vigor did the old giant rise up again.”

...

The epilogue contains the (rather pessimistic) views of the author on the future of science, threatened by the apparent triumph of today’s vogue of irrationality even in leading institutions (e.g., an astrology professorship at the Sorbonne). He looks at today’s ever-increasing tendency to teach science more on a fideistic than on a deductive or experimental basis as the first sign of a decline which could be analogous to the post-Hellenistic one.

Praising Alexandrians to excess: https://sci-hub.tw/10.1088/2058-7058/17/4/35

The Economic Record review: https://sci-hub.tw/10.1111/j.1475-4932.2004.00203.x

listed here: https://pinboard.in/u:nhaliday/b:c5c09f2687c1

Was Roman Science in Decline? (Excerpt from My New Book): https://www.richardcarrier.info/archives/13477

people
trivia
cocktail
history
iron-age
mediterranean
the-classics
speculation
west-hunter
scitariat
knowledge
wiki
ideas
wild-ideas
technology
innovation
contrarianism
multi
pdf
org:mat
books
review
critique
regularizer
todo
piracy
physics
canon
science
the-trenches
the-great-west-whale
broad-econ
the-world-is-just-atoms
frontier
speedometer
🔬
conquest-empire
giants
economics
article
growth-econ
cjones-like
industrial-revolution
empirical
absolute-relative
truth
rot
zeitgeist
gibbon
big-peeps
civilization
malthus
roots
old-anglo
britain
early-modern
medieval
social-structure
limits
quantitative-qualitative
rigor
lens
systematic-ad-hoc
analytical-holistic
cycles
space
mechanics
math
geometry
gravity
revolution
novelty
meta:science
is-ought
flexibility
trends
reason
applicability-prereqs
theory-practice
traces
evidence

Chapter 2: Asymptotic Expansions

april 2017 by nhaliday

includes complementary error function
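
As a small illustration of the chapter's topic (the series below is the standard asymptotic expansion for erfc, not necessarily in the chapter's notation):

```python
import math

def erfc_asymptotic(x, terms=3):
    """First few terms of the asymptotic expansion
    erfc(x) ~ exp(-x^2)/(x sqrt(pi)) * [1 - 1/(2x^2) + 3/(2x^2)^2 - ...],
    valid for large x."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(2 * k + 1) / (2 * x * x)  # next (-1)^k (2k-1)!! / (2x^2)^k
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

# Already accurate to well under 1% at x = 3:
print(erfc_asymptotic(3.0), math.erfc(3.0))
```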

pdf
nibble
exposition
math
acm
math.CA
approximation
limits
integral
magnitude
AMT
yoga
estimate
lecture-notes

A cube, a starfish, a thin shell, and the central limit theorem – Libres pensées d'un mathématicien ordinaire

mathtariat org:bleg nibble math acm probability concentration-of-measure high-dimension cartoons limits dimensionality measure yoga hi-order-bits synthesis exposition spatial geometry math.MG curvature convexity-curvature

february 2017 by nhaliday

Mixing (mathematics) - Wikipedia

february 2017 by nhaliday

One way to describe this is that strong mixing implies that for any two possible states of the system (realizations of the random variable), when given a sufficient amount of time between the two states, the occurrence of the states is independent.

Mixing coefficient is

α(n) = sup{|P(A∩B) - P(A)P(B)| : A in σ(X_0, ..., X_{t-1}), B in σ(X_{t+n}, ...), t >= 0}

for σ(...) the sigma algebra generated by those r.v.s.

So it's a notion of total variation distance between the true joint distribution and the product distribution.
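
A concrete (if crude) illustration, using correlation decay in a stationary AR(1) process as a proxy for the decay of α(n) (the actual sup over σ-algebras isn't something you can simulate directly):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, T = 0.8, 200_000

# Stationary AR(1): X_{t+1} = rho * X_t + N(0,1) noise; geometrically mixing.
x = np.empty(T)
x[0] = rng.standard_normal() / np.sqrt(1 - rho**2)  # stationary start
for t in range(T - 1):
    x[t + 1] = rho * x[t] + rng.standard_normal()

# Dependence between X_t and X_{t+n} decays like rho^n:
for n in (1, 5, 10, 20):
    print(n, round(np.corrcoef(x[:-n], x[n:])[0, 1], 3), round(rho**n, 3))
```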

concept
math
acm
physics
probability
stochastic-processes
definition
mixing
iidness
wiki
reference
nibble
limits
ergodic
math.DS
measure
dependence-independence

An Introduction to Measure Theory - Terence Tao

books draft unit math gowers mathtariat measure math.CA probability yoga problem-solving pdf tricki local-global counterexample visual-understanding lifts-projections oscillation limits estimate quantifiers-sums synthesis coarse-fine p:someday s:**

february 2017 by nhaliday

The tensor power trick | Tricki

february 2017 by nhaliday

- Fubini's for integrals of tensored extension

- entropy digression is interesting
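
The core of the trick, in symbols (a generic statement; the Tricki article covers many variants): prove a lossy bound for all tensor powers, then let the power go to infinity to kill the constant.

```latex
% Goal: \|f\| \le \|f\|' with no constant.
% Suppose one can prove, uniformly in n, the lossy bound
%   \|f^{\otimes n}\| \le C \, (\|f\|')^{n}
% for a norm multiplicative under tensor powers, \|f^{\otimes n}\| = \|f\|^{n}.
% Then \|f\|^{n} \le C\,(\|f\|')^{n}, so \|f\| \le C^{1/n}\,\|f\|',
% and letting n \to \infty gives \|f\| \le \|f\|'.
```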

nibble
tricki
exposition
problem-solving
yoga
estimate
magnitude
tensors
levers
algebraic-complexity
wiki
reference
metabuch
hi-order-bits
synthesis
tidbits
tightness
quantifiers-sums
integral
information-theory
entropy-like
stirling
binomial
concentration-of-measure
limits
stat-mech
additive-combo
math.CV
math.CA
math.FA
fourier
s:*
better-explained
org:mat
elegance

Do grad school students remember everything they were taught in college all the time? - Quora

q-n-a qra grad-school learning synthesis hi-order-bits neurons physics lens analogy cartoons links 🎓 scholar gowers mathtariat feynman giants quotes games nibble thinking zooming retention meta:research big-picture skeleton s:** p:whenever wire-guided narrative intuition lesswrong commentary ground-up limits examples problem-solving info-dynamics knowledge studying ideas the-trenches chart

february 2017 by nhaliday

Superconcentration and Related Topics

february 2017 by nhaliday

when Var X_n = o(n) instead of Var X_n = O(n)

pdf
lecture-notes
math
probability
boolean-analysis
concentration-of-measure
limits
magnitude
concept
yoga
👳
unit
discrete
phase-transition
stat-mech
percolation
ising
p:*
quixotic

Kolmogorov's zero–one law - Wikipedia

february 2017 by nhaliday

In probability theory, Kolmogorov's zero–one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, called a tail event, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one.

tail events include limsup E_i
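
In symbols (standard definitions, not quoted from the article): the tail σ-algebra is the intersection of the "futures" of the sequence, and limsup events ignore any finite prefix, so they land in it.

```latex
% Tail sigma-algebra of a sequence X_1, X_2, \dots:
%   \mathcal{T} = \bigcap_{n \ge 1} \sigma(X_n, X_{n+1}, \dots).
% Kolmogorov's 0-1 law: if the X_i are independent and A \in \mathcal{T},
% then P(A) \in \{0, 1\}.
% Example: \limsup_i E_i = \bigcap_{n} \bigcup_{i \ge n} E_i
% ("E_i occurs infinitely often") is unchanged by altering finitely many
% E_i, hence a tail event when E_i \in \sigma(X_i).
```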

math
probability
levers
limits
discrete
wiki
reference
nibble

What is the relationship between information theory and Coding theory? - Quora

february 2017 by nhaliday

basically:

- finite vs. asymptotic

- combinatorial vs. probabilistic (lots of overlap there)

- worst-case (Hamming) vs. distributional (Shannon)

Information and coding theory most often appear together in the subject of error correction over noisy channels. Historically, they were born at almost exactly the same time - both Richard Hamming and Claude Shannon were working at Bell Labs when this happened. Information theory tends to heavily use tools from probability theory (together with an "asymptotic" way of thinking about the world), while traditional "algebraic" coding theory tends to employ mathematics that are much more finite sequence length/combinatorial in nature, including linear algebra over Galois Fields. The emergence in the late 90s and first decade of 2000 of codes over graphs blurred this distinction though, as code classes such as low density parity check codes employ both asymptotic analysis and random code selection techniques which have counterparts in information theory.

They do not subsume each other. Information theory touches on many other aspects that coding theory does not, and vice-versa. Information theory also touches on compression (lossy & lossless), statistics (e.g. large deviations), modeling (e.g. Minimum Description Length). Coding theory pays a lot of attention to sphere packing and coverings for finite length sequences - information theory addresses these problems (channel & lossy source coding) only in an asymptotic/approximate sense.
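
The worst-case vs. distributional contrast can be made concrete (numbers below are my illustration, not from the answer): the [7,4] Hamming code guarantees correction of any single error per block, while Shannon's capacity 1 - H(p) bounds the asymptotically achievable rate on a binary symmetric channel.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Shannon (asymptotic, probabilistic): BSC capacity at crossover p.
p = 0.05
capacity = 1 - h2(p)

# Hamming (finite, combinatorial): the [7,4] code has rate 4/7 and
# corrects any single bit flip per 7-bit block, whatever the channel.
hamming_rate = 4 / 7

print(round(capacity, 3), round(hamming_rate, 3))
```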

q-n-a
qra
math
acm
tcs
information-theory
coding-theory
big-picture
comparison
confusion
explanation
linear-algebra
polynomials
limits
finiteness
math.CO
hi-order-bits
synthesis
probability
bits
hamming
shannon
intricacy
nibble
s:null
signal-noise

Information Processing: Epistasis vs additivity

february 2017 by nhaliday

On epistasis: why it is unimportant in polygenic directional selection: http://rstb.royalsocietypublishing.org/content/365/1544/1241.short

- James F. Crow

The Evolution of Multilocus Systems Under Weak Selection: http://www.genetics.org/content/genetics/134/2/627.full.pdf

- Thomas Nagylaki

Data and Theory Point to Mainly Additive Genetic Variance for Complex Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1000008

The relative proportion of additive and non-additive variation for complex traits is important in evolutionary biology, medicine, and agriculture. We address a long-standing controversy and paradox about the contribution of non-additive genetic variation, namely that knowledge about biological pathways and gene networks imply that epistasis is important. Yet empirical data across a range of traits and species imply that most genetic variance is additive. We evaluate the evidence from empirical studies of genetic variance components and find that additive variance typically accounts for over half, and often close to 100%, of the total genetic variance. We present new theoretical results, based upon the distribution of allele frequencies under neutral and other population genetic models, that show why this is the case even if there are non-additive effects at the level of gene action. We conclude that interactions at the level of genes are not likely to generate much interaction at the level of variance.

hsu scitariat commentary links study list evolution population-genetics genetics methodology linearity nonlinearity comparison scaling-up nibble lens bounded-cognition ideas bio occam parsimony 🌞 summary quotes multi org:nat QTL stylized-facts article explanans sapiens biodet selection variance-components metabuch thinking models data deep-materialism chart behavioral-gen evidence-based empirical mutation spearhead model-organism bioinformatics linear-models math magnitude limits physics interdisciplinary stat-mech

general topology - What should be the intuition when working with compactness? - Mathematics Stack Exchange

january 2017 by nhaliday

http://math.stackexchange.com/questions/485822/why-is-compactness-so-important

The situation with compactness is sort of like the above. It turns out that finiteness, which you think of as one concept (in the same way that you think of "Foo" as one concept above), is really two concepts: discreteness and compactness. You've never seen these concepts separated before, though. When people say that compactness is like finiteness, they mean that compactness captures part of what it means to be finite in the same way that shortness captures part of what it means to be Foo.

--

As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors.

--

Compactness does for continuous functions what finiteness does for functions in general.

If a set A is finite then every function f:A→R has a max and a min, and every function f:A→R^n is bounded. If A is compact, then every continuous function from A to R has a max and a min and every continuous function from A to R^n is bounded.

If A is finite then every sequence of members of A has a subsequence that is eventually constant, and "eventually constant" is the only kind of convergence you can talk about without talking about a topology on the set. If A is compact, then every sequence of members of A has a convergent subsequence.

q-n-a overflow math topology math.GN concept finiteness atoms intuition oly mathtariat multi discrete gowers motivation synthesis hi-order-bits soft-question limits things nibble definition convergence abstraction span-cover

ho.history overview - History of the high-dimensional volume paradox - MathOverflow

q-n-a overflow math math.MG geometry spatial dimensionality limits measure concentration-of-measure history stories giants cartoons soft-question nibble paradox novelty high-dimension examples gotchas recruiting

january 2017 by nhaliday


Information Processing: Flipping DNA switches

january 2017 by nhaliday

N >> sqrt(N), so lots of SDs are up for grabs!
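A back-of-the-envelope version of the aphorism (the N = 10,000 and unit effect sizes below are illustrative, not from the post):

```python
# trait = sum of N small additive effects; the population SD scales like sqrt(N),
# while the range of attainable trait means scales like N
N = 10_000
sd = N ** 0.5                   # ~100 in per-variant effect units
max_shift = N                   # setting every variant favorably shifts the mean by ~N
sds_available = max_shift / sd  # = sqrt(N): ~100 standard deviations "up for grabs"
```

So with ten thousand causal variants there are on the order of a hundred SDs between the population mean and the theoretical optimum.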

hsu scitariat GWAS study summary commentary QTL iq education concentration-of-measure limits magnitude enhancement scaling-up aphorism genetics genomics methodology gwern discussion moments street-fighting models biodet nibble behavioral-gen ideas

pr.probability - "Entropy" proof of Brunn-Minkowski Inequality? - MathOverflow

q-n-a overflow math information-theory wormholes proofs geometry math.MG estimate gowers mathtariat dimensionality limits intuition insight stat-mech concentration-of-measure 👳 cartoons math.FA additive-combo measure entropy-like nibble tensors coarse-fine brunn-minkowski boltzmann high-dimension curvature convexity-curvature

january 2017 by nhaliday


Mikhail Leonidovich Gromov - Wikipedia

january 2017 by nhaliday

Gromov's style of geometry often features a "coarse" or "soft" viewpoint, analyzing asymptotic or large-scale properties.

Gromov is also interested in mathematical biology,[11] the structure of the brain and the thinking process, and the way scientific ideas evolve.[8]

math people giants russia differential geometry topology math.GR wiki structure meta:math meta:science interdisciplinary bio neuro magnitude limits science nibble coarse-fine wild-ideas convergence info-dynamics ideas

Information Processing: Is science self-correcting?

january 2017 by nhaliday

A toy model of the dynamics of scientific research, with probability distributions for accuracy of experimental results, mechanisms for updating of beliefs by individual scientists, crowd behavior, bounded cognition, etc. can easily exhibit parameter regions where progress is limited (one could even find equilibria in which most beliefs held by individual scientists are false!). Obviously the complexity of the systems under study and the quality of human capital in a particular field are important determinants of the rate of progress and its character.
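One minimal instance of such a toy model can be sketched in a few lines (the update rule, conformity weight, and experimental accuracy below are our own illustrative choices, not Hsu's):

```python
import random

def run(conformity, accuracy=0.7, n_agents=500, n_rounds=50, seed=0):
    """Agents hold a binary belief about a true proposition. Each round an agent
    either copies the current majority (with prob. `conformity`) or runs a noisy
    experiment that is right with prob. `accuracy`. Start from consensus on the
    *false* belief."""
    rng = random.Random(seed)
    beliefs = [False] * n_agents
    for _ in range(n_rounds):
        majority = sum(beliefs) > n_agents / 2
        beliefs = [
            majority if rng.random() < conformity
            else (rng.random() < accuracy)   # experiment: correct w.p. `accuracy`
            for _ in beliefs
        ]
    return sum(beliefs) / n_agents  # fraction holding the true belief

herd = run(conformity=0.95)   # crowd behavior dominates: false consensus persists
solo = run(conformity=0.0)    # beliefs track the (noisy) experiments
```

With heavy conformity the population stays locked near the false consensus even though every individual experiment favors the truth; with no conformity the fraction of correct beliefs converges to the experimental accuracy.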

hsu scitariat ioannidis science meta:science error commentary physics limits oscillation models equilibrium bounded-cognition complex-systems being-right info-dynamics the-trenches truth

Soft analysis, hard analysis, and the finite convergence principle | What's new

january 2017 by nhaliday

It is fairly well known that the results obtained by hard and soft analysis respectively can be connected to each other by various “correspondence principles” or “compactness principles”. It is however my belief that the relationship between the two types of analysis is in fact much closer[3] than just this; in many cases, qualitative analysis can be viewed as a convenient abstraction of quantitative analysis, in which the precise dependencies between various finite quantities has been efficiently concealed from view by use of infinitary notation. Conversely, quantitative analysis can often be viewed as a more precise and detailed refinement of qualitative analysis. Furthermore, a method from hard analysis often has some analogue in soft analysis and vice versa, though the language and notation of the analogue may look completely different from that of the original. I therefore feel that it is often profitable for a practitioner of one type of analysis to learn about the other, as they both offer their own strengths, weaknesses, and intuition, and knowledge of one gives more insight[4] into the workings of the other. I wish to illustrate this point here using a simple but not terribly well known result, which I shall call the “finite convergence principle” (thanks to Ben Green for suggesting this name; Jennifer Chayes has also suggested the “metastability principle”). It is the finitary analogue of an utterly trivial infinitary result – namely, that every bounded monotone sequence converges – but sometimes, a careful analysis of a trivial result can be surprisingly revealing, as I hope to demonstrate here.

gowers mathtariat math math.CA expert reflection philosophy meta:math logic math.CO lens big-picture symmetry limits finiteness nibble org:bleg coarse-fine metameta convergence expert-experience

st.statistics - Why is it so cool to square numbers (in terms of finding the standard deviation)? - MathOverflow

q-n-a overflow math stats concept motivation curiosity oly mathtariat probability soft-question acm moments nibble definition limits concentration-of-measure s:* characterization

january 2017 by nhaliday


The infinitesimal model | bioRxiv

january 2017 by nhaliday

Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. We first review the long history of the infinitesimal model in quantitative genetics. Then we provide a definition of the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, ... We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. In each case, by conditioning on the pedigree relating individuals in the population, we incorporate arbitrary selection and population structure. We suppose that we can observe the pedigree up to the present generation, together with all the ancestral traits, and we show, in particular, that the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order M^{-1/2}. Simulations suggest that in particular cases the convergence may be as fast as 1/M.

published version:

The infinitesimal model: Definition, derivation, and implications: https://sci-hub.tw/10.1016/j.tpb.2017.06.001

Commentary: Fisher’s infinitesimal model: A story for the ages: http://www.sciencedirect.com/science/article/pii/S0040580917301508?via%3Dihub

This commentary distinguishes three nested approximations, referred to as “infinitesimal genetics,” “Gaussian descendants” and “Gaussian population,” each plausibly called “the infinitesimal model.” The first and most basic is Fisher’s “infinitesimal” approximation of the underlying genetics – namely, many loci, each making a small contribution to the total variance. As Barton et al. (2017) show, in the limit as the number of loci increases (with enough additivity), the distribution of genotypic values for descendants approaches a multivariate Gaussian, whose variance–covariance structure depends only on the relatedness, not the phenotypes, of the parents (or whether their population experiences selection or other processes such as mutation and migration). Barton et al. (2017) call this rigorously defensible “Gaussian descendants” approximation “the infinitesimal model.” However, it is widely assumed that Fisher’s genetic assumptions yield another Gaussian approximation, in which the distribution of breeding values in a population follows a Gaussian — even if the population is subject to non-Gaussian selection. This third “Gaussian population” approximation, is also described as the “infinitesimal model.” Unlike the “Gaussian descendants” approximation, this third approximation cannot be rigorously justified, except in a weak-selection limit, even for a purely additive model. Nevertheless, it underlies the two most widely used descriptions of selection-induced changes in trait means and genetic variances, the “breeder’s equation” and the “Bulmer effect.” Future generations may understand why the “infinitesimal model” provides such useful approximations in the face of epistasis, linkage, linkage disequilibrium and strong selection.
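The M → ∞ convergence is easy to eyeball by simulation (a sketch, not the paper's machinery; the locus count, 1/2 allele frequencies, and 1/√M effect scaling are our illustrative choices):

```python
import random
import statistics

random.seed(1)
M = 500                  # number of loci
beta = 1 / M ** 0.5      # per-locus effect, scaled so the trait variance is O(1)

def genotype():
    # diploid genotype: a pair of 0/1 alleles at each locus, allele frequency 1/2
    return [(random.random() < 0.5, random.random() < 0.5) for _ in range(M)]

def trait(g):
    return beta * sum(a + b for a, b in g)

def child(mom, dad):
    # Mendelian inheritance: one randomly chosen allele from each parent per locus
    return [(random.choice(m), random.choice(d)) for m, d in zip(mom, dad)]

mom, dad = genotype(), genotype()
midparent = (trait(mom) + trait(dad)) / 2
kids = [trait(child(mom, dad)) for _ in range(4000)]
# offspring genetic values are approximately Gaussian, centred at the midparent
# value, with a segregation variance (about 1/4 here) set by parental
# heterozygosity rather than by the parental trait values themselves
```

Repeating with different parents shifts the mean but leaves the variance essentially fixed, which is the "variance independent of ancestral traits" claim in miniature.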

study exposition bio evolution population-genetics genetics methodology QTL preprint models unit len:long nibble linearity nonlinearity concentration-of-measure limits applications 🌞 biodet oscillation fisher perturbation stylized-facts chart ideas article pop-structure multi pdf piracy intricacy map-territory kinship distribution simulation ground-up linear-models applicability-prereqs bioinformatics

ho.history overview - Proofs that require fundamentally new ways of thinking - MathOverflow

january 2017 by nhaliday

my favorite:

Although this has already been said elsewhere on MathOverflow, I think it's worth repeating that Gromov is someone who has arguably introduced more radical thoughts into mathematics than anyone else. Examples involving groups with polynomial growth and holomorphic curves have already been cited in other answers to this question. I have two other obvious ones but there are many more.

I don't remember where I first learned about convergence of Riemannian manifolds, but I had to laugh because there's no way I would have ever conceived of such a notion. To be fair, all of the groundwork for this was laid out in Cheeger's thesis, but it was Gromov who reformulated everything as a convergence theorem and recognized its power.

Another time Gromov made me laugh was when I was reading what little I could understand of his book Partial Differential Relations. This book is probably full of radical ideas that I don't understand. The one I did was his approach to solving the linearized isometric embedding equation. His radical, absurd, but elementary idea was that if the system is sufficiently underdetermined, then the linear partial differential operator could be inverted by another linear partial differential operator. Both the statement and proof are for me the funniest in mathematics. Most of us view solving PDE's as something that requires hard work, involving analysis and estimates, and Gromov manages to do it using only elementary linear algebra. This then allows him to establish the existence of isometric embedding of Riemannian manifolds in a wide variety of settings.

q-n-a overflow soft-question big-list math meta:math history insight synthesis gowers mathtariat hi-order-bits frontier proofs magnitude giants differential geometry limits flexibility nibble degrees-of-freedom big-picture novelty zooming big-surf wild-ideas metameta courage convergence ideas innovation the-trenches discovery creative elegance

ca.analysis and odes - What's a nice argument that shows the volume of the unit ball in $mathbb R^n$ approaches 0? - MathOverflow

q-n-a overflow intuition math geometry spatial dimensionality limits tidbits math.MG measure magnitude visual-understanding oly concentration-of-measure pigeonhole-markov nibble fedja coarse-fine novelty high-dimension elegance

january 2017 by nhaliday


definition - Why square the difference instead of taking the absolute value in standard deviation? - Cross Validated

stats acm motivation synthesis q-n-a discussion probability tidbits overflow soft-question bias-variance curiosity moments robust comparison nibble s:* characterization limits concentration-of-measure

december 2016 by nhaliday


Breeding the breeder's equation - Gene Expression

december 2016 by nhaliday

- interesting fact about normal distribution: when thresholding Gaussian r.v. X ~ N(0, σ^2) at X > t, the new mean μ_s satisfies μ_s = pdf(X,t)/(1-cdf(X,t)) σ^2

- follows from direct calculation (any deeper reason?)

- note (using the asymptotic expansion of the complementary error function) that this is ~ t as t -> ∞ and tends to the constant sqrt(2/π) σ as t -> 0

- for X ~ N(0, 1), can calculate 0 = cdf(X, t)μ_<t + (1-cdf(X, t))μ_>t => μ_<t = -pdf(X, t)/cdf(X, t)

- this declines quickly w/ t (like e^{-t^2/2}). as t -> 0, it goes like -sqrt(2/pi) + higher-order terms ~ -0.8.

Average of a tail of a normal distribution: https://stats.stackexchange.com/questions/26805/average-of-a-tail-of-a-normal-distribution

Truncated normal distribution: https://en.wikipedia.org/wiki/Truncated_normal_distribution
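A quick Monte Carlo check of the selection-differential identity above (a sketch; the sample size and the particular σ, t are arbitrary):

```python
import math
import random

def truncated_mean_theory(t, sigma):
    """mu_s = sigma^2 * pdf(t) / (1 - cdf(t)) for X ~ N(0, sigma^2), X > t."""
    pdf = math.exp(-t**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
    cdf = 0.5 * (1 + math.erf(t / (sigma * math.sqrt(2))))
    return sigma**2 * pdf / (1 - cdf)

random.seed(0)
sigma, t = 1.5, 1.0
samples = [x for x in (random.gauss(0, sigma) for _ in range(1_000_000)) if x > t]
empirical = sum(samples) / len(samples)
theory = truncated_mean_theory(t, sigma)
# empirical and theory agree to a few decimal places
```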

gnxp explanation concept bio genetics population-genetics agri-mindset analysis scitariat org:sci nibble methodology distribution tidbits probability stats acm AMT limits magnitude identity integral street-fighting symmetry s:* tails multi q-n-a overflow wiki reference objektbuch proofs

Borel–Cantelli lemma - Wikipedia

november 2016 by nhaliday

- sum of probabilities finite => a.s. only finitely many occur

- "<=" w/ some assumptions (pairwise independence)

- classic result from CS 150 (problem set 1)
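A simulation of the easy direction, with P(A_n) = 1/n² (our illustrative choice), so that Σ P(A_n) = π²/6 < ∞:

```python
import random

random.seed(42)

def occurrences(n_max=5000):
    """Independent events A_n with P(A_n) = 1/n^2; return the indices that occur."""
    return [n for n in range(1, n_max + 1) if random.random() < 1 / n**2]

trials = [occurrences() for _ in range(1000)]
avg = sum(len(t) for t in trials) / len(trials)
# E[#occurrences] = sum 1/n^2 ~ pi^2/6 ~ 1.64, and almost surely only finitely
# many A_n occur; in practice nearly all occurrences have small index
```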

wiki reference estimate probability math acm concept levers probabilistic-method limits nibble borel-cantelli
- "<=" w/ some assumptions (pairwise independence)

- classic result from CS 150 (problem set 1)

november 2016 by nhaliday

Fiction: Missile Gap by Charles Stross — Subterranean Press

october 2016 by nhaliday

- flat-earth scifi

- little tidbit from fictional Carl Sagan: behavior of gravity on very large (near-infinite) disk

in limit, no inverse square law, constant downward force: ∫ G/(a^2+r^2) · a/sqrt(a^2+r^2) · σ r dr dθ = 2πGσ, independent of a

for large but finite radius R, asymptotically inverse square but near-constant for a << R (check via Maclaurin expansion around a and x=1/a)

- interesting depiction of war between eusocial species and humans (humans lose)
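Both regimes fall out of the closed form of that integral, ∫_0^R a r/(a²+r²)^{3/2} dr = 1 − a/sqrt(a²+R²) (the R and sample heights below are arbitrary):

```python
import math

def force_factor(a, R):
    """Vertical force at height a above a uniform disk of radius R is
    F(a) = 2*pi*G*sigma * f(a), with f(a) = 1 - a/sqrt(a^2 + R^2)."""
    return 1 - a / math.sqrt(a**2 + R**2)

R = 1e6
# near the disk (a << R): f(a) ~ 1, so F ~ 2*pi*G*sigma, independent of height
near = [force_factor(a, R) for a in (1.0, 10.0, 100.0)]
# far away (a >> R): f(a) ~ R^2/(2a^2), i.e. ordinary inverse-square attraction
far_ratio = force_factor(4e8, R) / (R**2 / (2 * (4e8) ** 2))
```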

fiction space len:long physics mechanics magnitude limits gravity 🔬 individualism-collectivism xenobio scifi-fantasy

real analysis - How do people apply the Lebesgue integration theory? - Mathematics Stack Exchange

october 2016 by nhaliday

main reason for using Lebesgue measure: completeness of L^p spaces

motivation math.CA math q-n-a explanation overflow soft-question hi-order-bits math.FA curiosity measure nibble integral convergence limits

Why Information Grows – Paul Romer

september 2016 by nhaliday

thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/

books summary review economics growth-econ interdisciplinary hmm physics thinking feynman tradeoffs paul-romer econotariat 🎩 🎓 scholar aphorism lens signal-noise cartoons skeleton s:** giants electromag mutation genetics genomics bits nibble stories models metameta metabuch problem-solving composition-decomposition structure abstraction zooming examples knowledge human-capital behavioral-econ network-structure info-econ communication learning information-theory applications volo-avolo map-territory externalities duplication spreading property-rights lattice multi government polisci policy counterfactual insight paradox parallax reduction empirical detail-architecture methodology crux visual-understanding theory-practice matching analytical-holistic branches complement-substitute local-global internet technology cost-benefit investing micro signaling limits public-goodish interpretation elegance meta:reading

The Rapacious Hardscrapple Frontier - Robin Hanson

august 2016 by nhaliday

http://biblehub.com/nasb/ecclesiastes/1.htm

9 That which has been is that which will be,

And that which has been done is that which will be done.

So there is nothing new under the sun.

Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization: http://mason.gmu.edu/~rhanson/filluniv.pdf

futurism hanson speculation pdf ratty frontier study space long-short-run evolution selection competition essay equilibrium coordination GT-101 game-theory adversarial prediction models migration allodium outcome-risk info-econ info-dynamics spreading expansionism conquest-empire cooperate-defect moloch limits local-global spatial magnitude density hi-order-bits economics gray-econ flux-stasis technology innovation novelty malthus farmers-and-foragers multi religion christianity theos quotes speed strategy uncertainty expectancy concentration-of-measure iidness egalitarianism-hierarchy status time random signal-noise vitality poetry literature canon growth-econ EGT volo-avolo degrees-of-freedom truth is-ought analysis methodology applicability-prereqs axelrod the-basilisk singularity ideas ecology alignment property-rights values

Shtetl-Optimized » Blog Archive » My Favorite Growth Rates

may 2016 by nhaliday

Scott Aaronson's favorite runtimes

yoga complexity list tcs aaronson hmm street-fighting objektbuch atoms tcstariat limits polynomials magnitude nibble org:bleg time-complexity space-complexity tricki scale

Too much of a good thing | The Economist

march 2016 by nhaliday

None of these accounts, though, explain the most troubling aspect of America’s profit problem: its persistence. Business theory holds that firms can at best enjoy only temporary periods of “competitive advantage” during which they can rake in cash. After that new companies, inspired by these rich pickings, will pile in to compete away those fat margins, bringing prices down and increasing both employment and investment. It’s the mechanism behind Adam Smith’s invisible hand.

In America that hand seems oddly idle. An American firm that was very profitable in 2003 (one with post-tax returns on capital of 15-25%, excluding goodwill) had an 83% chance of still being very profitable in 2013; the same was true for firms with returns of over 25%, according to McKinsey, a consulting firm. In the previous decade the odds were about 50%. The obvious conclusion is that the American economy is too cosy for incumbents.

Corporations Are Raking In Record Profits, But Workers Aren’t Seeing Much of It: http://www.motherjones.com/kevin-drum/2017/07/corporations-are-raking-in-record-profits-but-workers-arent-seeing-much-of-it/

Even Goldman Sachs thinks monopolies are pillaging American consumers: http://theweek.com/articles/633101/even-goldman-sachs-thinks-monopolies-are-pillaging-american-consumers

Schumpeter: The University of Chicago worries about a lack of competition: http://www.economist.com/news/business/21720657-its-economists-used-champion-big-firms-mood-has-shifted-university-chicago

Some radicals argue that the government is now so rotten that America is condemned to perpetual oligarchy and inequality. Political support for more competition is worryingly hard to find. Donald Trump has a cabinet of tycoons and likes to be chummy with bosses. The Republicans have become the party of incumbent firms, not of free markets or consumers. Too many Democrats, meanwhile, don’t trust markets and want the state to smother them in red tape, which hurts new entrants.

The Rise of Market Power and the Decline of Labor’s Share: https://promarket.org/rise-market-power-decline-labors-share/

A new paper by Jan De Loecker (of KU Leuven and Princeton University) and Jan Eeckhout (of the Barcelona Graduate School of Economics UPF and University College London) echoes these results, arguing that the decline of both the labor and capital shares, as well as the decline in low-skilled wages and other economic trends, have been aided by a significant increase in markups and market power.

...

Measuring markups, De Loecker explained in a conversation with ProMarket, is notoriously difficult due to the scarcity of data. In attempting to track markups across a wide set of firms and industries, De Loecker and Eeckhout diverged from the standard way in which Industrial Organization economists look at markups, the so-called “demand approach,” which requires a lot of data on consumer demand (prices, quantities, characteristics of products) and models of how firms compete. The standard approach, explains De Loecker, works when it is tailor-made for particular markets, but is “not feasible” when studying markups across many markets and over a long period of time.

To do that, De Loecker and Eeckhout use another approach, the “production approach,” which relies on standard, publicly-available balance sheet data and an assumption that firms will try to minimize costs, and does not require other assumptions regarding demand and market competition.
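The production approach reduces to a simple ratio: under cost minimization, the markup equals the output elasticity of a variable input times the ratio of sales to spending on that input. A toy sketch (the paper estimates output elasticities per industry from production-function data; the elasticity and firm figures below are made-up placeholders, not values from the paper):

```python
# Toy illustration of the "production approach" to markups:
# markup = output elasticity of the variable input * (sales / variable-input cost).
# Elasticity and firm figures below are hypothetical placeholders.

def markup(sales, variable_cost, output_elasticity):
    """Markup of price over marginal cost implied by cost minimization."""
    return output_elasticity * (sales / variable_cost)

# Hypothetical firm: 100 in sales, 60 in variable-input (e.g. COGS) spending,
# output elasticity of the variable input assumed to be 0.85.
mu = markup(sales=100.0, variable_cost=60.0, output_elasticity=0.85)
print(round(mu, 3))  # ~1.417: price roughly 42% above marginal cost
```

Note that only balance-sheet quantities (sales, input spending) and an elasticity estimate enter the formula, which is why the approach scales across many industries and years where demand-side data is unavailable.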

...

Markups, De Loecker and Eeckhout note, do not necessarily imply market power—but profits do. The enormous increase in profits over the past 35 years, they argue, is consistent with an increase in market power. “In perfect competition, your costs and total sales are identical, because there’s no difference between price and marginal costs. The extent to which these two numbers—the sales-to-wage bill and total-costs-to-wage bill—start differing is going to be immediately indicative of the market power,” says De Loecker.

Markup increases, De Loecker and Eeckhout find, became more pronounced following the 2000 and 2008 recessions. Curiously, they find that economy-wide it is mainly smaller firms that have the higher markups, which according to De Loecker is indicative of widely different characteristics between various industries. Within narrowly defined industries, however, the standard prediction holds: firms with larger market shares have higher markups as well. “Most of the action happens within industries, where we see the big guys getting bigger and their markups increase,” De Loecker explains.

http://www.janeeckhout.com/wp-content/uploads/RMP.pdf

http://www.overcomingbias.com/2017/08/marching-markups.html

The authors are correct that this can easily account for the apparent US productivity slowdown. Holding real productivity constant, if firms move up their demand curves to sell less at higher prices, then total output, and measured GDP, get smaller. Their numerical estimates suggest that, correcting for this effect, there has been no decline in US productivity growth since 1965. That’s a pretty big deal.

Accepting the main result that markups have been marching upward, the obvious question to ask is: why? But first, let’s review some clues from the paper. First, while industries with smaller firms tend to have higher markups, within each small industry, bigger firms have larger markups, and firms with higher markups pay higher dividends.

There has been little change in output elasticity, i.e., the rate at which variable costs change with the quantity of units produced. (So this isn’t about new scale economies.) There has also been little change in the bottom half of the distribution of markups; the big change has been a stretching of the upper half. Markups have increased more in larger industries, and the main change has been within industries, rather than a changing mix of industries in the economy. The fractions of income going to labor and to tangible capital have fallen, and firms respond less than they once did to wage changes. Firm accounting profits as a fraction of total income have risen fourfold since 1980.

...

If, like me, you buy the standard “free entry” argument for zero expected economic profits of early entrants, then the only remaining possible explanation is an increase in fixed costs relative to variable costs. Now as the paper notes, the fall in tangible capital spending and the rise in accounting profits suggest that this isn’t so much about short-term tangible fixed costs, like the cost to buy machines. But that still leaves a lot of other possible fixed costs, including real estate, innovation, advertising, firm culture, brand loyalty and prestige, regulatory compliance, and context-specific training. These all require long-term investments, and most of them aren’t tracked well by standard accounting systems.

I can’t tell well which of these fixed costs have risen more, though hopefully folks will collect enough data on these to see which ones correlate strongest with the industries and firms where markups have most risen. But I will invoke a simple hypothesis that I’ve discussed many times, which predicts a general rise of fixed costs: increasing wealth leading to stronger tastes for product variety. Simple models of product differentiation say that as customers care more about getting products nearer to their ideal point, more products are created and fixed costs become a larger fraction of total costs.
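One standard textbook version of the product-differentiation story Hanson invokes is the Salop circular-city model (my illustration, not a model used in the post; parameter values are arbitrary): with free entry, a stronger taste for varieties near one’s ideal point raises the equilibrium number of varieties and the share of fixed costs in total costs.

```python
import math

def salop_free_entry(t, L, F, c):
    """Salop circular-city model with free entry (standard textbook model).
    t: 'transport' cost, i.e. taste for getting close to one's ideal variety
    L: mass of consumers, F: fixed cost per variety, c: marginal cost."""
    n = math.sqrt(t * L / F)      # free-entry number of varieties
    price = c + t / n             # equilibrium price = marginal cost + t/n
    total_fixed = n * F
    total_variable = c * L
    fixed_share = total_fixed / (total_fixed + total_variable)
    return n, price, fixed_share

# Raising the taste-for-variety parameter t increases both the number of
# varieties and the fixed-cost share of total costs:
for t in (1.0, 2.0, 4.0):
    n, p, share = salop_free_entry(t=t, L=100.0, F=1.0, c=1.0)
    print(f"t={t}: varieties={n:.1f}, price={p:.2f}, fixed-cost share={share:.2f}")
```

With these placeholder parameters, quadrupling t doubles the number of varieties (from 10 to 20) and nearly doubles the fixed-cost share, matching the claim that fixed costs become a larger fraction of total costs as variety tastes strengthen.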

Note that increasing product variety is consistent with increasing concentration in a smaller number of firms, if each firm offers many more products and services than before.

https://niskanencenter.org/blog/markups-market-power/

http://marginalrevolution.com/marginalrevolution/2017/08/robin-hansons-take-rising-margins-debate.html

https://growthecon.com/blog/Markups/

Variable costs approach zero: http://www.arnoldkling.com/blog/variable-costs-approach-zero/

4. My guess is that, if anything, the two Jans’ paper understates the trend toward high markups. That is because my guess is that most corporate data allocates more labor to variable cost than really belongs there. Garett Jones pointed out that these days most workers do not produce widgets. Instead, they produce organizational capital. Garett Jones workers are part of overhead, not variable cost.

Intangible investment and monopoly profits: http://marginalrevolution.com/marginalrevolution/2017/09/intangible-investment-monopoly-profits.html

I’ve been reading the forthcoming Capitalism Without Capital: The Rise of the Intangible Economy, by Jonathan Haskel and Stian Westlake, which is one of this year’s most important and stimulating economic reads (I can’t say it is Freakonomics-style fun, but it is well-written relative to the nature of its subject matter.)

The book offers many valuable theoretical points and also observations about data. And note that intangible capital used to be below 30 percent of the S&P 500 in the 70s, now it is about 84 percent. That’s a big increase, and yet the topic just isn’t discussed that much (I cover it a bit in The Complacent Class, as a possible source of increase in business risk-aversion).

...

Now, I’ve put that all into my language and framing, rather than theirs. In any case, I suspect that many of the recent puzzles about mark-ups and monopoly power are in some way tied to the nature of intangible capital, and the rising value of intangible capital.

The one-sentence summary of my takeaway might be: Cross-business technology externalities help explain the mark-up, market power, and profitability puzzles.

Why has investment been weak?: http://marginalrevolution.com/marginalrevolution/2017/12/why-has-investment-been-weak.html

We analyze private fixed investment in the U.S. over the past 30 years. We show that investment is weak relative to measures of profitability and valuation — particularly Tobin’s Q, and that this weakness starts in the early 2000’s. There are two … [more]
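For reference, Tobin’s Q in the abstract is just the ratio of a firm’s market value to the replacement cost of its capital; Q-theory says investment should be strong when Q exceeds 1. A minimal sketch (the figures are hypothetical, not from the paper):

```python
def tobins_q(market_value, replacement_cost):
    """Tobin's Q: market value of the firm over the replacement cost of its
    capital stock. Q > 1 suggests new capital is worth more than it costs."""
    return market_value / replacement_cost

# Hypothetical firm valued at 150 with capital that would cost 100 to replace:
q = tobins_q(market_value=150.0, replacement_cost=100.0)
print(q)  # 1.5 -- Q-theory would predict robust investment
```

The puzzle the paper documents is precisely that measured investment has been weak even while Q has been high.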

finance
business
economics
prediction
macro
news
trends
org:rec
org:biz
org:anglo
winner-take-all
wonkish
market-power
industrial-org
competition
current-events
madisonian
scale
rent-seeking
usa
class-warfare
multi
org:mag
left-wing
compensation
corporation
rhetoric
policy
regulation
org:ngo
stagnation
white-paper
politics
government
chicago
tech
anomie
crooked
rot
malaise
chart
study
summary
capital
labor
distribution
innovation
correlation
flux-stasis
pdf
ratty
hanson
commentary
cracker-econ
gray-econ
diversity
farmers-and-foragers
roots
marginal-rev
supply-demand
marginal
randy-ayndy
nl-and-so-can-you
nationalism-globalism
trade
homo-hetero
econotariat
broad-econ
zeitgeist
the-bones
🎩
empirical
limits
garett-jones
management
heavy-industry
books
review
externalities
free-riding
top-n
list
investing
software
planning
career
programming
endogenous-exogenous
econometrics

march 2016 by nhaliday

Notes Essays—Peter Thiel’s CS183: Startup—Stanford, Spring 2012

business startups strategy course thiel contrarianism barons definite-planning entrepreneurialism lecture-notes skunkworks innovation competition market-power winner-take-all usa anglosphere duplication education higher-ed law ranking success envy stanford princeton harvard elite zero-positive-sum war truth realness capitalism markets darwinian rent-seeking google facebook apple microsoft amazon capital scale network-structure tech business-models twitter social media games frontier time rhythm space musk mobile ai transportation examples recruiting venture metabuch metameta skeleton crooked wisdom gnosis-logos thinking polarization synchrony allodium antidemos democracy things exploratory dimensionality nationalism-globalism trade technology distribution moments personality phalanges stereotypes tails plots visualization creative nietzschean thick-thin psych-architecture wealth class morality ethics status extra-introversion info-dynamics narrative stories fashun myth the-classics literature big-peeps crime

february 2016 by nhaliday
