nhaliday + tidbits   148

Shuffling - Wikipedia
The Gilbert–Shannon–Reeds model provides a mathematical model of the random outcomes of riffling, that has been shown experimentally to be a good fit to human shuffling[2] and that forms the basis for a recommendation that card decks be riffled seven times in order to randomize them thoroughly.[3] Later, mathematicians Lloyd M. Trefethen and Lloyd N. Trefethen authored a paper using a tweaked version of the Gilbert-Shannon-Reeds model showing that the minimum number of riffles for total randomization could also be 5, if the method of defining randomness is changed.[4][5]
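The GSR model is easy to simulate directly; a minimal Python sketch under the standard description (cut the deck at a Binomial(n, 1/2) position, then interleave by dropping cards from each half with probability proportional to that half's remaining size). The function name `gsr_riffle` is mine, not from the article:

```python
import random

def gsr_riffle(deck, rng=random):
    """One Gilbert-Shannon-Reeds riffle of a list of cards."""
    n = len(deck)
    # cut position ~ Binomial(n, 1/2)
    cut = sum(rng.random() < 0.5 for _ in range(n))
    left, right = deck[:cut], deck[cut:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        rem_l, rem_r = len(left) - i, len(right) - j
        # drop the next card from the left half with probability rem_l / (rem_l + rem_r)
        if rng.random() * (rem_l + rem_r) < rem_l:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

random.seed(0)
deck = list(range(52))
for _ in range(7):          # the "seven riffles" recommendation
    deck = gsr_riffle(deck)
assert sorted(deck) == list(range(52))
```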
nibble  tidbits  trivia  cocktail  wiki  reference  games  howto  random  models  math  applications  probability  math.CO  mixing  markov  sampling  best-practices  acm 
12 weeks ago by nhaliday
The Reason Why | West Hunter
There are odd things about the orbits of trans-Neptunian objects that suggest (to some) that there might be an undiscovered super-Earth-sized planet a few hundred AU from the Sun.

We haven’t seen it, but then it would be very hard to see. The underlying reason is simple enough, but I have never seen anyone mention it: the signal from such objects drops as the fourth power of distance from the Sun.   Not the second power, as is the case with luminous objects like stars, or faraway objects that are close to a star.  We can image close-in planets of other stars that are light-years distant, but it’s very difficult to see a fair-sized planet a few hundred AU out.
--
interesting little fun fact
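A back-of-the-envelope check of that fourth-power law (the 500 AU figure below is my illustrative choice, not from the post): sunlight reaching the body falls off as 1/d², and the reflected light returning to us falls off as 1/d² again.

```python
def reflected_signal_ratio(d_near_au, d_far_au):
    """How much fainter an identical reflecting body is at d_far vs d_near."""
    return (d_far_au / d_near_au) ** 4

# Neptune sits at ~30 AU; a similar body at 500 AU would be ~77,000x fainter,
# even though it is only ~17x farther away.
dimming = reflected_signal_ratio(30.0, 500.0)
assert 7e4 < dimming < 8e4
```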
west-hunter  scitariat  nibble  tidbits  scale  magnitude  visuo  electromag  spatial  space  measurement  paradox  physics 
july 2019 by nhaliday
Rational Sines of Rational Multiples of π
For which rational multiples of π is the sine rational? We have the three trivial cases
sin(0) = 0, sin(π/6) = 1/2, sin(π/2) = 1
and we wish to show that these are essentially the only distinct rational sines of rational multiples of π.

The assertion about rational sines of rational multiples of π follows from two fundamental lemmas. The first is

Lemma 1: For any rational number q the value of sin(qπ) is a root of a monic polynomial with integer coefficients.

[Pf uses some ideas unfamiliar to me: similarity parameter of Moebius (linear fraction) transformations, and finding a polynomial for a desired root by constructing a Moebius transformation with a finite period.]

...

Lemma 2: Any root of a monic polynomial f(x) with integer coefficients must either be an integer or irrational.

[Gauss's Lemma, cf Dummit-Foote.]

...
nibble  tidbits  org:junk  analysis  trivia  math  algebra  polynomials  fields  characterization  direction  math.CA  math.CV  ground-up 
july 2019 by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
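A quick illustration of why a single absolute epsilon is the wrong tool (values below are mine, chosen for the demonstration): it is simultaneously too strict at large magnitudes and too lax at small ones, while a relative tolerance such as math.isclose behaves sensibly at both scales.

```python
import math

# Large magnitudes: adjacent doubles near 1e16 are 2.0 apart, so an
# absolute epsilon of 1e-9 declares nearly-equal values "different".
a, b = 1e16, 1e16 + 2.0
assert a != b
assert not abs(a - b) < 1e-9              # absolute test: "not equal"
assert math.isclose(a, b, rel_tol=1e-12)  # relative test: equal to 12 digits

# Small magnitudes: values differing by a factor of 2 pass the absolute test.
x, y = 1e-20, 2e-20
assert abs(x - y) < 1e-9                  # absolute test: "equal" (wrong)
assert not math.isclose(x, y, rel_tol=1e-6)
```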

...

Regarding the propagation of rounding errors, there exists specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
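A minimal compensated-summation sketch, checked against math.fsum (which computes a correctly rounded sum of the inputs):

```python
import math

def kahan_sum(xs):
    """Kahan's compensated summation: track the low-order bits lost
    at each addition in a running correction term c."""
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c          # re-inject previously lost low-order bits
        t = s + y
        c = (t - s) - y    # what was lost in computing s + y
        s = t
    return s

xs = [0.1] * 10**6
exact = math.fsum(xs)
# the compensated sum is far closer to the correctly rounded result
assert abs(kahan_sum(xs) - exact) < abs(sum(xs) - exact)
```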

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]
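A minimal recursive sketch of pairwise summation (the base-case block size is my choice; real implementations use a larger unrolled base case for speed):

```python
import math

def pairwise_sum(xs, base=8):
    """Sum by recursive halving: roundoff grows O(eps log n) instead of
    the O(eps n) of a left-to-right loop."""
    if len(xs) <= base:
        s = 0.0
        for x in xs:
            s += x
        return s
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid], base) + pairwise_sum(xs[mid:], base)

xs = [0.1] * 10**5
exact = math.fsum(xs)
assert abs(pairwise_sum(xs) - exact) <= abs(sum(xs) - exact)
```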

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric "twiddle" factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula; this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n), or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia 
may 2019 by nhaliday
gn.general topology - Pair of curves joining opposite corners of a square must intersect---proof? - MathOverflow
In his 'Ordinary Differential Equations' (sec. 1.2) V.I. Arnold says "... every pair of curves in the square joining different pairs of opposite corners must intersect".

This is obvious geometrically but I was wondering how one could go about proving this rigorously. I have thought of a proof using Brouwer's Fixed Point Theorem which I describe below. I would greatly appreciate the group's comments on whether this proof is right and if a simpler proof is possible.

...

Since the full Jordan curve theorem is quite subtle, it might be worth pointing out that the theorem in question reduces to the Jordan curve theorem for polygons, which is easier.

Suppose on the contrary that the curves A, B joining opposite corners do not meet. Since A, B are closed sets, their minimum distance apart is some ε > 0. By compactness, each of A, B can be partitioned into finitely many arcs, each of which lies in a disk of diameter < ε/3. Then, by a homotopy inside each disk we can replace A, B by polygonal paths A′, B′ that join the opposite corners of the square and are still disjoint.

Also, we can replace A′, B′ by simple polygonal paths A″, B″ by omitting loops. Now we can close A″ to a polygon, and B″ goes from its "inside" to "outside" without meeting it, contrary to the Jordan curve theorem for polygons.

- John Stillwell
nibble  q-n-a  overflow  math  geometry  topology  tidbits  intricacy  intersection  proofs  gotchas  oly  mathtariat  fixed-point  math.AT  manifolds  intersection-connectedness 
october 2017 by nhaliday
Lecture 14: When's that meteor arriving
- Meteors as a random process
- Limiting approximations
- Derivation of the Exponential distribution
- Derivation of the Poisson distribution
- A "Poisson process"
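The exponential/Poisson connection in these notes can be sanity-checked by simulation (the rate and horizon below are arbitrary choices of mine): exponential interarrival times with rate λ produce arrival counts with mean λT over a window of length T.

```python
import random

random.seed(42)
rate, T = 3.0, 5000.0   # events per unit time, and the observation horizon

# Build the process from Exponential(rate) gaps and count arrivals in [0, T].
t, count = 0.0, 0
while True:
    t += random.expovariate(rate)
    if t > T:
        break
    count += 1

# Poisson process: E[count] = rate * T, with stddev sqrt(rate * T).
expected = rate * T
assert abs(count - expected) < 6 * expected ** 0.5
```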
nibble  org:junk  org:edu  exposition  lecture-notes  physics  mechanics  space  earth  probability  stats  distribution  stochastic-processes  closure  additive  limits  approximation  tidbits  acm  binomial  multiplicative 
september 2017 by nhaliday
Why is Earth's gravity stronger at the poles? - Physics Stack Exchange
The point is that if we approximate Earth with an oblate ellipsoid, then the surface of Earth is an equipotential surface; see e.g. this Phys.SE post.

Now, because the polar radius is smaller than the equatorial radius, the density of equipotential surfaces at the poles must be bigger than at the equator.

Or equivalently, the field strength g at the poles must be bigger than at the equator.
nibble  q-n-a  overflow  physics  mechanics  gravity  earth  space  intricacy  explanation  tidbits  spatial  direction  nitty-gritty  geography 
september 2017 by nhaliday
diffusion - Surviving under water in air bubble - Physics Stack Exchange
I get d ≈ 400 m.

It's interesting to note that this is independent of pressure: I've neglected the pressure dependence of D and human resilience to carbon dioxide, and the maximum safe concentration of carbon dioxide is independent of pressure, just derived from measurements at STP.

Finally, a bubble this large will probably rapidly break up due to buoyancy and Plateau-Rayleigh instabilities.
nibble  q-n-a  overflow  physics  mechanics  h2o  safety  short-circuit  tidbits  gedanken  fluid  street-fighting 
august 2017 by nhaliday
rotational dynamics - Why do non-rigid bodies try to increase their moment of inertia? - Physics Stack Exchange
This happens to an isolated rotating system that is not a rigid body.

Inside such a body (for example, a steel chain in free fall) the parts move relative to each other and there is internal friction that dissipates kinetic energy of the system, while angular momentum is conserved. The dissipation goes on until the parts stop moving with respect to each other, so the body rotates as a rigid body, even if it is not rigid by constitution.

The rotating state of the body that has the lowest kinetic energy for a given angular momentum is the one in which the body has the greatest moment of inertia (with respect to the center of mass). For example, a long chain thrown into free fall will twist and turn until it is all straight and rotating as a rigid body.

...

If L is constant (net torque of external forces acting on the system is zero) and the constitution and initial conditions allow it, the system's dissipation will work to diminish energy until it has the minimum value, which happens for the maximum I_a possible.
nibble  q-n-a  overflow  physics  mechanics  tidbits  spatial  rigidity  flexibility  invariance  direction  stylized-facts  dynamical  volo-avolo  street-fighting  yoga 
august 2017 by nhaliday
gravity - Gravitational collapse and free fall time (spherical, pressure-free) - Physics Stack Exchange
the parenthetical regarding Gauss's law just involves noting a shell of radius r + symmetry (so single parameter determines field along shell)
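For reference, the result this derivation leads to is the standard pressure-free collapse time, which depends only on the initial mean density; a quick sketch (constants from memory, worth double-checking):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def free_fall_time(rho):
    """Collapse time of a uniform, pressure-free sphere:
    t_ff = sqrt(3*pi / (32*G*rho)); note the radius cancels out."""
    return math.sqrt(3 * math.pi / (32 * G * rho))

# The Sun's mean density (~1408 kg/m^3) gives roughly half an hour.
t_sun = free_fall_time(1408.0)
assert 25 * 60 < t_sun < 35 * 60
```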
nibble  q-n-a  overflow  physics  mechanics  gravity  tidbits  time  phase-transition  symmetry  differential  identity  dynamical 
august 2017 by nhaliday
Introduction to Scaling Laws
https://betadecay.wordpress.com/2009/10/02/the-physics-of-scaling-laws-and-dimensional-analysis/
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is "the surface of a small solid is comparatively greater than that of a large one" because the surface goes like the square of a linear dimension, but the volume goes like the cube. Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits  elegance 
august 2017 by nhaliday
The Earth-Moon system
nice way of expressing Kepler's law (scaled by AU, solar mass, year, etc.) among other things

1. PHYSICAL PROPERTIES OF THE MOON
2. LUNAR PHASES
3. ECLIPSES
4. TIDES
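In those scaled units Kepler's third law is just P² = a³/M (P in years, a in AU, M in solar masses); a one-line check against textbook orbital elements:

```python
def period_years(a_au, m_solar=1.0):
    """Kepler's third law in solar-system units: P = sqrt(a^3 / M)."""
    return (a_au ** 3 / m_solar) ** 0.5

assert abs(period_years(1.0) - 1.0) < 1e-12        # Earth: 1 AU -> 1 yr
assert abs(period_years(5.203) - 11.86) < 0.05     # Jupiter
assert abs(period_years(30.07) - 164.8) < 0.5      # Neptune
```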
nibble  org:junk  explanation  trivia  data  objektbuch  space  mechanics  spatial  visualization  earth  visual-understanding  navigation  experiment  measure  marginal  gravity  scale  physics  nitty-gritty  tidbits  identity  cycles  time  magnitude  street-fighting  calculation  oceans  pro-rata  rhythm  flux-stasis 
august 2017 by nhaliday
Diophantine approximation - Wikipedia
- rationals perfectly approximated by themselves, badly approximated (eps>1/bq) by other rationals
- irrationals well-approximated (eps~1/q^2) by rationals:
https://en.wikipedia.org/wiki/Dirichlet%27s_approximation_theorem
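Dirichlet's theorem guarantees, for any real α and bound Q, some q ≤ Q with |α − p/q| < 1/(qQ) ≤ 1/q²; a brute-force search confirms it for π, where the first denominator that qualifies is the famous convergent 355/113:

```python
from math import pi

def dirichlet_approx(alpha, Q):
    """Smallest q <= Q with |alpha - p/q| < 1/(q*Q) (exists by pigeonhole)."""
    for q in range(1, Q + 1):
        p = round(alpha * q)
        if abs(alpha - p / q) < 1 / (q * Q):
            return p, q
    return None  # cannot happen for irrational alpha, by Dirichlet's theorem

p, q = dirichlet_approx(pi, 1000)
assert (p, q) == (355, 113)
assert abs(pi - p / q) < 1 / q**2   # the eps ~ 1/q^2 quality described above
```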
nibble  wiki  reference  math  math.NT  approximation  accuracy  levers  pigeonhole-markov  multi  tidbits  discrete  rounding  estimate  tightness  algebra 
august 2017 by nhaliday
Kelly criterion - Wikipedia
In probability theory and intertemporal portfolio choice, the Kelly criterion, Kelly strategy, Kelly formula, or Kelly bet, is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful). It was described by J. L. Kelly, Jr, a researcher at Bell Labs, in 1956.[1] The practical use of the formula has been demonstrated.[2][3][4]

The Kelly criterion prescribes betting a predetermined fraction of assets, and it can be counterintuitive. In one study,[5][6] each participant was given $25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so they could place about 300 bets, and the prizes were capped at $250. Behavior was far from optimal. "Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment." Using the Kelly criterion and based on the odds in the experiment, the right approach would be to bet 20% of the pot on each throw (see first example in Statement below). If losing, the size of the bet gets cut; if winning, the stake increases.
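For even-money bets the Kelly fraction is f* = 2p − 1; more generally f* = (p(b+1) − 1)/b at b-to-1 odds. The 20% figure from the study follows directly:

```python
def kelly_fraction(p, b=1.0):
    """Optimal bet fraction at b-to-1 odds with win probability p."""
    return (p * (b + 1) - 1) / b

# The study's 60% coin at even money: bet 20% of the pot each throw.
assert abs(kelly_fraction(0.60) - 0.20) < 1e-9
# A fair coin at even money has no edge: bet nothing.
assert abs(kelly_fraction(0.50)) < 1e-9
```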
nibble  betting  investing  ORFE  acm  checklists  levers  probability  algorithms  wiki  reference  atoms  extrema  parsimony  tidbits  decision-theory  decision-making  street-fighting  mental-math  calculation 
august 2017 by nhaliday
Roche limit - Wikipedia
In celestial mechanics, the Roche limit (pronounced /ʁɔʃ/) or Roche radius is the distance within which a celestial body, held together only by its own gravity, will disintegrate because a second celestial body's tidal forces exceed the first body's gravitational self-attraction.[1] Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit material tends to coalesce. The term is named after Édouard Roche, the French astronomer who first calculated this theoretical limit in 1848.[2]
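The rigid-body version of the limit has a closed form, d = R·(2ρ_M/ρ_m)^(1/3), with R and ρ_M the radius and density of the primary and ρ_m the density of the satellite; plugging in rough Earth/Moon densities (my numbers, from memory) lands near the commonly quoted ~9,500 km:

```python
def roche_limit_rigid(R_km, rho_primary, rho_satellite):
    """Rigid-body Roche limit: d = R * (2 * rho_M / rho_m)**(1/3)."""
    return R_km * (2 * rho_primary / rho_satellite) ** (1 / 3)

# Earth radius ~6371 km, mean density ~5514 kg/m^3; Moon ~3344 kg/m^3.
d = roche_limit_rigid(6371.0, 5514.0, 3344.0)
assert 9300 < d < 9700   # km; well inside the Moon's ~384,400 km orbit
```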
space  physics  gravity  mechanics  wiki  reference  nibble  phase-transition  proofs  tidbits  identity  marginal 
july 2017 by nhaliday
Strings, periods, and borders
A border of x is any proper prefix of x that equals a suffix of x.

...overlapping borders of a string imply that the string is periodic...

In the border array ß[1..n] of x, entry ß[i] is the length
of the longest border of x[1..i].
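The border array is exactly the KMP failure function; a short sketch (0-indexed here, so b[i] covers the prefix x[0..i]):

```python
def border_array(x):
    """b[i] = length of the longest border of x[0..i]
    (longest proper prefix that is also a suffix)."""
    b = [0] * len(x)
    k = 0
    for i in range(1, len(x)):
        while k > 0 and x[i] != x[k]:
            k = b[k - 1]          # fall back to the next-shorter border
        if x[i] == x[k]:
            k += 1
        b[i] = k
    return b

b = border_array("abacaba")
assert b == [0, 0, 1, 0, 1, 2, 3]
# period of the whole string = n - (longest border): 7 - 3 = 4 ("abac")
assert len("abacaba") - b[-1] == 4
```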
pdf  nibble  slides  lectures  algorithms  strings  exposition  yoga  atoms  levers  tidbits  sequential  backup 
may 2017 by nhaliday
Main Page - Competitive Programming Algorithms: E-Maxx Algorithms in English
original russian version: http://e-maxx.ru/algo/

some notable stuff:
- O(N) factorization sieve
- discrete logarithm
- factorial N! (mod P) in O(P log N)
- flow algorithms
- enumerating submasks
- bridges, articulation points
- Ukkonen algorithm
- sqrt(N) trick, eg, for range mode query
explanation  programming  algorithms  russia  foreign-lang  oly  oly-programming  problem-solving  accretion  math.NT  graphs  graph-theory  optimization  data-structures  yoga  tidbits  multi  anglo  language  arrows  strings 
february 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
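A direct numeric spot-check of the claim (the sets below are arbitrary choices of mine): the Jaccard distance 1 − |A∩B|/|A∪B| obeys the triangle inequality.

```python
from itertools import permutations

def jaccard(a, b):
    """Jaccard distance: 1 - |A ∩ B| / |A ∪ B| (0 for two empty sets)."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {1, 5}, set()]
# spot-check the triangle inequality over all ordered triples
for A, B, C in permutations(sets, 3):
    assert jaccard(A, C) <= jaccard(A, B) + jaccard(B, C) + 1e-12
```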
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
st.statistics - Lower bound for sum of binomial coefficients? - MathOverflow
- basically approximate w/ geometric sum (which scales as final term) and you can get it up to O(1) factor
- not good enough for many applications (want 1+o(1) approx.)
- Stirling can also give bound to constant factor precision w/ more calculation I believe
- tighter bound at Section 7.3 here: http://webbuild.knu.ac.kr/~trj/Combin/matousek-vondrak-prob-ln.pdf
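The geometric-sum trick from the first bullet, concretely: successive ratios C(n,i−1)/C(n,i) = i/(n−i+1) are at most k/(n−k+1) for i ≤ k, so the partial sum is within a constant factor of its last term whenever k is bounded away from n/2.

```python
from math import comb

def binom_partial_sum_bound(n, k):
    """Geometric upper bound on sum_{i<=k} C(n,i), valid for k < (n+1)/2."""
    r = k / (n - k + 1)            # bound on the ratio of consecutive terms
    return comb(n, k) / (1 - r)

n, k = 100, 30
exact = sum(comb(n, i) for i in range(k + 1))
bound = binom_partial_sum_bound(n, k)
assert comb(n, k) <= exact <= bound      # a correct upper bound...
assert bound < 2 * exact                  # ...off by only an O(1) factor here
```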
q-n-a  overflow  nibble  math  math.CO  estimate  tidbits  magnitude  concentration-of-measure  stirling  binomial  metabuch  tricki  multi  tightness  pdf  lecture-notes  exposition  probability  probabilistic-method  yoga 
february 2017 by nhaliday
probability - Variance of maximum of Gaussian random variables - Cross Validated
In full generality it is rather hard to find the right order of magnitude of the variance of a Gaussian supremum, since the tools from concentration theory are always suboptimal for the maximum function.

order ~ 1/log n
q-n-a  overflow  stats  probability  acm  orders  tails  bias-variance  moments  concentration-of-measure  magnitude  tidbits  distribution  yoga  structure  extrema  nibble 
february 2017 by nhaliday
bounds - What is the variance of the maximum of a sample? - Cross Validated
- sum of variances is always a bound
- can't do better even for iid Bernoulli
- looks like nice argument from well-known probabilist (using E[(X-Y)^2] = 2Var X), but not clear to me how he gets to sum_i instead of sum_{i,j} in the union bound?
edit: argument is that, for j = argmax_k Y_k, we have r < X_i - Y_j <= X_i - Y_i for all i, including i = argmax_k X_k
- different proof here (later pages): http://www.ism.ac.jp/editsec/aism/pdf/047_1_0185.pdf
Var(X_n:n) <= sum Var(X_k:n) + 2 sum_{i < j} Cov(X_i:n, X_j:n) = Var(sum X_k:n) = Var(sum X_k) = nσ^2
why are the covariances nonnegative? (are they?). intuitively seems true.
- for that, see https://pinboard.in/u:nhaliday/b:ed4466204bb1
- note that this proof shows more generally that sum Var(X_k:n) <= sum Var(X_k)
- apparently that holds for dependent X_k too? http://mathoverflow.net/a/96943/20644
q-n-a  overflow  stats  acm  distribution  tails  bias-variance  moments  estimate  magnitude  probability  iidness  tidbits  concentration-of-measure  multi  orders  levers  extrema  nibble  bonferroni  coarse-fine  expert  symmetry  s:*  expert-experience  proofs 
february 2017 by nhaliday
Cauchy-Schwarz inequality and Hölder's inequality - Mathematics Stack Exchange
- Cauchy-Schwarz (special case of Hölder's inequality where p=q=2) implies Hölder's inequality
- pith: define potential F(t) = int f^{pt} g^{q(1-t)}, show log F is midpoint-convex hence convex, then apply convexity between F(0) and F(1) for F(1/p) = ||fg||_1
q-n-a  overflow  math  estimate  proofs  ground-up  math.FA  inner-product  tidbits  norms  duality  nibble  integral 
january 2017 by nhaliday
5/8 bound in group theory - MathOverflow
very elegant proof (remember sum d_i^2 = |G| and # irreducible rep.s = # conjugacy classes)
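The 5/8 theorem says a finite nonabelian group has commuting-pair probability at most 5/8. A brute-force check on S3 (the smallest nonabelian group), where the probability is exactly 1/2:

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))          # S3 as permutation tuples

def compose(f, g):
    """(f o g)(i) = f(g(i))."""
    return tuple(f[g[i]] for i in range(3))

commuting = sum(compose(f, g) == compose(g, f) for f, g in product(S3, repeat=2))
prob = commuting / len(S3) ** 2
assert prob == 0.5
assert prob <= 5 / 8
```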
q-n-a  overflow  math  tidbits  proofs  math.RT  math.GR  oly  commutativity  pigeonhole-markov  nibble  shift 
january 2017 by nhaliday
mg.metric geometry - Pushing convex bodies together - MathOverflow
- volume of intersection of colliding, constant-velocity convex bodies is unimodal
- pf by Brunn-Minkowski inequality
q-n-a  overflow  math  oly  tidbits  geometry  math.MG  monotonicity  measure  spatial  dynamical  nibble  brunn-minkowski  intersection  curvature  convexity-curvature  intersection-connectedness 
january 2017 by nhaliday
probability - How to prove Bonferroni inequalities? - Mathematics Stack Exchange
- integrated version of inequalities for alternating sums of (N choose j), where r.v. N = # of events occurring
- inequalities for alternating binomial coefficients follow from general property of unimodal (increasing then decreasing) sequences, which can be gotten w/ two cases for increasing and decreasing resp.
- the final alternating zero sum property follows for binomial coefficients from expanding (1 - 1)^N = 0
- The idea of proving inequality by integrating simpler inequality of r.v.s is nice. Proof from CS 150 was more brute force from what I remember.
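The alternating-truncation behavior is easy to check by brute force on a tiny example (three events over a 6-point uniform space, my choice):

```python
from itertools import combinations
from fractions import Fraction

events = [{0, 1, 2}, {1, 2, 3}, {3, 4}]
space = 6
p_union = Fraction(len(set().union(*events)), space)

def inclusion_exclusion_partial(k):
    """First k terms of inclusion-exclusion for P(union of events)."""
    total = Fraction(0)
    for j in range(1, k + 1):
        for combo in combinations(events, j):
            total += (-1) ** (j + 1) * Fraction(len(set.intersection(*combo)), space)
    return total

assert inclusion_exclusion_partial(1) >= p_union   # odd truncation: upper bound
assert inclusion_exclusion_partial(2) <= p_union   # even truncation: lower bound
assert inclusion_exclusion_partial(3) == p_union   # full sum: exact
```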
q-n-a  overflow  math  probability  tcs  probabilistic-method  estimate  proofs  levers  yoga  multi  tidbits  metabuch  monotonicity  calculation  nibble  bonferroni  tricki  binomial  s:null  elegance 
january 2017 by nhaliday
reference request - The coupon collector's earworm - MathOverflow
I have a playlist with, say, N pieces of music. While using the shuffle option (each such piece is played randomly at each step), I realized that, generally speaking, I have to hear quite a lot of times the same piece before the last one appears. It makes me think of the following question:

At the moment the last not-yet-heard piece is played, what is, on average, the maximum number of times a single piece has already been played?

A: e log N + o(log N)
q-n-a  overflow  math  math.CO  tidbits  puzzles  probability  magnitude  oly  nibble  concentration-of-measure  binomial 
january 2017 by nhaliday
cv.complex variables - Absolute value inequality for complex numbers - MathOverflow
In general, once you've proven an inequality like this in R it holds automatically in any Euclidean space (including C) by averaging over projections. ("Inequality like this" = inequality where every term is the length of some linear combination of variable vectors in the space; here the vectors are a, b, c).

I learned this trick at MOP 30+ years ago, and don't know or remember who discovered it.
q-n-a  overflow  math  math.CV  estimate  tidbits  yoga  oly  mathtariat  math.FA  metabuch  inner-product  calculation  norms  nibble  tricki 
january 2017 by nhaliday
pr.probability - When are probability distributions completely determined by their moments? - MathOverflow
Roughly speaking, if the sequence of moments doesn't grow too quickly, then the distribution is determined by its moments. One sufficient condition is that if the moment generating function of a random variable has positive radius of convergence, then that random variable is determined by its moments.
q-n-a  overflow  math  acm  probability  characterization  tidbits  moments  rigidity  nibble  existence  convergence  series 
january 2017 by nhaliday