nhaliday + yoga   270

Unhappy Go Lucky!
- regularly publishes unofficial editorials for AtCoder
- also seems like an otaku >_>
oly  oly-programming  blog  stream  algorithms  problem-solving  accretion  puzzles  techtariat  japan  asia  quixotic  yoga 
8 weeks ago by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
1400-1900:
To reach rating 1900, the following skills are needed:
- You know and can use the major algorithms: brute force, DP, DFS, BFS, Dijkstra, Binary Indexed Tree, nCr/nPr, mod inverse, bitmasks, binary search
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)

If you are not good at fast coding and fast debugging, you should solve AtCoder problems. Statistically, many Japanese coders are relatively good at fast coding while not so good at solving difficult problems; I think that's because of AtCoder.

I recommend solving problems C and D of AtCoder Beginner Contests. On average, if you can solve problem C of an AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)

...

Interestingly, typical problems are concentrated in Div2-only rounds. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can handle typical problems but are not good at solving problems above R1500 on Codeforces, you should start TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not good at Div.1+Div.2 combined or Div.1+Div.2 separated rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or thinking. Since TopCoder has a lot of problems that use them (and that are also light on implementation!), you should solve TopCoder problems.

I recommend solving the Div1 Easy of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before you attempt one, you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ for that information.

1900-2200:
To reach rating 2200, the following skills are needed:
- You know and can use the 10 algorithms I listed on p. 11, plus segment trees (including lazy propagation)
- You can solve problems very fast: for example, 5 mins for R1100, 10 mins for R1500, 15 mins for R1800, 40 mins for R2000
- You have decent skills for mathematical thinking and for analyzing problems
- A strong mentality: you can keep thinking about a solution for more than an hour, and you don't give up even if you are below average in Div1 in the middle of a contest

This is only my way to practice, but I did many virtual contests when I was rated 2000. Here, a virtual contest does not mean "Virtual Participation" on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rated 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/; on this website you can make virtual contests from problems on many online judges (e.g. AtCoder, Codeforces, HackerRank, CodeChef, POJ, ...).

If you cannot solve a problem within the virtual contest and could not find the solution during it, you should read the editorial. Google it (e.g. if you want the editorial of Codeforces Round #556 (Div. 1), search "Codeforces Round #556 editorial"). There is one more important thing for gaining rating on Codeforces: to solve problems fast, you should equip yourself with a coding library (or template code). For example, I think that having segment tree libraries, lazy segment tree libraries, a modint library, an FFT library, a geometry library, etc. is very effective.
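As a rough sketch of what one such library component looks like (not the guide author's template), here is a minimal Fenwick tree — the Binary Indexed Tree named among the 10 algorithms above:

```python
# Minimal Fenwick tree (Binary Indexed Tree): point update and prefix-sum query, both O(log n).
# Illustrative sketch of a typical template-library piece, not the guide author's own code.
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, delta):
        # a[i] += delta, 1-indexed
        while i <= self.n:
            self.t[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):
        # returns a[1] + ... + a[i]
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s

f = Fenwick(10)
f.add(3, 5)
f.add(7, 2)
print(f.prefix_sum(6), f.prefix_sum(10))  # 5 7
```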

2200 to 2400:
Rating 2200 and rating 2400 are actually very different ...

To reach rating 2400, the following skills are needed:
- You should have the skills stated in the previous section (rating 2200)
- You should be able to solve the difficult problems that are solved by fewer than 100 people in Div1 contests

...

First, there are a lot of educational problems on AtCoder. I recommend solving problems E and F (especially the 700-900 point problems) of AtCoder Regular Contests, especially ARC058-ARC090. Old AtCoder Regular Contests are balanced between "thinking" and "typical" problems, but sadly, AtCoder Grand Contest and recent AtCoder Regular Contest problems are too biased toward thinking, I believe, so I don't recommend them if your goal is to gain rating on Codeforces. (Though if you want to reach a rating above 2600, you should solve problems from AtCoder Grand Contest.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because the start was early).

If you cannot solve a problem, I recommend giving up and reading the editorial on the following schedule:
Point value:        600     700     800     900     1000-
CF rating:          R2000   R2200   R2400   R2600   R2800
Time to editorial:  40 min  50 min  60 min  70 min  80 min

If you solve AtCoder's educational problems, your competitive programming skills will increase. But there is one more problem: without practical skills, your rating won't increase. So you should do 50+ virtual participations (especially Div.1) on Codeforces. In virtual participation, you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to apply the skills you learned on AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems except the very difficult ones (e.g. fewer than 30 people solved it in the contest) after the virtual contest. I also recommend writing reflections in a notebook about strategy, lessons learned, and improvements after reading the editorials.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 on Codeforces) for a couple of hours. If you cannot reach the solution after a couple of hours of thinking, I recommend reading the editorial, because you can learn a lot. Solving high-level problems may give you the chance to gain over 100 rating points in a single contest, and can also help you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
Anti-hash test. - Codeforces
- Thue-Morse sequence
- nice paper: http://www.mii.lt/olympiads_in_informatics/pdf/INFOL119.pdf
In general, polynomial string hashing is a useful technique in the construction of efficient string algorithms. One simply needs to remember to carefully select the modulus M and the variable of the polynomial p depending on the application. A good rule of thumb is to pick both values as prime numbers, with M as large as possible so that no integer overflow occurs and p at least the size of the alphabet.
2.2. Upper Bound on M
[stuff about 32- and 64-bit integers]
2.3. Lower Bound on M
On the other side, M is bounded due to the well-known birthday paradox: if we consider a collection of m keys with m ≥ 1.2√M, then the chance of a collision occurring within this collection is at least 50% (assuming that the distribution of fingerprints is close to uniform on the set of all strings). Thus, if the birthday paradox applies, one needs to choose M = ω(m²) to have a fair chance of avoiding a collision. However, one should note that the birthday paradox does not always apply. As a benchmark, consider the following two problems.

I generally prefer to use Schwartz-Zippel to reason about collision probabilities w/ this kind of thing, eg, https://people.eecs.berkeley.edu/~sinclair/cs271/n3.pdf.

A good way to get more accurate results: just use multiple primes and the Chinese remainder theorem to get as large an M as you need w/o going beyond 64-bit arithmetic.

more on this: https://codeforces.com/blog/entry/60442
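A minimal sketch of that multiple-prime idea (the moduli and bases below are common illustrative choices, not prescribed values):

```python
# Sketch of double polynomial hashing: two independent (modulus, base) pairs. By the Chinese
# remainder theorem the pair of values behaves like a single hash modulo M1*M2 (~10^18),
# which pushes the birthday bound far beyond typical contest input sizes.
M1, B1 = 1_000_000_007, 131
M2, B2 = 998_244_353, 137

def poly_hash(s):
    h1 = h2 = 0
    for ch in s:
        c = ord(ch) - ord('a') + 1   # map 'a'..'z' to 1..26 so leading characters matter
        h1 = (h1 * B1 + c) % M1
        h2 = (h2 * B2 + c) % M2
    return (h1, h2)

print(poly_hash("abracadabra") == poly_hash("abracadabra"))  # True
print(poly_hash("abracadabra") == poly_hash("abracadabrz"))  # False (with overwhelming probability)
```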
oly  oly-programming  gotchas  howto  hashing  algorithms  strings  random  best-practices  counterexample  multi  pdf  papers  nibble  examples  fields  polynomials  lecture-notes  yoga  probability  estimate  magnitude  hacker  adversarial  CAS  lattice  discrete 
august 2019 by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.

...

Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
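A short sketch of compensated (Kahan) summation, compared against Python's correctly rounded math.fsum; the test data is made up to exaggerate naive drift:

```python
# Sketch of Kahan (compensated) summation.
import random
from math import fsum   # fsum gives a correctly rounded reference sum

def kahan_sum(xs):
    total = 0.0
    c = 0.0                       # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y       # (t - total) is what actually got added; c tracks the error
        total = t
    return total

random.seed(0)
xs = [1e16] + [random.uniform(0, 1) for _ in range(10**5)]  # tiny terms after one huge term
naive = 0.0
for x in xs:
    naive += x
print(abs(naive - fsum(xs)))          # large drift: each tiny addend is rounded away
print(abs(kahan_sum(xs) - fsum(xs)))  # compensated sum stays close to the exact result
```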

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_k works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]
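A sketch of pairwise summation with a small naive base case (the cutoff of 8 is an arbitrary illustrative choice):

```python
# Sketch of pairwise (cascade) summation.
import random
from math import fsum

def pairwise_sum(xs, lo=0, hi=None):
    if hi is None:
        hi = len(xs)
    n = hi - lo
    if n <= 8:                    # sum short runs naively
        s = 0.0
        for i in range(lo, hi):
            s += xs[i]
        return s
    mid = lo + n // 2             # split in half, sum each half, add the two partial sums
    return pairwise_sum(xs, lo, mid) + pairwise_sum(xs, mid, hi)

random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(10**6)]
naive = 0.0
for x in xs:
    naive += x
print(abs(naive - fsum(xs)), abs(pairwise_sum(xs) - fsum(xs)))  # pairwise error is typically smaller
```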

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n), or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually <3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).
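A rough sketch of that two-table trick (not FFTW's actual code; the table-size split below is an assumption for illustration):

```python
# Split the index k into low and high bits; since ω_n^k = ω_n^(k_low) * ω_n^(k_high * 2^b),
# two tables of about √n entries each suffice, at the cost of one complex multiply per lookup.
import cmath, math

def split_twiddle_tables(n):
    b = int(math.log2(n)) // 2                      # low-bit width (assumes n is a power of two)
    low = [cmath.exp(-2j * math.pi * j / n) for j in range(1 << b)]
    high = [cmath.exp(-2j * math.pi * (j << b) / n) for j in range(n >> b)]
    return b, low, high

def twiddle(k, b, low, high):
    return low[k & ((1 << b) - 1)] * high[k >> b]   # ω_n^k from a single multiplication

n = 1 << 12
b, low, high = split_twiddle_tables(n)
k = 1234
print(abs(twiddle(k, b, low, high) - cmath.exp(-2j * math.pi * k / n)))  # ~1e-16
```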

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia 
may 2019 by nhaliday
Power of a point - Wikipedia
The power of a point P (see Figure 1) can be defined equivalently as the product of the distances from P to the two points where any ray emanating from P intersects the circle.
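A quick numeric check of this (made-up circle and point; rays that miss the circle are skipped):

```python
# For P outside a circle of radius r centered at the origin, every ray from P that meets the
# circle gives the same product of distances PA * PB = |PO|^2 - r^2 (the power of P).
import math, random

r = 2.0
P = (5.0, 1.0)                       # P lies outside the circle
power = P[0]**2 + P[1]**2 - r**2

random.seed(1)
checked = 0
while checked < 3:
    theta = random.uniform(0, 2 * math.pi)
    d = (math.cos(theta), math.sin(theta))
    # points on the line are P + t*d; |P + t*d| = r gives t^2 + 2 t (P·d) + (|P|^2 - r^2) = 0
    b = 2 * (P[0] * d[0] + P[1] * d[1])
    disc = b * b - 4 * power
    if disc < 0:
        continue                     # this direction misses the circle; try another
    t1 = (-b - math.sqrt(disc)) / 2
    t2 = (-b + math.sqrt(disc)) / 2
    print(abs(t1 * t2), power)       # product of distances to the intersection points = power
    checked += 1
```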
nibble  math  geometry  spatial  ground-up  concept  metrics  invariance  identity  atoms  wiki  reference  measure  yoga  calculation 
september 2017 by nhaliday
rotational dynamics - Why do non-rigid bodies try to increase their moment of inertia? - Physics Stack Exchange
This happens to an isolated rotating system that is not a rigid body.

Inside such a body (for example, a steel chain in free fall) the parts move relative to each other, and there is internal friction that dissipates the kinetic energy of the system while angular momentum is conserved. The dissipation goes on until the parts stop moving with respect to each other, so the body rotates as a rigid body even if it is not rigid by constitution.

The rotating state of the body that has the lowest kinetic energy for a given angular momentum is the one in which the body has the greatest moment of inertia with respect to its center of mass (since E = L²/(2I) for fixed L). For example, a long chain thrown into free fall will twist and turn until it is all straight and rotating as a rigid body.

...

If L is constant (the net torque of external forces acting on the system is zero) and the constitution and initial conditions allow it, the system's dissipation will work to diminish the energy until it reaches the minimum value, which happens for the maximum I_a possible.
nibble  q-n-a  overflow  physics  mechanics  tidbits  spatial  rigidity  flexibility  invariance  direction  stylized-facts  dynamical  volo-avolo  street-fighting  yoga 
august 2017 by nhaliday
Introduction to Scaling Laws
https://betadecay.wordpress.com/2009/10/02/the-physics-of-scaling-laws-and-dimensional-analysis/
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube. Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
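A tiny numeric illustration of the square-cube point (a unit cube scaled by s):

```python
# Scale a cube's edge by s: surface grows like s^2, volume like s^3,
# so the surface-to-volume ratio falls off as 1/s.
for s in [1.0, 2.0, 4.0, 8.0]:
    surface, volume = 6 * s**2, s**3
    print(s, surface, volume, surface / volume)   # the ratio halves each time the edge doubles
```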
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits  elegance 
august 2017 by nhaliday
Inscribed angle - Wikipedia
pf:
- for triangle w/ one side = a diameter, draw isosceles triangle and use supplementary angle identities
- otherwise draw second triangle w/ side = a diameter, and use above result twice
nibble  math  geometry  spatial  ground-up  wiki  reference  proofs  identity  levers  yoga 
august 2017 by nhaliday
Analysis of variance - Wikipedia
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as "variation" among and between groups), developed by statistician and evolutionary biologist Ronald Fisher. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests, but is more conservative (results in less type I error) and is therefore suited to a wide range of practical problems.

good pic: https://en.wikipedia.org/wiki/Analysis_of_variance#Motivating_example

tutorial by Gelman: http://www.stat.columbia.edu/~gelman/research/published/econanova3.pdf

so one way to think of partitioning the variance:
y_ij = alpha_i + beta_j + eps_ij
Var(y_ij) = Var(alpha_i) + Var(beta_j) + 2 Cov(alpha_i, beta_j) + Var(eps_ij)  (taking eps_ij independent of the effects)
and alpha_i, beta_j are independent, so Cov(alpha_i, beta_j) = 0

can you make this work w/ interaction effects?
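A small simulation of the additive decomposition above (arbitrary illustrative variances; no interaction term):

```python
# With independent row effects alpha_i, column effects beta_j and noise eps_ij,
# Var(y) ≈ Var(alpha) + Var(beta) + Var(eps).
import random
from statistics import pvariance

random.seed(0)
I, J = 300, 300
alpha = [random.gauss(0, 2.0) for _ in range(I)]          # variance 4
beta = [random.gauss(0, 1.0) for _ in range(J)]           # variance 1
y = [alpha[i] + beta[j] + random.gauss(0, 0.5)            # noise variance 0.25
     for i in range(I) for j in range(J)]
print(pvariance(y))                                       # ≈ 4 + 1 + 0.25 = 5.25
```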
data-science  stats  methodology  hypothesis-testing  variance-components  concept  conceptual-vocab  thinking  wiki  reference  nibble  multi  visualization  visual-understanding  pic  pdf  exposition  lecture-notes  gelman  scitariat  tutorial  acm  ground-up  yoga 
july 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
I guess it's a metric

https://en.wikipedia.org/wiki/Explained_variation

http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have on average +0.4 SAT.

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)
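A quick simulation of the dY = R dX reading above (R = 0.4 to echo the SAT/GPA example; sample sizes are arbitrary):

```python
# For standardized X, Y with correlation R, students around +1 SD in X average about +R SD in Y.
import random
from statistics import mean

random.seed(0)
R = 0.4
pairs = [(x, R * x + (1 - R**2) ** 0.5 * random.gauss(0, 1))   # standardized bivariate normal
         for x in (random.gauss(0, 1) for _ in range(200_000))]
print(mean(y for x, y in pairs if 0.9 < x < 1.1))   # ≈ 0.4
```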

https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure  dependence-independence 
may 2017 by nhaliday
Strings, periods, and borders
A border of x is any proper prefix of x that equals a suffix of x.

...overlapping borders of a string imply that the string is periodic...

In the border array ß[1..n] of x, entry ß[i] is the length of the longest border of x[1..i].
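A sketch of the standard computation of the border array (this is the classic KMP failure function; indices are 0-based here rather than the slides' 1-based ß[1..n]):

```python
def border_array(x):
    n = len(x)
    beta = [0] * n                 # beta[i] = length of the longest border of x[0..i]
    k = 0
    for i in range(1, n):
        while k > 0 and x[i] != x[k]:
            k = beta[k - 1]        # fall back to the next-longest border
        if x[i] == x[k]:
            k += 1
        beta[i] = k
    return beta

b = border_array("abaab")
print(b)                           # [0, 0, 1, 1, 2]
print(len("abaab") - b[-1])        # 3 = the shortest period, via the period/border connection
```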
pdf  nibble  slides  lectures  algorithms  strings  exposition  yoga  atoms  levers  tidbits  sequential  backup 
may 2017 by nhaliday
Main Page - Competitive Programming Algorithms: E-Maxx Algorithms in English
original russian version: http://e-maxx.ru/algo/

some notable stuff:
- O(N) factorization sieve
- discrete logarithm
- factorial N! (mod P) in O(P log N)
- flow algorithms
- enumerating submasks
- bridges, articulation points
- Ukkonen algorithm
- sqrt(N) trick, eg, for range mode query
explanation  programming  algorithms  russia  foreign-lang  oly  oly-programming  problem-solving  accretion  math.NT  graphs  graph-theory  optimization  data-structures  yoga  tidbits  multi  anglo  language  arrows  strings 
february 2017 by nhaliday
st.statistics - Lower bound for sum of binomial coefficients? - MathOverflow
- basically approximate w/ a geometric sum (which scales like its final term) and you get it up to an O(1) factor (numeric check below)
- not good enough for many applications (want 1+o(1) approx.)
- Stirling can also give bound to constant factor precision w/ more calculation I believe
- tighter bound at Section 7.3 here: http://webbuild.knu.ac.kr/~trj/Combin/matousek-vondrak-prob-ln.pdf
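A quick numeric check of the geometric-sum bound from the first bullet (n and k below are arbitrary, with k < n/2):

```python
# Consecutive terms satisfy C(n,i-1)/C(n,i) = i/(n-i+1) <= k/(n-k+1) for i <= k, so the sum of
# C(n,0..k) is sandwiched between C(n,k) and C(n,k) / (1 - k/(n-k+1)).
from math import comb

n, k = 100, 30
total = sum(comb(n, i) for i in range(k + 1))
last = comb(n, k)
upper = last / (1 - k / (n - k + 1))
print(total / last)    # >= 1 (trivial lower bound)
print(total / upper)   # <= 1 (geometric-series upper bound)
```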
q-n-a  overflow  nibble  math  math.CO  estimate  tidbits  magnitude  concentration-of-measure  stirling  binomial  metabuch  tricki  multi  tightness  pdf  lecture-notes  exposition  probability  probabilistic-method  yoga 
february 2017 by nhaliday