nhaliday + yoga   250

Power of a point - Wikipedia
The power of a point P (see Figure 1) can be defined equivalently as the product of distances from the point P to the two intersection points of any ray emanating from P.
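a quick numeric check of the invariance (my own sketch; the circle, point, and tolerance are made up):

```python
import math
import random

random.seed(0)
# Power of a point P w.r.t. a circle (center O, radius r): for any line
# through P meeting the circle at A and B, PA * PB = |PO|^2 - r^2,
# independent of the line's direction.
O, r = (0.0, 0.0), 2.0
P = (3.0, 1.0)          # P lies outside the circle here
power = (P[0] - O[0])**2 + (P[1] - O[1])**2 - r**2   # = 6 for these numbers

for _ in range(100):
    theta = random.uniform(0, math.pi)
    dx, dy = math.cos(theta), math.sin(theta)
    # intersect the line P + t*(dx, dy) with the circle: t^2 + b*t + c = 0
    fx, fy = P[0] - O[0], P[1] - O[1]
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * c
    if disc < 0:
        continue                      # this direction misses the circle
    t1 = (-b - math.sqrt(disc)) / 2
    t2 = (-b + math.sqrt(disc)) / 2
    A = (P[0] + t1 * dx, P[1] + t1 * dy)
    B = (P[0] + t2 * dx, P[1] + t2 * dy)
    # P outside => both intersections on the same side, so the unsigned
    # product of distances equals the power
    assert abs(math.dist(P, A) * math.dist(P, B) - power) < 1e-9
```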
nibble  math  geometry  spatial  ground-up  concept  metrics  invariance  identity  atoms  wiki  reference  measure  yoga  calculation 
september 2017 by nhaliday
rotational dynamics - Why do non-rigid bodies try to increase their moment of inertia? - Physics Stack Exchange
This happens to an isolated rotating system that is not a rigid body.

Inside such a body (for example, a steel chain in free fall) the parts move relative to each other and there is internal friction that dissipates the kinetic energy of the system, while angular momentum is conserved. The dissipation goes on until the parts stop moving with respect to each other, so the body rotates as a rigid body, even if it is not rigid by constitution.

The rotating state of the body that has the lowest kinetic energy for given angular momentum is that in which the body has the greatest moment of inertia (with respect to the center of mass). For example, a long chain thrown into free fall will twist and turn until it is all straight and rotating as a rigid body.

...

If L is constant (net torque of external forces acting on the system is zero) and the constitution and initial conditions allow it, the system's dissipation will work to diminish energy until it has the minimum value, which happens for the maximum I_a possible.
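the energy-vs-inertia tradeoff in one line, assuming rotation about a principal axis so E = L^2/(2I) (the chain numbers below are made up, and the folded chain is idealized as a half-length rod of the same mass):

```python
# For fixed angular momentum L, rotational kinetic energy E = L^2 / (2 I)
# decreases as the moment of inertia I grows, so dissipation at constant L
# drives the body toward the configuration of maximal I.
L = 10.0   # angular momentum, arbitrary units

def rotational_energy(I):
    return L**2 / (2 * I)

# thin rod (idealized chain) of mass m, length l, about its center:
# I = m l^2 / 12; folded in half, the same mass sits in half the length,
# giving m (l/2)^2 / 12 = I/4.
m, l = 1.0, 2.0
I_straight = m * l**2 / 12
I_folded = m * (l / 2)**2 / 12

assert I_straight > I_folded
assert rotational_energy(I_straight) < rotational_energy(I_folded)
```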
nibble  q-n-a  overflow  physics  mechanics  tidbits  spatial  rigidity  flexibility  invariance  direction  stylized-facts  dynamical  volo-avolo  street-fighting  yoga 
august 2017 by nhaliday
Introduction to Scaling Laws
https://betadecay.wordpress.com/2009/10/02/the-physics-of-scaling-laws-and-dimensional-analysis/
http://galileo.phys.virginia.edu/classes/304/scaling.pdf

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube. Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
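the square-cube point in a few lines (my sketch; the cube and the sizes are arbitrary):

```python
# For a cube of side s, surface goes like s^2 and volume like s^3, so the
# surface-to-volume ratio is 6/s: surface forces (drag) dominate small
# bodies, bulk forces (weight) dominate large ones.
def surface(s):
    return 6 * s**2

def volume(s):
    return s**3

for s in [0.1, 1.0, 10.0]:
    print(s, surface(s) / volume(s))   # ratio = 6/s, grows as s shrinks
```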
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits 
august 2017 by nhaliday
Inscribed angle - Wikipedia
pf:
- for triangle w/ one side = a diameter, draw isosceles triangle and use supplementary angle identities
- otherwise draw second triangle w/ side = a diameter, and use above result twice
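numeric sanity check of the theorem itself (my own sketch; the chord endpoints and tolerance are arbitrary):

```python
import math
import random

random.seed(0)
# Inscribed angle theorem on the unit circle: the angle subtended by chord
# AB at any point C on the major arc equals half the central angle AOB.
a, b = 0.3, 1.9                      # angular positions of A and B
A = (math.cos(a), math.sin(a))
B = (math.cos(b), math.sin(b))
central = b - a                      # central angle AOB (< pi here)

for _ in range(100):
    c = random.uniform(b + 0.01, a + 2 * math.pi - 0.01)  # C on the major arc
    C = (math.cos(c), math.sin(c))
    u = (A[0] - C[0], A[1] - C[1])
    v = (B[0] - C[0], B[1] - C[1])
    cosang = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    inscribed = math.acos(max(-1.0, min(1.0, cosang)))
    assert abs(inscribed - central / 2) < 1e-6
```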
nibble  math  geometry  spatial  ground-up  wiki  reference  proofs  identity  levers  yoga 
august 2017 by nhaliday
Analysis of variance - Wikipedia
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as "variation" among and between groups), developed by statistician and evolutionary biologist Ronald Fisher. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests, but is more conservative (results in less type I error) and is therefore suited to a wide range of practical problems.

good pic: https://en.wikipedia.org/wiki/Analysis_of_variance#Motivating_example

tutorial by Gelman: http://www.stat.columbia.edu/~gelman/research/published/econanova3.pdf

so one way to think of partitioning the variance:
y_ij = alpha_i + beta_j + eps_ij
Var(y_ij) = Var(alpha_i) + Var(beta_j) + 2 Cov(alpha_i, beta_j) + Var(eps_ij)
and alpha_i, beta_j are independent, so Cov(alpha_i, beta_j) = 0

can you make this work w/ interaction effects?
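a simulation of the additive partition above, assuming independent normal effects (the variances 4, 9, 1 are made up):

```python
import random

random.seed(1)
# Simulate the additive model y_ij = alpha_i + beta_j + eps_ij with
# independent effects and check Var(y) = Var(alpha) + Var(beta) + Var(eps)
# (the covariance term vanishes by independence).
n = 200_000
alpha = [random.gauss(0, 2) for _ in range(n)]
beta = [random.gauss(0, 3) for _ in range(n)]
eps = [random.gauss(0, 1) for _ in range(n)]
y = [a + b + e for a, b, e in zip(alpha, beta, eps)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

total = var(y)
parts = var(alpha) + var(beta) + var(eps)   # ~ 4 + 9 + 1 = 14
assert abs(total - parts) / parts < 0.05
```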
data-science  stats  methodology  hypothesis-testing  variance-components  concept  conceptual-vocab  thinking  wiki  reference  nibble  multi  visualization  visual-understanding  pic  pdf  exposition  lecture-notes  gelman  scitariat  tutorial  acm  ground-up  yoga 
july 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
I guess it's a metric

https://en.wikipedia.org/wiki/Explained_variation

http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have on average +0.4 SAT.

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)
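Hsu's dY = R dX claim checked by simulation (my sketch; R = 0.4 to mirror the SAT/GPA example):

```python
import random

random.seed(2)
# Standardized bivariate normal with correlation R, via
# Y = R*X + sqrt(1 - R^2) * Z with Z independent of X.
R = 0.4
n = 200_000
xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    y = R * x + (1 - R**2) ** 0.5 * random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

def slope(u, v):
    # least-squares slope of v on u (both standardized, mean ~ 0)
    return sum(a * b for a, b in zip(u, v)) / sum(a * a for a in u)

# regression is symmetric for standardized variables: +1 SD in X predicts
# +R SD in Y, and +1 SD in Y predicts +R SD in X
assert abs(slope(xs, ys) - R) < 0.02
assert abs(slope(ys, xs) - R) < 0.02
```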

https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure 
may 2017 by nhaliday
Strings, periods, and borders
A border of x is any proper prefix of x that equals a suffix of x.

...overlapping borders of a string imply that the string is periodic...

In the border array ß[1..n] of x, entry ß[i] is the length of the longest border of x[1..i].
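the border array is exactly the KMP failure function; a minimal 0-indexed implementation (my own sketch):

```python
def border_array(x: str) -> list[int]:
    # b[i] = length of the longest border (proper prefix that is also a
    # suffix) of x[:i+1]; the classic KMP failure-function computation.
    n = len(x)
    b = [0] * n
    k = 0
    for i in range(1, n):
        while k > 0 and x[i] != x[k]:
            k = b[k - 1]       # fall back to the next-longest border
        if x[i] == x[k]:
            k += 1
        b[i] = k
    return b

assert border_array("abaab") == [0, 0, 1, 1, 2]

# borders encode periodicity: the smallest period of x is n - b[n-1]
x = "abaababaab"
assert len(x) - border_array(x)[-1] == 5   # "abaab" repeats
```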
pdf  nibble  slides  lectures  algorithms  strings  exposition  yoga  atoms  levers  tidbits  sequential 
may 2017 by nhaliday
st.statistics - Lower bound for sum of binomial coefficients? - MathOverflow
- basically approximate w/ geometric sum (which scales as final term) and you can get it up to O(1) factor
- not good enough for many applications (want 1+o(1) approx.)
- Stirling can also give bound to constant factor precision w/ more calculation I believe
- tighter bound at Section 7.3 here: http://webbuild.knu.ac.kr/~trj/Combin/matousek-vondrak-prob-ln.pdf
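the geometric-sum bound from the first bullet, made concrete (n and k are arbitrary):

```python
from math import comb

# Sum_{i<=k} C(n, i) for k < n/2: successive ratios C(n, i-1)/C(n, i)
# = i/(n - i + 1) <= r := k/(n - k + 1) < 1, so the sum is dominated by
# a geometric series and is within a constant factor of the last term.
n, k = 100, 30
total = sum(comb(n, i) for i in range(k + 1))
r = k / (n - k + 1)
upper = comb(n, k) / (1 - r)   # geometric-series bound

assert comb(n, k) <= total <= upper
```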
q-n-a  overflow  nibble  math  math.CO  estimate  tidbits  magnitude  concentration-of-measure  stirling  binomial  metabuch  tricki  multi  tightness  pdf  lecture-notes  exposition  probability  probabilistic-method  yoga 
february 2017 by nhaliday
probability - Variance of maximum of Gaussian random variables - Cross Validated
In full generality it is rather hard to find the right order of magnitude of the variance of a Gaussian supremum since the tools from concentration theory are always suboptimal for the maximum function.

order ~ 1/log n
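a crude Monte Carlo check of the ~1/log n scaling (my sketch; trial counts are arbitrary and constants are ignored):

```python
import math
import random

random.seed(3)

def var_of_max(n, trials=2000):
    # empirical variance of the max of n iid standard normals
    ms = [max(random.gauss(0, 1) for _ in range(n)) for _ in range(trials)]
    mean = sum(ms) / trials
    return sum((m - mean) ** 2 for m in ms) / trials

for n in [10, 100, 1000]:
    print(n, var_of_max(n), 1 / math.log(n))   # same order of magnitude
```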
q-n-a  overflow  stats  probability  acm  orders  tails  bias-variance  moments  concentration-of-measure  magnitude  tidbits  distribution  yoga  structure  extrema  nibble 
february 2017 by nhaliday
6.896: Essential Coding Theory
- probabilistic method and Chernoff bound for Shannon coding
- probabilistic method for asymptotically good Hamming codes (Gilbert coding)
- sparsity used for LDPC codes
mit  course  yoga  tcs  complexity  coding-theory  math.AG  fields  polynomials  pigeonhole-markov  linear-algebra  probabilistic-method  lecture-notes  bits  sparsity  concentration-of-measure  linear-programming  linearity  expanders  hamming  pseudorandomness  crypto  rigorous-crypto  communication-complexity  no-go  madhu-sudan  shannon  unit  p:** 
february 2017 by nhaliday
probability - How to prove Bonferroni inequalities? - Mathematics Stack Exchange
- integrated version of inequalities for alternating sums of (N choose j), where r.v. N = # of events occurring
- inequalities for alternating binomial coefficients follow from general property of unimodal (increasing then decreasing) sequences, which can be gotten w/ two cases for increasing and decreasing resp.
- the final alternating zero sum property follows for binomial coefficients from expanding (1 - 1)^N = 0
- The idea of proving inequality by integrating simpler inequality of r.v.s is nice. Proof from CS 150 was more brute force from what I remember.
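an exact check of the inequalities on a tiny sample space (the events are random subsets; sizes made up):

```python
from itertools import combinations
import random

random.seed(4)
# Bonferroni inequalities: truncating inclusion-exclusion for P(union)
# after an odd number of terms over-estimates it, after an even number
# under-estimates it; the full alternating sum is exact.
omega = range(30)
events = [set(random.sample(list(omega), 12)) for _ in range(4)]

p_union = len(set().union(*events)) / len(omega)

partial = 0.0
for k in range(1, len(events) + 1):
    term = sum(len(set.intersection(*c)) for c in combinations(events, k))
    partial += (-1) ** (k + 1) * term / len(omega)
    if k % 2 == 1:
        assert partial >= p_union - 1e-12   # odd truncation: upper bound
    else:
        assert partial <= p_union + 1e-12   # even truncation: lower bound
```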
q-n-a  overflow  math  probability  tcs  probabilistic-method  estimate  proofs  levers  yoga  multi  tidbits  metabuch  monotonicity  calculation  nibble  bonferroni  tricki  binomial  s:null 
january 2017 by nhaliday
Computational Complexity: Favorite Theorems: The Yao Principle
The Yao Principle applies when we don't consider the algorithmic complexity of the players. For example in communication complexity we have two players who each have a separate half of an input string and they want to compute some function of the input with the minimum amount of communication between them. The Yao principle states that the best probabilistic strategies for the players will achieve exactly the same communication bounds as the best deterministic strategy over a worst-case distribution of inputs.

The Yao Principle plays a smaller role where we measure the running time of an algorithm since applying the Principle would require solving an extremely large linear program. But since so many of our bounds are in information-based models like communication and decision-tree complexity, the Yao Principle, though not particularly complicated, plays an important role in lower bounds in a large number of results in our field.
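Yao's principle is just LP duality / minimax; a toy version on a made-up 2x2 cost matrix, with grid search standing in for the LP:

```python
# Toy Yao's principle on a cost matrix C[a][x]: cost of deterministic
# algorithm a on input x (the numbers are invented, not a real measure).
# By minimax / LP duality,
#   min over algorithm mixtures p of max_x E_p[cost]
# = max over input distributions q of min_a E_q[cost].
C = [[1.0, 4.0],
     [3.0, 2.0]]

steps = 10_000
# randomized algorithm: run algorithm 0 w.p. p, algorithm 1 w.p. 1-p
lhs = min(max(p * C[0][x] + (1 - p) * C[1][x] for x in (0, 1))
          for p in (i / steps for i in range(steps + 1)))
# hard distribution: input 0 w.p. q, input 1 w.p. 1-q
rhs = max(min(q * C[a][0] + (1 - q) * C[a][1] for a in (0, 1))
          for q in (i / steps for i in range(steps + 1)))

assert abs(lhs - rhs) < 1e-3   # both sides coincide (2.5 for this matrix)
```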
tcstariat  tcs  complexity  adversarial  rand-approx  algorithms  game-theory  yoga  levers  communication-complexity  random  lower-bounds  average-case  nibble  org:bleg 
january 2017 by nhaliday
CS 731 Advanced Artificial Intelligence - Spring 2011
- statistical machine learning
- sparsity in regression
- graphical models
- exponential families
- variational methods
- MCMC
- dimensionality reduction, eg, PCA
- Bayesian nonparametrics
- compressive sensing, matrix completion, and Johnson-Lindenstrauss
course  lecture-notes  yoga  acm  stats  machine-learning  graphical-models  graphs  model-class  bayesian  learning-theory  sparsity  embeddings  markov  monte-carlo  norms  unit  nonparametric  compressed-sensing  matrix-factorization  features 
january 2017 by nhaliday