nhaliday + acm + stats   118

Stein's example - Wikipedia
Stein's example (or phenomenon or paradox), in decision theory and estimation theory, is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955.[1]

An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in channel estimation in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.

...

Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the James–Stein estimator, which works by starting at X and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of X from that point.
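
A quick simulation makes the phenomenon concrete. This is my own sketch (assuming Python with NumPy; the dimension, true mean, and trial count are arbitrary choices), comparing the ordinary estimator X with the James–Stein estimator that shrinks toward the origin:

import numpy as np

rng = np.random.default_rng(0)
d, trials = 10, 50_000
theta = np.full(d, 1.0)                    # true mean vector (arbitrary)

X = rng.normal(theta, 1.0, size=(trials, d))          # one observation per trial, X ~ N(theta, I)

# James-Stein: shrink X toward the origin by a data-dependent factor
js = (1.0 - (d - 2) / np.sum(X**2, axis=1, keepdims=True)) * X

print("MSE of X itself   :", np.mean(np.sum((X - theta) ** 2, axis=1)))   # ~ d = 10
print("MSE of James-Stein:", np.mean(np.sum((js - theta) ** 2, axis=1)))  # smaller on average

Note the improvement is only in the combined squared error; individual coordinates can come out worse, which is the point of the last paragraph of the excerpt.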
nibble  concept  levers  wiki  reference  acm  stats  probability  decision-theory  estimate  distribution  atoms 
february 2018 by nhaliday
Fitting a Structural Equation Model
seems rather unrigorous: nonlinear optimization, possibility of nonconvergence, doesn't even mention local vs. global optimality...
pdf  slides  lectures  acm  stats  hypothesis-testing  graphs  graphical-models  latent-variables  model-class  optimization  nonlinearity  gotchas  nibble  ML-MAP-E  iteration-recursion  convergence 
november 2017 by nhaliday
multivariate analysis - Is it possible to have a pair of Gaussian random variables for which the joint distribution is not Gaussian? - Cross Validated
The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological", is a bit misguided.

Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

note: there is a multivariate central limit theorem, so such applications have no problem

nibble  q-n-a  overflow  stats  math  acm  probability  distribution  gotchas  intricacy  characterization  structure  composition-decomposition  counterexample  limits  concentration-of-measure 
october 2017 by nhaliday
Karl Pearson and the Chi-squared Test
Pearson's paper of 1900 introduced what subsequently became known as the chi-squared test of goodness of fit. The terminology and allusions of 80 years ago create a barrier for the modern reader, who finds that the interpretation of Pearson's test procedure and the assessment of what he achieved are less than straightforward, notwithstanding the technical advances made since then. An attempt is made here to surmount these difficulties by exploring Pearson's relevant activities during the first decade of his statistical career, and by describing the work by his contemporaries and predecessors which seem to have influenced his approach to the problem. Not all the questions are answered, and others remain for further study.

original paper: http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf

How did Karl Pearson come up with the chi-squared statistic?: https://stats.stackexchange.com/questions/97604/how-did-karl-pearson-come-up-with-the-chi-squared-statistic
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.

You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)

Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).

He then goes on to discuss how to evaluate the p-value*, and then he correctly gives the upper tail area of a χ^2_12 beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
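
For reference, the test Pearson ends up with is one line of arithmetic today. A minimal sketch (mine, assuming Python with NumPy/SciPy; the counts are made up, e.g. 300 rolls of a die tested against the uniform null):

import numpy as np
from scipy import stats

observed = np.array([43, 49, 56, 45, 66, 41])      # hypothetical counts from 300 die rolls
expected = np.full(6, observed.sum() / 6)          # uniform null

chi2 = np.sum((observed - expected) ** 2 / expected)   # Pearson's statistic
df = len(observed) - 1                                 # no parameters estimated from the data
print("chi2 =", round(chi2, 2), " p =", stats.chi2.sf(chi2, df))   # upper tail area, as Pearson computed
print(stats.chisquare(observed, expected))             # cross-check with SciPy's built-in test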
nibble  papers  acm  stats  hypothesis-testing  methodology  history  mostly-modern  pre-ww2  old-anglo  giants  science  the-trenches  stories  multi  q-n-a  overflow  explanation  summary  innovation  discovery  distribution  degrees-of-freedom  limits 
october 2017 by nhaliday
Section 10 Chi-squared goodness-of-fit test.
- pf that chi-squared statistic for Pearson's test (multinomial goodness-of-fit) actually has chi-squared distribution asymptotically
- the gotcha: terms Z_j in sum aren't independent
- solution:
- compute the covariance matrix of the terms to be E[Z_iZ_j] = -sqrt(p_ip_j) for i ≠ j (and 1 - p_i on the diagonal)
- note that an equivalent way of sampling the Z_j is to take a random standard Gaussian and project onto the plane orthogonal to (sqrt(p_1), sqrt(p_2), ..., sqrt(p_r))
- that is equivalent to just sampling a Gaussian w/ 1 less dimension (hence df=r-1)
QED
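
A simulation check of both claims (my sketch, assuming NumPy/SciPy; the cell probabilities are arbitrary): the off-diagonal covariances come out as -sqrt(p_ip_j), and the statistic matches a chi-squared with r-1 df.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.1, 0.4])        # arbitrary cell probabilities
r, n, trials = len(p), 2_000, 20_000

counts = rng.multinomial(n, p, size=trials)
Z = (counts - n * p) / np.sqrt(n * p)                 # the Z_j terms from the notes above
T = np.sum(Z**2, axis=1)                              # Pearson statistic per trial

print("Cov(Z_1, Z_2) =", np.cov(Z[:, 0], Z[:, 1])[0, 1], " vs -sqrt(p_1 p_2) =", -np.sqrt(p[0] * p[1]))
print("mean/var of T =", T.mean(), T.var(), " vs chi2_{r-1}:", r - 1, 2 * (r - 1))
print("KS distance to chi2_{r-1}:", stats.kstest(T, stats.chi2(df=r - 1).cdf).statistic)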
pdf  nibble  lecture-notes  mit  stats  hypothesis-testing  acm  probability  methodology  proofs  iidness  distribution  limits  identity  direction  lifts-projections 
october 2017 by nhaliday
Lecture 14: When's that meteor arriving
- Meteors as a random process
- Limiting approximations
- Derivation of the Exponential distribution
- Derivation of the Poisson distribution
- A "Poisson process"
nibble  org:junk  org:edu  exposition  lecture-notes  physics  mechanics  space  earth  probability  stats  distribution  stochastic-processes  closure  additive  limits  approximation  tidbits  acm  binomial  multiplicative 
september 2017 by nhaliday
Analysis of variance - Wikipedia
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as "variation" among and between groups), developed by statistician and evolutionary biologist Ronald Fisher. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests, but is more conservative (results in less type I error) and is therefore suited to a wide range of practical problems.

good pic: https://en.wikipedia.org/wiki/Analysis_of_variance#Motivating_example

tutorial by Gelman: http://www.stat.columbia.edu/~gelman/research/published/econanova3.pdf

so one way to think of partitioning the variance:
y_ij = alpha_i + beta_j + eps_ij
Var(y_ij) = Var(alpha_i) + Var(beta_j) + Cov(alpha_i, beta_j) + Var(eps_ij)
and alpha_i, beta_j are independent, so Cov(alpha_i, beta_j) = 0

can you make this work w/ interaction effects?
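
Quick numerical check of that decomposition (my sketch, assuming NumPy; the effect SDs are arbitrary) - with independent row and column effects the component variances just add:

import numpy as np

rng = np.random.default_rng(0)
I, J = 200, 300
alpha = rng.normal(0, 2.0, size=(I, 1))        # row effects
beta  = rng.normal(0, 1.5, size=(1, J))        # column effects, independent of alpha
eps   = rng.normal(0, 1.0, size=(I, J))        # noise

y = alpha + beta + eps
print("Var(y)                       :", y.var())
print("Var(alpha)+Var(beta)+Var(eps):", alpha.var() + beta.var() + eps.var())   # ~ 4 + 2.25 + 1

As for interactions: an interaction term gamma_ij drawn independently of the main effects should just add another Var(gamma) term; the complications come when it is correlated with alpha or beta.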
data-science  stats  methodology  hypothesis-testing  variance-components  concept  conceptual-vocab  thinking  wiki  reference  nibble  multi  visualization  visual-understanding  pic  pdf  exposition  lecture-notes  gelman  scitariat  tutorial  acm  ground-up  yoga 
july 2017 by nhaliday
Stat 260/CS 294: Bayesian Modeling and Inference
Topics
- Priors (conjugate, noninformative, reference)
- Hierarchical models, spatial models, longitudinal models, dynamic models, survival models
- Testing
- Model choice
- Inference (importance sampling, MCMC, sequential Monte Carlo)
- Nonparametric models (Dirichlet processes, Gaussian processes, neutral-to-the-right processes, completely random measures)
- Decision theory and frequentist perspectives (complete class theorems, consistency, empirical Bayes)
- Experimental design
unit  course  berkeley  expert  michael-jordan  machine-learning  acm  bayesian  probability  stats  lecture-notes  priors-posteriors  markov  monte-carlo  frequentist  latent-variables  decision-theory  expert-experience  confidence  sampling 
july 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
I guess it's a metric

https://en.wikipedia.org/wiki/Explained_variation

http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have on average +0.4 SAT.

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)
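
Sanity check of the dY = R dX claim above by simulation (my sketch, assuming NumPy; r = 0.4 to match the SAT/GPA example):

import numpy as np

rng = np.random.default_rng(0)
r, n = 0.4, 1_000_000
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)   # standardized bivariate normal, corr r

print("E[Y | X ~ +1 SD] =", y[(x > 0.9) & (x < 1.1)].mean())   # ~ 0.4
print("E[X | Y ~ +1 SD] =", x[(y > 0.9) & (y < 1.1)].mean())   # ~ 0.4, regression works both ways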

https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure  dependence-independence 
may 2017 by nhaliday
Simultaneous confidence intervals for multinomial parameters, for small samples, many classes? - Cross Validated
- "Bonferroni approach" is just union bound
- so Pr(|hat p_i - p_i| > ε for any i) <= 2k e^{-ε^2 n} = δ
- ε = sqrt(ln(2k/δ)/n)
- Bonferroni approach should work for case of any dependent Bernoulli r.v.s
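
Putting the union bound into a function (my sketch, assuming NumPy; it uses the conservative half-width from the notes above, and the example counts are made up):

import numpy as np

def simultaneous_multinomial_cis(counts, delta=0.05):
    # union ("Bonferroni") bound: Pr(any |hat p_i - p_i| > eps) <= 2k e^{-eps^2 n} = delta
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    eps = np.sqrt(np.log(2 * k / delta) / n)
    p_hat = counts / n
    return np.clip(np.column_stack([p_hat - eps, p_hat + eps]), 0.0, 1.0)

print(simultaneous_multinomial_cis([12, 7, 3, 18, 5]))   # each row: [lower, upper] for p_i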
q-n-a  overflow  stats  moments  distribution  acm  hypothesis-testing  nibble  confidence  concentration-of-measure  bonferroni  parametric  synchrony 
february 2017 by nhaliday
probability - Variance of maximum of Gaussian random variables - Cross Validated
In full generality it is rather hard to find the right order of magnitude of the variance of a Gaussian supremum since the tools from concentration theory are always suboptimal for the maximum function.

order ~ 1/log n
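
The 1/log n rate is easy to see numerically (my sketch, assuming NumPy; replicate count arbitrary) - Var(max) shrinks while Var(max)*log(n) stays roughly the same order:

import numpy as np

rng = np.random.default_rng(0)
for n in [10, 100, 1_000, 10_000]:
    m = rng.standard_normal((2_000, n)).max(axis=1)    # max of n iid N(0,1), 2000 replicates
    print(f"n={n:>6}  Var(max)={m.var():.4f}  Var(max)*log(n)={m.var() * np.log(n):.3f}")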
q-n-a  overflow  stats  probability  acm  orders  tails  bias-variance  moments  concentration-of-measure  magnitude  tidbits  distribution  yoga  structure  extrema  nibble 
february 2017 by nhaliday
bounds - What is the variance of the maximum of a sample? - Cross Validated
- sum of variances is always a bound
- can't do better even for iid Bernoulli
- looks like nice argument from well-known probabilist (using E[(X-Y)^2] = 2Var X), but not clear to me how he gets to sum_i instead of sum_{i,j} in the union bound?
edit: argument is that, for j = argmax_k Y_k, we have r < X_i - Y_j <= X_i - Y_i for all i, including i = argmax_k X_k
- different proof here (later pages): http://www.ism.ac.jp/editsec/aism/pdf/047_1_0185.pdf
Var(X_n:n) <= sum Var(X_k:n) + 2 sum_{i < j} Cov(X_i:n, X_j:n) = Var(sum X_k:n) = Var(sum X_k) = nσ^2
why are the covariances nonnegative? (are they?). intuitively seems true.
- for that, see https://pinboard.in/u:nhaliday/b:ed4466204bb1
- note that this proof shows more generally that sum Var(X_k:n) <= sum Var(X_k)
- apparently that holds for dependent X_k too? http://mathoverflow.net/a/96943/20644
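
Numerical check of the inequality chain and of the covariance question above (my sketch, assuming NumPy; n and the normal distribution are arbitrary choices) - the order-statistic covariances do come out nonnegative here:

import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000
X = rng.standard_normal((trials, n))       # iid N(0,1): sum Var(X_k) = n
order = np.sort(X, axis=1)                 # columns are X_{1:n}, ..., X_{n:n}

C = np.cov(order, rowvar=False)
print("Var(X_k:n)           :", np.round(np.diag(C), 3))
print("sum Var(X_k:n)       :", np.diag(C).sum(), "<= n =", n)
print("min Cov(X_i:n, X_j:n):", C.min())            # nonnegative in this example
print("Var(X_n:n)           :", C[-1, -1])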
q-n-a  overflow  stats  acm  distribution  tails  bias-variance  moments  estimate  magnitude  probability  iidness  tidbits  concentration-of-measure  multi  orders  levers  extrema  nibble  bonferroni  coarse-fine  expert  symmetry  s:*  expert-experience  proofs 
february 2017 by nhaliday
Predicting with confidence: the best machine learning idea you never heard of | Locklin on science
The advantages of conformal prediction are many fold. These ideas assume very little about the thing you are trying to forecast, the tool you’re using to forecast or how the world works, and they still produce a pretty good confidence interval. Even if you’re an unrepentant Bayesian, using some of the machinery of conformal prediction, you can tell when things have gone wrong with your prior. The learners work online, and with some modifications and considerations, with batch learning. One of the nice things about calculating confidence intervals as a part of your learning process is they can actually lower error rates or use in semi-supervised learning as well. Honestly, I think this is the best bag of tricks since boosting; everyone should know about and use these ideas.

The essential idea is that a “conformity function” exists. Effectively you are constructing a sort of multivariate cumulative distribution function for your machine learning gizmo using the conformity function. Such CDFs exist for classical stuff like ARIMA and linear regression under the correct circumstances; CP brings the idea to machine learning in general, and to models like ARIMA when the standard parametric confidence intervals won’t work. Within the framework, the conformity function, whatever it may be, when used correctly can be guaranteed to give confidence intervals to within a probabilistic tolerance. The original proofs and treatments of conformal prediction, defined for sequences, are extremely computationally inefficient. The conditions can be relaxed in many cases, and the conformity function is in principle arbitrary, though good ones will produce narrower confidence regions. Somewhat confusingly, these good conformity functions are referred to as “efficient” - though they may not be computationally efficient.
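
A minimal split-conformal sketch of the basic trick (mine, assuming Python with NumPy; the data-generating process, point predictor, and miscoverage level are all arbitrary choices) - fit on one half, use absolute residuals on a held-out calibration half as conformity scores, and take their finite-sample-corrected quantile as the interval half-width:

import numpy as np

rng = np.random.default_rng(0)

# toy data: linear signal with x-dependent noise (a plain Gaussian interval would be miscalibrated)
def draw(n):
    x = rng.uniform(-3, 3, n)
    return x, 2 * x + (1 + np.abs(x)) * rng.standard_normal(n)

x_tr, y_tr = draw(1_000)        # proper training set
x_cal, y_cal = draw(1_000)      # calibration set
x_te, y_te = draw(5_000)        # fresh data to check coverage

coef = np.polyfit(x_tr, y_tr, deg=1)                      # any point predictor works here
scores = np.abs(y_cal - np.polyval(coef, x_cal))          # conformity scores
alpha = 0.1
n_cal = len(scores)
q = np.quantile(scores, np.ceil((1 - alpha) * (n_cal + 1)) / n_cal)   # corrected quantile

pred = np.polyval(coef, x_te)
print("target coverage   :", 1 - alpha)
print("empirical coverage:", np.mean(np.abs(y_te - pred) <= q))       # ~ 0.9, guaranteed marginally

The intervals here have constant width; the point of better ("efficient") conformity functions is to get narrower, locally adaptive regions with the same marginal guarantee.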
techtariat  acmtariat  acm  machine-learning  bayesian  stats  exposition  research  online-learning  probability  decision-theory  frontier  unsupervised  confidence 
february 2017 by nhaliday
What is the difference between inference and learning? - Quora
- basically boils down to latent variables vs. (hyper-)parameters
- so computing p(x_h|x_v,θ) vs. computing p(θ|X_v)
- from a completely Bayesian perspective, no real difference
- described in more detail in [Kevin Murphy, 10.4]
q-n-a  qra  jargon  machine-learning  stats  acm  bayesian  graphical-models  latent-variables  confusion  comparison  nibble 
january 2017 by nhaliday
teaching - Intuitive explanation for dividing by $n-1$ when calculating standard deviation? - Cross Validated
The standard deviation calculated with a divisor of n-1 is a standard deviation calculated from the sample as an estimate of the standard deviation of the population from which the sample was drawn. Because the observed values fall, on average, closer to the sample mean than to the population mean, the standard deviation which is calculated using deviations from the sample mean underestimates the desired standard deviation of the population. Using n-1 instead of n as the divisor corrects for that by making the result a little bit bigger.

Note that the correction has a larger proportional effect when n is small than when it is large, which is what we want because when n is larger the sample mean is likely to be a good estimator of the population mean.

...

A common one is that the definition of variance (of a distribution) is the second moment recentered around a known, definite mean, whereas the estimator uses an estimated mean. This loss of a degree of freedom (given the mean, you can reconstitute the dataset with knowledge of just n−1 of the data values) requires the use of n−1 rather than n to "adjust" the result.
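
The bias is easy to see directly (my sketch, assuming NumPy; true variance and sample size are arbitrary). Note the n-1 divisor makes the variance estimator exactly unbiased; the standard deviation itself remains slightly biased even with n-1, by Jensen's inequality.

import numpy as np

rng = np.random.default_rng(0)
true_var, n, trials = 4.0, 5, 200_000
x = rng.normal(0, np.sqrt(true_var), size=(trials, n))

print("E[ sum(x - xbar)^2 / n     ] =", x.var(axis=1, ddof=0).mean())   # ~ 4*(n-1)/n = 3.2, biased low
print("E[ sum(x - xbar)^2 / (n-1) ] =", x.var(axis=1, ddof=1).mean())   # ~ 4, unbiased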
q-n-a  overflow  stats  acm  intuition  explanation  bias-variance  methodology  moments  nibble  degrees-of-freedom  sampling-bias  generalization  dimensionality  ground-up  intricacy 
january 2017 by nhaliday
CS 731 Advanced Artificial Intelligence - Spring 2011
- statistical machine learning
- sparsity in regression
- graphical models
- exponential families
- variational methods
- MCMC
- dimensionality reduction, eg, PCA
- Bayesian nonparametrics
- compressive sensing, matrix completion, and Johnson-Lindenstrauss
course  lecture-notes  yoga  acm  stats  machine-learning  graphical-models  graphs  model-class  bayesian  learning-theory  sparsity  embeddings  markov  monte-carlo  norms  unit  nonparametric  compressed-sensing  matrix-factorization  features 
january 2017 by nhaliday
Galton–Watson process - Wikipedia
The Galton–Watson process is a branching stochastic process arising from Francis Galton's statistical investigation of the extinction of family names. The process models family names as patrilineal (passed from father to son), while offspring are randomly either male or female, and names become extinct if the family name line dies out (holders of the family name die without male descendants). This is an accurate description of Y chromosome transmission in genetics, and the model is thus useful for understanding human Y-chromosome DNA haplogroups, and is also of use in understanding other processes (as described below); but its application to actual extinction of family names is fraught. In practice, family names change for many other reasons, and dying out of name line is only one factor, as discussed in examples, below; the Galton–Watson process is thus of limited applicability in understanding actual family name distributions.
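
A small simulation of the process (my sketch, assuming NumPy; Poisson offspring and all parameters are arbitrary choices) - lines with mean <= 1 sons per male die out with probability 1, supercritical lines survive with positive probability:

import numpy as np

rng = np.random.default_rng(0)

def dies_out(mean_sons, max_gen=100):
    males = 1
    for _ in range(max_gen):
        if males == 0:
            return True
        males = rng.poisson(mean_sons, size=males).sum()
        if males > 10_000:        # effectively immortal beyond this point
            return False
    return males == 0

for m in [0.8, 1.0, 1.2]:
    p_ext = np.mean([dies_out(m) for _ in range(2_000)])
    print(f"mean sons per male = {m}: P(extinction) ~ {p_ext:.2f}")   # ~1.0, ~1.0 (slowly), ~0.69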
galton  history  stories  stats  stochastic-processes  acm  concept  wiki  reference  atoms  giants  early-modern  nibble  old-anglo  pre-ww2 
january 2017 by nhaliday
Existence of the moment generating function and variance - Cross Validated
This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).

In the answer below, we do the following:
1. Show that if the mgf is finite for at least one (strictly) positive value and one negative value, then all positive moments of X are finite (including nonintegral moments).
2. Prove that the condition in the first item above is equivalent to the distribution of X having exponentially bounded tails. In other words, the tails of X fall off at least as fast as those of an exponential random variable Z (up to a constant).
3. Provide a quick note on the characterization of the distribution by its mgf provided it satisfies the condition in item 1.
4. Explore some examples and counterexamples to aid our intuition and, particularly, to show that we should not read undue importance into the lack of finiteness of the mgf.
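
The standard counterexample for items 1-2 and 4 is the lognormal: every moment is finite, yet the tail is heavier than any exponential, so the mgf is infinite for every t > 0. A quick check (my sketch, assuming SciPy):

import numpy as np
from scipy import stats

ln = stats.lognorm(s=1.0)      # X = e^Z, Z ~ N(0,1)

for k in range(1, 5):
    print(f"E[X^{k}] = {ln.moment(k):.1f}   (theory e^(k^2/2) = {np.exp(k**2 / 2):.1f})")

for x in [5, 10, 20, 40]:
    print(f"x={x:>3}  P(X > x) / e^-x = {ln.sf(x) / np.exp(-x):.3g}")   # ratio blows up: no exponential tail bound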
q-n-a  overflow  math  stats  acm  probability  characterization  concept  moments  distribution  examples  counterexample  tails  rigidity  nibble  existence  s:null  convergence  series 
january 2017 by nhaliday
Breeding the breeder's equation - Gene Expression
- interesting fact about normal distribution: when thresholding Gaussian r.v. X ~ N(0, σ^2) at X > t, the new mean μ_s satisfies μ_s = pdf(X,t)/(1-cdf(X,t)) σ^2
- follows from direct calculation (any deeper reason?)
- note (using Taylor/asymptotic expansion of complementary error function) that this is Θ(σ) as t -> 0 and Θ(t) as t -> ∞
- for X ~ N(0, 1), can calculate 0 = cdf(X, t)μ_<t + (1-cdf(X, t))μ_>t => μ_<t = -pdf(X, t)/cdf(X, t)
- this declines quickly w/ t (like e^{-t^2/2}). as t -> 0, it goes like -sqrt(2/pi) + higher-order terms ~ -0.8.
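
Numerical check of both identities (my sketch, assuming NumPy/SciPy; σ and t are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, t = 2.0, 0.7
x = rng.normal(0, sigma, size=2_000_000)

mu_s = sigma**2 * stats.norm.pdf(t, scale=sigma) / stats.norm.sf(t, scale=sigma)
print("E[X | X > t]:", x[x > t].mean(), " vs pdf/(1-cdf)*sigma^2:", mu_s)

z = rng.standard_normal(2_000_000)
print("E[Z | Z < t]:", z[z < t].mean(), " vs -pdf(t)/cdf(t):", -stats.norm.pdf(t) / stats.norm.cdf(t))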

Average of a tail of a normal distribution: https://stats.stackexchange.com/questions/26805/average-of-a-tail-of-a-normal-distribution

Truncated normal distribution: https://en.wikipedia.org/wiki/Truncated_normal_distribution
gnxp  explanation  concept  bio  genetics  population-genetics  agri-mindset  analysis  scitariat  org:sci  nibble  methodology  distribution  tidbits  probability  stats  acm  AMT  limits  magnitude  identity  integral  street-fighting  symmetry  s:*  tails  multi  q-n-a  overflow  wiki  reference  objektbuch  proofs 
december 2016 by nhaliday