
ON THE GEOMETRY OF NASH EQUILIBRIA AND CORRELATED EQUILIBRIA
Abstract: It is well known that the set of correlated equilibrium distributions of an n-player noncooperative game is a convex polytope that includes all the Nash equilibrium distributions. We demonstrate an elementary yet surprising result: the Nash equilibria all lie on the boundary of the polytope.
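For context, the standard reason the set is a polytope (textbook material, not part of the abstract): a distribution μ over joint action profiles is a correlated equilibrium exactly when it satisfies the finitely many linear constraints

μ(a) ≥ 0 for every profile a,    Σ_a μ(a) = 1,
Σ_{a_{-i}} μ(a_i, a_{-i}) · [u_i(a_i, a_{-i}) − u_i(a_i', a_{-i})] ≥ 0    for every player i and every pair of actions a_i, a_i',

where u_i is player i's payoff. Every constraint is linear in μ, so the feasible set is a bounded intersection of half-spaces, i.e., a convex polytope.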
pdf  nibble  papers  ORFE  game-theory  optimization  geometry  dimensionality  linear-algebra  equilibrium  structure  differential  correlation  iidness  acm  linear-programming  spatial  characterization  levers 
may 2019 by nhaliday
teaching - Intuitive explanation for dividing by $n-1$ when calculating standard deviation? - Cross Validated
The standard deviation calculated with a divisor of n−1 is an estimate, computed from the sample, of the standard deviation of the population from which the sample was drawn. Because the observed values fall, on average, closer to the sample mean than to the population mean, a standard deviation computed from deviations around the sample mean underestimates the desired standard deviation of the population. Using n−1 instead of n as the divisor corrects for that by making the result a little bit bigger.

Note that the correction has a larger proportional effect when n is small than when it is large, which is what we want because when n is larger the sample mean is likely to be a good estimator of the population mean.

...

A common explanation is that the variance of a distribution is defined as the second moment about a known, definite mean, whereas the estimator uses an estimated mean. This loss of a degree of freedom (given the mean, you can reconstitute the dataset with knowledge of just n−1 of the data values) requires the use of n−1 rather than n to "adjust" the result.
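The bias is easy to exhibit numerically (a minimal sketch of mine, not from the thread): with divisor n the estimator averages (n−1)/n times the true variance, since E[(1/n)Σ(x_i − x̄)²] = ((n−1)/n)σ², and the n−1 divisor removes exactly that factor:

import random

random.seed(0)

def var(xs, ddof):
    # Sample variance with divisor n - ddof (ddof=0: biased, ddof=1: Bessel-corrected).
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - ddof)

N_TRIALS = 100_000
n = 5  # small sample size, where the correction matters most

# Population: uniform on [0, 1], true variance = 1/12 ≈ 0.0833.
biased = unbiased = 0.0
for _ in range(N_TRIALS):
    sample = [random.random() for _ in range(n)]
    biased += var(sample, ddof=0)
    unbiased += var(sample, ddof=1)

print("true variance:           ", 1 / 12)
print("mean of /n estimates:    ", biased / N_TRIALS)    # ≈ (4/5) * 1/12
print("mean of /(n-1) estimates:", unbiased / N_TRIALS)  # ≈ 1/12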
q-n-a  overflow  stats  acm  intuition  explanation  bias-variance  methodology  moments  nibble  degrees-of-freedom  sampling-bias  generalization  dimensionality  ground-up  intricacy 
january 2017 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
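A quick numerical illustration of the unit-ball fact (my sketch, not from Tao's answer): by the law of large numbers, the squared norm of a vector with n iid coordinates is a sum of n iid terms, so ||X||/sqrt(n) concentrates; and since the ball's volume within radius r scales as r^n, the outer shell carries almost everything:

import math
import random

random.seed(0)

def norm_over_sqrt_n(n):
    # ||X|| / sqrt(n) for X with n iid standard-normal coordinates;
    # the squared norm is n times an average of iid terms, so this concentrates.
    x = [random.gauss(0, 1) for _ in range(n)]
    return math.sqrt(sum(xi * xi for xi in x) / n)

for n in [10, 100, 10_000]:
    vals = [norm_over_sqrt_n(n) for _ in range(200)]
    shell = 1 - 0.99 ** n  # fraction of the unit n-ball's volume in the outer 1% shell
    print(f"n={n:6d}  ||X||/sqrt(n) in [{min(vals):.3f}, {max(vals):.3f}]  "
          f"outer-shell volume fraction = {shell:.4f}")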

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp(n) almost-orthogonal vectors [see the numerical sketch after this list]. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?
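A quick numerical check of Aaronson's point 4 (my sketch, not from the answer): random unit vectors in R^n are pairwise almost orthogonal, with inner products of size about 1/sqrt(n), which is what lets exponentially many of them coexist:

import math
import random

random.seed(0)

def random_unit_vector(n):
    v = [random.gauss(0, 1) for _ in range(n)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

n, k = 200, 300  # 300 vectors in R^200: more than n of them
vecs = [random_unit_vector(n) for _ in range(k)]

# Largest inner product in absolute value over all pairs.
worst = max(
    abs(sum(a * b for a, b in zip(vecs[i], vecs[j])))
    for i in range(k) for j in range(i + 1, k)
)
print(f"{k} unit vectors in R^{n}: max |<v_i, v_j>| = {worst:.3f}")
# Typically ~0.3 here, i.e. every pair is nearly orthogonal, even though
# only n = 200 of them could be exactly orthogonal.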

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
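A numerical version of Milman's picture (my sketch, not Kalai's): put k random points on the unit sphere of R^n and sample Dirichlet-weighted convex combinations of them (not exactly uniform on the hull, but indicative). The corners sit at norm 1, yet typical hull points sit far inside:

import math
import random

random.seed(0)

def random_unit_vector(n):
    v = [random.gauss(0, 1) for _ in range(n)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

def random_convex_combination(points):
    # Dirichlet(1,...,1) weights via normalized exponentials.
    w = [random.expovariate(1.0) for _ in points]
    t = sum(w)
    dim = len(points[0])
    return [sum(wi / t * p[d] for wi, p in zip(w, points)) for d in range(dim)]

n, k = 1000, 20
corners = [random_unit_vector(n) for _ in range(k)]
norms = [
    math.sqrt(sum(x * x for x in random_convex_combination(corners)))
    for _ in range(200)
]
print(f"corners at norm 1; mean hull-point norm ≈ {sum(norms) / len(norms):.3f}")
# ≈ 0.3: nearly all of the hull sits deep inside the sphere, so the body
# looks like a small ball with thin spikes reaching out to the k corners.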
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature  elegance  guessing 
december 2016 by nhaliday
CS229T/STATS231: Statistical Learning Theory
A course by Percy Liang covering a mix of statistics, computational learning theory, and some online learning. It also surveys the state of the art in the theoretical understanding of deep learning (not much to cover, unfortunately).
yoga  stanford  course  machine-learning  stats  👳  lecture-notes  acm  kernels  learning-theory  deep-learning  frontier  init  ground-up  unit  dimensionality  vc-dimension  entropy-like  extrema  moments  online-learning  bandits  p:***  explore-exploit  advanced 
june 2016 by nhaliday

