nhaliday + acm + curvature   30

Subgradients - S. Boyd and L. Vandenberghe
If f is convex and x ∈ int dom f, then ∂f(x) is nonempty and bounded. To establish that ∂f(x) ≠ ∅, we apply the supporting hyperplane theorem to the convex set epi f at the boundary point (x, f(x)), ...
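A minimal numeric sketch of the subgradient inequality the excerpt builds on, using f(x) = |x| (convex, non-differentiable at 0), where ∂f(0) = [-1, 1]:

```python
# Subgradient inequality check for f(x) = |x|: g is a subgradient of f
# at x iff f(y) >= f(x) + g*(y - x) for all y. At x = 0 this holds
# exactly for g in [-1, 1].

def f(x):
    return abs(x)

def is_subgradient(g, x, ys):
    """Check f(y) >= f(x) + g*(y - x) on a grid of test points."""
    return all(f(y) >= f(x) + g * (y - x) - 1e-12 for y in ys)

ys = [i / 10 for i in range(-50, 51)]
assert is_subgradient(0.3, 0.0, ys)      # 0.3 in [-1, 1]: valid at 0
assert is_subgradient(-1.0, 0.0, ys)     # boundary of [-1, 1]: valid
assert not is_subgradient(1.5, 0.0, ys)  # outside [-1, 1]: violated
```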
pdf  nibble  lecture-notes  acm  optimization  curvature  math.CA  estimate  linearity  differential  existence  proofs  exposition  atoms  math  marginal  convexity-curvature 
august 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus transform: given a metric D and a fixed base point a, define D'(x, y) = 2D(x, y) / (D(x, a) + D(y, a) + D(x, y)).
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
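A quick numeric check of the claim (with base metric |X Δ Y| and base point the empty set, the transform reduces to |X Δ Y| / |X ∪ Y|, the Jaccard distance): verify the triangle inequality by brute force over all subsets of a small universe.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard distance |A Δ B| / |A ∪ B| (0 for two empty sets)."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return len(a ^ b) / len(union)

# Brute-force triangle-inequality check over all 8 subsets of {0,1,2},
# consistent with the claim that the Steinhaus transform of a metric
# is again a metric.
universe = [frozenset(s) for r in range(4)
            for s in combinations(range(3), r)]
for x in universe:
    for y in universe:
        for z in universe:
            assert jaccard(x, z) <= jaccard(x, y) + jaccard(y, z) + 1e-12
```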
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
Prékopa–Leindler inequality | Academically Interesting
Consider the following statements:
1. The shape with the largest volume enclosed by a given surface area is the n-dimensional sphere.
2. A marginal or sum of log-concave distributions is log-concave.
3. Any Lipschitz function of a standard n-dimensional Gaussian distribution concentrates around its mean.
What do these all have in common? Despite being fairly non-trivial and deep results, they all can be proved in less than half of a page using the Prékopa–Leindler inequality.

i.e., a functional form of the Brunn–Minkowski inequality
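A numeric sanity check of the inequality for λ = 1/2 (a sketch, not a proof): take two Gaussian bumps f, g, build h as the midpoint sup-convolution h(z) = sup_x √(f(x)·g(2z − x)), which satisfies the Prékopa–Leindler hypothesis by construction, and compare Riemann sums.

```python
import math

# Prekopa-Leindler with lambda = 1/2: if h((x+y)/2) >= sqrt(f(x)*g(y))
# for all x, y, then integral(h) >= sqrt(integral(f) * integral(g)).
# Here h is the midpoint sup-convolution (approximated on a grid).

def f(x):
    return math.exp(-x * x)

def g(x):
    return math.exp(-2.0 * (x - 2.0) ** 2)

grid = [i * 0.05 - 10.0 for i in range(401)]  # [-10, 10], step 0.05
dx = 0.05

def h(z):
    return max(math.sqrt(f(x) * g(2 * z - x)) for x in grid)

int_f = sum(f(x) for x in grid) * dx
int_g = sum(g(x) for x in grid) * dx
int_h = sum(h(z) for z in grid) * dx

assert int_h >= math.sqrt(int_f * int_g)
```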
acmtariat  clever-rats  ratty  math  acm  geometry  measure  math.MG  estimate  distribution  concentration-of-measure  smoothness  regularity  org:bleg  nibble  brunn-minkowski  curvature  convexity-curvature 
february 2017 by nhaliday
Carathéodory's theorem (convex hull) - Wikipedia
- any point in the convex hull of a set in R^d is a convex combination of at most d+1 of its points
- e.g., in R^2 any point in the convex hull already lies in a triangle spanned by 3 of the points
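A sketch of the standard constructive proof: while more than d+1 points carry weight, find an affine dependence among them (guaranteed by a dimension count) and shift the weights along it until one hits zero. Assumes NumPy; the function name is mine.

```python
import numpy as np

def caratheodory(points, weights):
    """Reduce a convex combination in R^d to at most d+1 points.

    points: (m, d) array; weights: length-m, nonnegative, sum to 1.
    Returns (points', weights') representing the same point x.
    """
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    d = pts.shape[1]
    while len(w) > d + 1:
        # The m > d+1 columns [p_i; 1] are linearly dependent, so there
        # is c != 0 with sum_i c_i p_i = 0 and sum_i c_i = 0.
        A = np.vstack([pts.T, np.ones(len(w))])  # (d+1, m)
        c = np.linalg.svd(A)[2][-1]  # null vector of A
        if not np.any(c > 1e-12):
            c = -c
        pos = c > 1e-12
        # Move w along -t*c until some weight hits zero; the combination
        # and the total weight are unchanged since A @ c = 0.
        t = np.min(w[pos] / c[pos])
        w = w - t * c
        keep = w > 1e-12
        pts, w = pts[keep], w[keep]
        w = w / w.sum()  # renormalize against round-off

    return pts, w

rng = np.random.default_rng(0)
P = rng.standard_normal((10, 2))
w0 = rng.random(10)
w0 /= w0.sum()
x = w0 @ P
P2, w2 = caratheodory(P, w0)
assert len(w2) <= 3 and np.allclose(w2 @ P2, x, atol=1e-8)
```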
tcs  acm  math.MG  geometry  levers  wiki  reference  optimization  linear-programming  math  linear-algebra  nibble  spatial  curvature  convexity-curvature 
january 2017 by nhaliday
Convex Optimization Applications
there was a problem in ACM113 related to this (the portfolio optimization SDP stuff)
pdf  slides  exposition  finance  investing  optimization  methodology  examples  IEEE  acm  ORFE  nibble  curvature  talks  convexity-curvature 
december 2016 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
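Both halves of Tao's ball example can be checked directly: the fraction of a unit ball's volume within radius r is r^d, and the concentration of the Gaussian norm is literally a sample mean of d i.i.d. trials.

```python
import math
import random

# The fraction of a unit ball's volume within radius r is r^d, so in
# d = 200 dimensions essentially all of the mass lies beyond r = 0.95.
d = 200
assert 1 - 0.95 ** d > 0.9999

# The law-of-large-numbers reading: for a standard Gaussian g in R^d,
# |g|^2 / d is the sample mean of d i.i.d. trials of g_i^2 (mean 1),
# so |g| / sqrt(d) concentrates near 1 as d grows.
random.seed(0)
d, trials = 1000, 300
close = 0
for _ in range(trials):
    sq_norm = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d))
    close += abs(math.sqrt(sq_norm / d) - 1.0) < 0.1
assert close / trials > 0.99
```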

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?
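A minimal sketch of point 4: random sign vectors in R^n are nearly orthogonal in pairs, since the normalized inner product of two of them has mean 0 and standard deviation 1/√n (the parameters here are illustrative).

```python
import random

# 100 random sign vectors in R^1000: every pair has normalized inner
# product of order 1/sqrt(1000) ~ 0.03, i.e. nearly orthogonal, even
# though only 1000 exactly orthogonal directions exist.
random.seed(0)
n, m = 1000, 100
vecs = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
max_cos = max(
    abs(sum(ui * vi for ui, vi in zip(u, v))) / n
    for i, u in enumerate(vecs) for v in vecs[i + 1:]
)
assert max_cos < 0.2  # every pair within ~12 degrees of orthogonal
```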

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature  elegance  guessing 
december 2016 by nhaliday
real analysis - Proof of "every convex function is continuous" - Mathematics Stack Exchange
bounded above by a secant and below by a tangent, so the graph near x is confined to a pair of triangles with common vertex at (x, f(x))
tidbits  math  math.CA  q-n-a  visual-understanding  acm  overflow  proofs  smoothness  nibble  curvature  convexity-curvature 
november 2016 by nhaliday
Projections onto convex sets - Wikipedia, the free encyclopedia
straightforward method to find point in intersection of convex sets, w/ some convergence guarantees
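A minimal sketch of alternating projections (POCS) in R^2, projecting back and forth between the unit disk and the line y = 1/2; the iterates converge to a point of the intersection (here the chord endpoint (√3/2, 1/2)).

```python
import math

# Alternating projections between two convex sets in R^2:
# the unit disk and the line y = 0.5.

def proj_disk(p):
    x, y = p
    r = math.hypot(x, y)
    return (x / r, y / r) if r > 1 else p

def proj_line(p):
    return (p[0], 0.5)

p = (10.0, 0.5)
for _ in range(50):
    p = proj_line(proj_disk(p))

# The limit lies in both sets: on the line, and (just) inside the disk.
assert abs(p[1] - 0.5) < 1e-12
assert math.hypot(*p) <= 1 + 1e-9
assert abs(p[0] - math.sqrt(3) / 2) < 1e-9
```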
yoga  acm  optimization  algorithms  wiki  concept  nibble  intersection  curvature  convexity-curvature  intersection-connectedness 
june 2016 by nhaliday

