nhaliday + inner-product   22

Riemannian manifold - Wikipedia
In differential geometry, a (smooth) Riemannian manifold or (smooth) Riemannian space (M, g) is a real smooth manifold M equipped with an inner product g_p on the tangent space T_pM at each point p that varies smoothly from point to point, in the sense that if X and Y are vector fields on M, then p ↦ g_p(X(p), Y(p)) is a smooth function. The family g_p of inner products is called a Riemannian metric (tensor). These terms are named after the German mathematician Bernhard Riemann. The study of Riemannian manifolds constitutes the subject called Riemannian geometry.

A Riemannian metric (tensor) makes it possible to define various geometric notions on a Riemannian manifold, such as angles, lengths of curves, areas (or volumes), curvature, gradients of functions and divergence of vector fields.
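As a minimal numeric sketch of "lengths of curves" (my own toy example, not from the article): take the plane in polar coordinates (r, theta), where the metric is g_p = diag(1, r^2), an inner product that varies with the point p, and integrate the metric speed of a curve.

```python
import numpy as np

# Toy Riemannian metric on the (r, theta) half-plane: g_p = diag(1, r^2).
def metric(r, theta):
    return np.diag([1.0, r**2])

# Length of a curve c(t) = (r(t), theta(t)): integrate sqrt(g_c(t)(c'(t), c'(t))).
def curve_length(r, theta, dr, dtheta, ts):
    speeds = np.array([
        np.sqrt(np.array([dr(t), dtheta(t)]) @ metric(r(t), theta(t))
                @ np.array([dr(t), dtheta(t)]))
        for t in ts
    ])
    # trapezoid rule over the parameter grid
    return float(np.sum((speeds[:-1] + speeds[1:]) / 2 * np.diff(ts)))

# Unit circle: r = 1, theta = t for t in [0, 2*pi]; its length should be 2*pi.
ts = np.linspace(0, 2 * np.pi, 1000)
L = curve_length(lambda t: 1.0, lambda t: t, lambda t: 0.0, lambda t: 1.0, ts)
print(L)  # ~6.2832
```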
concept  definition  math  differential  geometry  manifolds  inner-product  norms  measure  nibble 
february 2017 by nhaliday
How do these "neural network style transfer" tools work? - Julia Evans
When we put an image into the network, it starts out as a vector of numbers (the red/green/blue values for each pixel). At each layer of the network we get another intermediate vector of numbers. There’s no inherent meaning to any of these vectors.

But! If we want to, we could pick one of those vectors arbitrarily and declare “You know, I think that vector represents the content” of the image.

The basic idea is that the further down you get in the network (and the closer you get to the network classifying objects as a “cat” or “house” or whatever), the more the vector represents the image’s “content”.

In this paper, they designate the “conv4_2” layer as the “content” layer. This seems to be pretty arbitrary – it’s just a layer that’s pretty far down the network.

Defining “style” is a bit more complicated. If I understand correctly, the definition of “style” is actually the major innovation of this paper – they don’t just pick a layer and say “this is the style layer”. Instead, they take all the “feature maps” at a layer (there are actually a whole bunch of vectors at each layer, one per “feature”), and define the “Gram matrix” of all the pairwise inner products between those vectors. This Gram matrix is the style.
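The Gram-matrix construction above can be sketched in a few lines (a hedged toy version with random numbers standing in for a real conv layer's output, not the paper's code):

```python
import numpy as np

# A conv layer's output has shape (channels, height, width); each channel's
# flattened feature map is one vector.
rng = np.random.default_rng(0)
features = rng.standard_normal((8, 4, 4))   # 8 toy feature maps

F = features.reshape(8, -1)                 # one row vector per feature map
gram = F @ F.T                              # all pairwise inner products
print(gram.shape)                           # (8, 8) -- this matrix is the "style"
```

By construction the result is symmetric, since ⟨v_i, v_j⟩ = ⟨v_j, v_i⟩.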
techtariat  bangbang  deep-learning  model-class  explanation  art  visuo  machine-learning  acm  SIGGRAPH  init  inner-product  nibble 
february 2017 by nhaliday
Sobolev space - Wikipedia
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself and its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
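Concretely, the norm combining the Lp-norms of a function and its derivatives up to order k is (standard definition, stated here for reference):

```latex
\| u \|_{W^{k,p}(\Omega)} \;=\; \Big( \sum_{|\alpha| \le k} \| D^{\alpha} u \|_{L^{p}(\Omega)}^{p} \Big)^{1/p},
```

where the sum runs over multi-indices α and the derivatives D^α u are taken in the weak sense.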
math  concept  math.CA  math.FA  differential  inner-product  wiki  reference  regularity  smoothness  norms  nibble  zooming 
february 2017 by nhaliday
Cauchy-Schwarz inequality and Hölder's inequality - Mathematics Stack Exchange
- Cauchy-Schwarz (the special case of Hölder's inequality where p = q = 2) implies the general Hölder inequality
- pith: define the potential F(t) = ∫ f^{pt} g^{q(1−t)}, show log F is midpoint-convex hence convex, then apply convexity between F(0) and F(1) to bound F(1/p) = ||fg||_1
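A quick numeric sanity check of the inequality being proved, ||fg||_1 ≤ ||f||_p ||g||_q with 1/p + 1/q = 1 (toy vectors standing in for functions; my own example, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.abs(rng.standard_normal(100))
g = np.abs(rng.standard_normal(100))
p, q = 3.0, 1.5          # conjugate exponents: 1/3 + 1/1.5 = 1

lhs = np.sum(f * g)                                        # ||fg||_1
rhs = np.sum(f**p) ** (1 / p) * np.sum(g**q) ** (1 / q)    # ||f||_p * ||g||_q
print(lhs <= rhs)  # True
```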
q-n-a  overflow  math  estimate  proofs  ground-up  math.FA  inner-product  tidbits  norms  duality  nibble  integral 
january 2017 by nhaliday
Dvoretzky's theorem - Wikipedia
In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s, answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids.

math  math.FA  inner-product  levers  characterization  geometry  math.MG  concentration-of-measure  multi  q-n-a  overflow  intuition  examples  proofs  dimensionality  gowers  mathtariat  tcstariat  quantum  quantum-info  norms  nibble  high-dimension  wiki  reference  curvature  convexity-curvature  tcs 
january 2017 by nhaliday
cv.complex variables - Absolute value inequality for complex numbers - MathOverflow
In general, once you've proven an inequality like this in R it holds automatically in any Euclidean space (including C) by averaging over projections. ("Inequality like this" = inequality where every term is the length of some linear combination of variable vectors in the space; here the vectors are a, b, c).

I learned this trick at MOP 30+ years ago, and don't know or remember who discovered it.
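The averaging trick rests on the identity that for a complex number z, the average over directions θ of |Re(e^{−iθ} z)| equals (2/π)|z|; so an inequality that is linear in lengths and holds for every 1-D projection averages up to the complex version. A quick numeric check of that identity (my own illustration, not from the answer):

```python
import numpy as np

z = 3 + 4j                       # |z| = 5
thetas = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
# average length of the projection of z onto the direction e^{i*theta}
avg = np.mean(np.abs(np.real(np.exp(-1j * thetas) * z)))
print(avg, (2 / np.pi) * abs(z))  # both ~3.1831
```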
q-n-a  overflow  math  math.CV  estimate  tidbits  yoga  oly  mathtariat  math.FA  metabuch  inner-product  calculation  norms  nibble  tricki 
january 2017 by nhaliday
Quarter-Turns | The n-Category Café
In other words, call an operator T a quarter-turn if ⟨Tx, x⟩ = 0 for all x. Then the real quarter-turns correspond exactly to the skew-symmetric matrices, but apart from the zero operator there are no complex quarter-turns at all.
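A minimal sketch of the real case (my own check, not from the post): for a skew-symmetric A, x^T A x = (x^T A x)^T = x^T A^T x = −x^T A x, so ⟨Ax, x⟩ = 0 for every x.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M - M.T                      # skew-symmetric: A^T = -A

x = rng.standard_normal(4)
print(x @ A @ x)                 # ~0 up to floating-point error
```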
tidbits  math  linear-algebra  hmm  mathtariat  characterization  atoms  inner-product  arrows  org:bleg  nibble 
december 2016 by nhaliday
infinitely divisible matrices
the use of Hilbert spaces for establishing that matrices are Gram matrices (and hence PSD) is interesting
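A hedged sketch of the device mentioned above: if G[i, j] = ⟨v_i, v_j⟩ for some vectors v_i, then G = V V^T and x^T G x = ||V^T x||^2 ≥ 0, so G is automatically positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(3)
V = rng.standard_normal((5, 7))      # rows v_1, ..., v_5 in R^7
G = V @ V.T                          # Gram matrix of pairwise inner products

eigs = np.linalg.eigvalsh(G)
print(np.all(eigs >= -1e-10))        # True: all eigenvalues are (numerically) >= 0
```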
tidbits  math  algebra  pdf  yoga  linear-algebra  inner-product  positivity  signum 
august 2016 by nhaliday
