nhaliday + norms   33

Accurate Genomic Prediction Of Human Height | bioRxiv
Stephen Hsu's compressed sensing application paper

We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, ~40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate ~0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction.
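The two figures are consistent: captured variance is the squared correlation,

```latex
r^2 \approx 0.65^2 \approx 0.42 \approx 40\% .
```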

https://infoproc.blogspot.com/2017/09/accurate-genomic-prediction-of-human.html

http://infoproc.blogspot.com/2017/11/23andme.html
I'm in Mountain View to give a talk at 23andMe. Their latest funding round was $250M on a (reported) valuation of $1.5B. If I just add up the Crunchbase numbers it looks like almost half a billion invested at this point...

Slides: Genomic Prediction of Complex Traits

Here's how people + robots handle your spit sample to produce a SNP genotype:

https://drive.google.com/file/d/1e_zuIPJr1hgQupYAxkcbgEVxmrDHAYRj/view
study  bio  preprint  GWAS  state-of-art  embodied  genetics  genomics  compressed-sensing  high-dimension  machine-learning  missing-heritability  hsu  scitariat  education  🌞  frontier  britain  regression  data  visualization  correlation  phase-transition  multi  commentary  summary  pdf  slides  brands  skunkworks  hard-tech  presentation  talks  methodology  intricacy  bioinformatics  scaling-up  stat-power  sparsity  norms  nibble  speedometer  stats  linear-models  2017  biodet 
september 2017 by nhaliday
Riemannian manifold - Wikipedia
In differential geometry, a (smooth) Riemannian manifold or (smooth) Riemannian space (M, g) is a real smooth manifold M equipped with an inner product g_p on the tangent space T_pM at each point p that varies smoothly from point to point, in the sense that if X and Y are vector fields on M, then p ↦ g_p(X(p), Y(p)) is a smooth function. The family g_p of inner products is called a Riemannian metric (tensor). These terms are named after the German mathematician Bernhard Riemann. The study of Riemannian manifolds constitutes the subject called Riemannian geometry.

A Riemannian metric (tensor) makes it possible to define various geometric notions on a Riemannian manifold, such as angles, lengths of curves, areas (or volumes), curvature, gradients of functions and divergence of vector fields.
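For instance (standard formulas, added for illustration), the metric turns pointwise inner products into global quantities: the length of a smooth curve γ : [a, b] → M and the angle between tangent vectors X, Y ∈ T_pM are

```latex
L(\gamma) = \int_a^b \sqrt{ g_{\gamma(t)}\big( \gamma'(t),\, \gamma'(t) \big) } \, dt ,
\qquad
\cos\theta = \frac{g_p(X, Y)}{\sqrt{g_p(X, X)}\,\sqrt{g_p(Y, Y)}} .
```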
concept  definition  math  differential  geometry  manifolds  inner-product  norms  measure  nibble 
february 2017 by nhaliday
Sobolev space - Wikipedia
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself and its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
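Concretely (the standard definition, added for reference): for an integer k ≥ 0 and 1 ≤ p < ∞, the W^{k,p}(Ω) norm sums the L^p-norms of all weak partial derivatives up to order k,

```latex
\|f\|_{W^{k,p}(\Omega)}
= \Bigg( \sum_{|\alpha| \le k} \big\| D^{\alpha} f \big\|_{L^p(\Omega)}^p \Bigg)^{1/p} ,
```

with the usual essential-supremum modification for p = ∞; the case p = 2, written H^k, is the Hilbert-space (inner-product) setting.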
math  concept  math.CA  math.FA  differential  inner-product  wiki  reference  regularity  smoothness  norms  nibble  zooming 
february 2017 by nhaliday
Unlearning descriptive statistics | Hacker News
For readers who are OK with some math, I recommend John Myles White's eye-opening post about means, medians, and modes: http://www.johnmyleswhite.com/notebook/2013/03/22/modes-medians-and-means-an-unifying-perspective/. He describes these summary descriptive stats in terms of what penalty function they minimize: mean minimizes L2, median minimizes L1, mode minimizes L0.
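A minimal numpy check of that observation (my own sketch, not from the post): minimize each penalty over candidate summaries s and compare against the built-in mean, median, and mode.

```python
import numpy as np

x = np.array([1, 2, 2, 2, 3, 7, 10], dtype=float)

def minimizer(penalty, candidates):
    """Candidate s minimizing sum_i penalty(x_i - s)."""
    costs = [penalty(x - s).sum() for s in candidates]
    return candidates[int(np.argmin(costs))]

grid = np.linspace(x.min(), x.max(), 9001)      # step 0.001

print(minimizer(np.square, grid), np.mean(x))   # L2 -> mean   (~3.857)
print(minimizer(np.abs, grid), np.median(x))    # L1 -> median (2.0)
print(minimizer(lambda r: r != 0, x))           # L0 -> mode   (2.0)
```

For L0 the candidates are the data values themselves, since the penalty only counts exact matches.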
hn  commentary  techtariat  acmtariat  data-science  explanation  multi  norms  org:bleg  nibble  scitariat  expectancy 
february 2017 by nhaliday
Cauchy-Schwarz inequality and Hölder's inequality - Mathematics Stack Exchange
- Cauchy-Schwarz (the special case of Hölder's inequality where p = q = 2) implies the general Hölder inequality
- pith: define the potential F(t) = ∫ f^{pt} g^{q(1-t)}, show log F is midpoint-convex (via Cauchy-Schwarz) hence convex, then apply convexity between F(0) and F(1) to bound F(1/p) = ||fg||_1 (worked out below)
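Spelled out (my reconstruction of the sketch, for f, g ≥ 0 with 1/p + 1/q = 1):

```latex
F(t) = \int f^{pt} g^{q(1-t)} :
\qquad
F(0) = \|g\|_q^q , \quad F(1) = \|f\|_p^p , \quad F(1/p) = \int f g = \|fg\|_1 .

% Cauchy-Schwarz gives midpoint log-convexity:
F\!\Big(\tfrac{s+t}{2}\Big)
= \int \Big( f^{ps/2} g^{q(1-s)/2} \Big) \Big( f^{pt/2} g^{q(1-t)/2} \Big)
\le \sqrt{ F(s)\, F(t) } ,

% so log F is convex on [0,1]; evaluating at t = 1/p = (1/p)\cdot 1 + (1/q)\cdot 0:
\|fg\|_1 = F(1/p) \le F(1)^{1/p} \, F(0)^{1/q} = \|f\|_p \, \|g\|_q .
```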
q-n-a  overflow  math  estimate  proofs  ground-up  math.FA  inner-product  tidbits  norms  duality  nibble  integral 
january 2017 by nhaliday
Dvoretzky's theorem - Wikipedia
In mathematics, Dvoretzky's theorem is an important structural theorem about normed vector spaces proved by Aryeh Dvoretzky in the early 1960s, answering a question of Alexander Grothendieck. In essence, it says that every sufficiently high-dimensional normed vector space will have low-dimensional subspaces that are approximately Euclidean. Equivalently, every high-dimensional bounded symmetric convex set has low-dimensional sections that are approximately ellipsoids.
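One standard quantitative formulation (stated from memory, so treat the constants as indicative; the MathOverflow threads below give refinements): for every ε > 0 there is c(ε) > 0 such that every n-dimensional normed space X contains a subspace E with

```latex
\dim E = k \ge c(\varepsilon) \log n
\qquad \text{and} \qquad
d_{BM}\big( E, \ell_2^k \big) \le 1 + \varepsilon ,
```

where d_BM is the Banach-Mazur distance. Milman's proof obtains this from concentration of measure on the sphere.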

http://mathoverflow.net/questions/143527/intuitive-explanation-of-dvoretzkys-theorem
http://mathoverflow.net/questions/46278/unexpected-applications-of-dvoretzkys-theorem
math  math.FA  inner-product  levers  characterization  geometry  math.MG  concentration-of-measure  multi  q-n-a  overflow  intuition  examples  proofs  dimensionality  gowers  mathtariat  tcstariat  quantum  quantum-info  norms  nibble  high-dimension  wiki  reference  curvature  convexity-curvature  tcs 
january 2017 by nhaliday
CS 731 Advanced Artificial Intelligence - Spring 2011
- statistical machine learning
- sparsity in regression
- graphical models
- exponential families
- variational methods
- MCMC
- dimensionality reduction, e.g., PCA
- Bayesian nonparametrics
- compressive sensing, matrix completion, and Johnson-Lindenstrauss (see the sketch after this list)
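A tiny numpy sketch of the Johnson-Lindenstrauss item (my own illustration, not course material): a scaled Gaussian random projection to k dimensions approximately preserves all pairwise distances when k is of order log(n)/ε².

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, d, k = 100, 10_000, 1_000   # n points in R^d, projected down to R^k

X = rng.normal(size=(n, d))

# Gaussian projection, scaled so squared lengths are preserved in expectation
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(n), 2)]
print(f"{min(ratios):.3f}  {max(ratios):.3f}")  # both close to 1 (within ~10% here)
```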
course  lecture-notes  yoga  acm  stats  machine-learning  graphical-models  graphs  model-class  bayesian  learning-theory  sparsity  embeddings  markov  monte-carlo  norms  unit  nonparametric  compressed-sensing  matrix-factorization  features 
january 2017 by nhaliday
cv.complex variables - Absolute value inequality for complex numbers - MathOverflow
In general, once you've proven an inequality like this in R it holds automatically in any Euclidean space (including C) by averaging over projections. ("Inequality like this" = inequality where every term is the length of some linear combination of variable vectors in the space; here the vectors are a, b, c).

I learned this trick at MOP 30+ years ago, and don't know or remember who discovered it.
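To spell the trick out (my reconstruction, not part of the answer): if u is uniform on the unit sphere, then E_u |⟨v, u⟩| = c_n |v| with the same constant c_n for every vector v, by rotational symmetry. So project the inequality onto u, apply the one-dimensional case pointwise, and average:

```latex
% e.g. the triangle inequality in R^n (or C) from the real case:
\big| \langle a + b,\, u \rangle \big|
\le \big| \langle a,\, u \rangle \big| + \big| \langle b,\, u \rangle \big|
\quad \text{for each fixed } u ,

% and taking E_u of both sides turns each |<v,u>| into c_n |v|:
c_n\, |a + b| \le c_n\, |a| + c_n\, |b| .
```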
q-n-a  overflow  math  math.CV  estimate  tidbits  yoga  oly  mathtariat  math.FA  metabuch  inner-product  calculation  norms  nibble  tricki 
january 2017 by nhaliday
Information Processing: Search results for compressed sensing
https://www.unz.com/jthompson/the-hsu-boundary/
http://infoproc.blogspot.com/2017/09/phase-transitions-and-genomic.html
Added: Here are comments from "Donoho-Student":
Donoho-Student says:
September 14, 2017 at 8:27 pm GMT • 100 Words

The Donoho-Tanner transition describes the noise-free (h2=1) case, which has a direct analog in the geometry of polytopes.

The n = 30s result from Hsu et al. (i.e., a required sample size of about 30 times the sparsity s; specifically the value of the coefficient, 30, when p is the appropriate number of SNPs on an array and h2 = 0.5) is obtained via simulation using actual genome matrices, and is original to them. (There is no simple formula that gives this number.) The D-T transition had only been established in the past for certain classes of matrices, like random matrices with specific distributions. Those results cannot be immediately applied to genomes.

The estimate that s is (order of magnitude) 10k is also a key input.

I think Hsu refers to n = 1 million instead of 30 * 10k = 300k because the effective SNP heritability of IQ might be less than h2 = 0.5 — there is noise in the phenotype measurement, etc.

Donoho-Student says:
September 15, 2017 at 11:27 am GMT • 200 Words

Lasso is a common statistical method but most people who use it are not familiar with the mathematical theorems from compressed sensing. These results give performance guarantees and describe phase transition behavior, but because they are rigorous theorems they only apply to specific classes of sensor matrices, such as simple random matrices. Genomes have correlation structure, so the theorems do not directly apply to the real world case of interest, as is often true.

What the Hsu paper shows is that the exact D-T phase transition appears in the noiseless (h2 = 1) problem using genome matrices, and a smoothed version appears in the problem with realistic h2. These are new results, as is the prediction for how much data is required to cross the boundary. I don't think most GWAS people are familiar with these results. If they understood the results, they would fund/design adequately powered studies capable of solving lots of complex phenotypes, medical conditions as well as IQ, that have significant h2.

Most people who use lasso, as opposed to people who prove theorems, are not even aware of the D-T transition. Even most people who prove theorems have followed the Candès-Tao line of attack (restricted isometry property) and don't think much about D-T. Although Donoho eventually proved some things about the phase transition using high-dimensional geometry, it was initially discovered via simulation using simple random matrices.
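A toy version of the kind of simulation described (my sketch: a plain Gaussian sensing matrix rather than real genome matrices, sklearn's Lasso, and the noiseless h2 = 1 setting): fix the number of predictors p and the sparsity s, sweep the sample size n, and measure how often the true support is recovered. Recovery probability jumps from near 0 to near 1 over a fairly narrow band of n, which is the phase-transition behavior being discussed.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, s, trials = 1000, 10, 20   # predictors, true nonzeros (sparsity), repetitions

def recovery_rate(n):
    """Fraction of noiseless trials where the top-s lasso coefficients
    land exactly on the true support."""
    hits = 0
    for _ in range(trials):
        support = rng.choice(p, size=s, replace=False)
        beta = np.zeros(p)
        beta[support] = rng.choice([-1.0, 1.0], size=s)
        X = rng.normal(size=(n, p))
        y = X @ beta                  # h2 = 1: no phenotype noise
        coef = Lasso(alpha=0.05, max_iter=50_000).fit(X, y).coef_
        top = np.argsort(np.abs(coef))[-s:]
        hits += set(top) == set(support)
    return hits / trials

for n in (25, 50, 75, 100, 150, 200):
    print(n, recovery_rate(n))        # expect a fairly sharp 0 -> 1 jump in n
```

The exact location of the jump depends on alpha and the matrix ensemble; the point of the result quoted above is that the same qualitative transition shows up with real genome matrices.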
hsu  list  stream  genomics  genetics  concept  stats  methodology  scaling-up  scitariat  sparsity  regression  biodet  bioinformatics  norms  nibble  compressed-sensing  applications  search  ideas  multi  albion  behavioral-gen  iq  state-of-art  commentary  explanation  phase-transition  measurement  volo-avolo  regularization  levers  novelty  the-trenches  liner-notes  clarity  random-matrices  innovation  high-dimension  linear-models 
november 2016 by nhaliday
Xavier Amatriain's answer to What is the difference between L1 and L2 regularization? - Quora
So, as opposed to what Andrew Ng explains in his "Feature selection, L1 vs. L2 regularization, and rotational invariance" (Page on stanford.edu), I would say that as a rule-of-thumb, you should always go for L2 in practice.
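A quick illustration of the practical trade-off being debated (my sketch, not from the answer): on the same data, L1 (lasso) zeroes out irrelevant coefficients while L2 (ridge) only shrinks them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
beta = np.zeros(50)
beta[:5] = 2.0                           # only the first 5 features matter
y = X @ beta + 0.5 * rng.normal(size=200)

l1 = Lasso(alpha=0.1).fit(X, y)
l2 = Ridge(alpha=1.0).fit(X, y)

print((l1.coef_ == 0).sum())   # lasso: most of the 45 irrelevant coefs are exactly 0
print((l2.coef_ == 0).sum())   # ridge: 0 -- every coefficient shrunk but nonzero
```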
best-practices  q-n-a  machine-learning  acm  optimization  tidbits  advice  qra  regularization  model-class  regression  sparsity  features  comparison  model-selection  norms  nibble 
november 2016 by nhaliday
