compressed-sensing   248


[1704.08326] Multidimensional Rational Covariance Extension with Approximate Covariance Matching
In our companion paper, "Multidimensional rational covariance extension with applications to spectral estimation and image compression," we discussed the multidimensional rational covariance extension problem (RCEP), which has important applications in image processing and in spectral estimation for radar, sonar, and medical imaging. This is an inverse problem in which a power spectrum with a rational absolutely continuous part is reconstructed from a finite set of moments. However, in most applications these moments are determined from observed data and are therefore only approximate, so RCEP may not have a solution. In this paper we extend the results to handle approximate covariance matching. We consider two problems, one with a soft constraint and the other with a hard constraint, and show that they are connected via a homeomorphism. We also demonstrate that the problems are well-posed and illustrate the theory with examples in spectral estimation and texture generation.
image-processing  inverse-problems  optimization  compressed-sensing  signal-processing  nudge-targets  consider:looking-to-see  representation 
september 2017 by Vaguery
[1705.08664] Towards Understanding the Invertibility of Convolutional Neural Networks
Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis, and then demonstrate that with such a simple theoretical framework we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and CNNs trained for classification in practical scenarios.
neural-networks  deep-learning  generative-models  rather-interesting  compressed-sensing  to-understand 
september 2017 by Vaguery
Accurate Genomic Prediction Of Human Height | bioRxiv
Stephen Hsu's compressed sensing application paper

We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, ~40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate ~0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction.

https://infoproc.blogspot.com/2017/09/accurate-genomic-prediction-of-human.html

http://infoproc.blogspot.com/2017/11/23andme.html
I'm in Mountain View to give a talk at 23andMe. Their latest funding round was $250M on a (reported) valuation of $1.5B. If I just add up the Crunchbase numbers it looks like almost half a billion invested at this point...

Slides: Genomic Prediction of Complex Traits

Here's how people + robots handle your spit sample to produce a SNP genotype:

https://drive.google.com/file/d/1e_zuIPJr1hgQupYAxkcbgEVxmrDHAYRj/view
study  bio  preprint  GWAS  state-of-art  embodied  genetics  genomics  compressed-sensing  high-dimension  machine-learning  missing-heritability  hsu  scitariat  education  🌞  frontier  britain  regression  data  visualization  correlation  phase-transition  multi  commentary  summary  pdf  slides  brands  skunkworks  hard-tech  presentation  talks  methodology  intricacy  bioinformatics  scaling-up  stat-power  sparsity  norms  nibble  speedometer  stats  linear-models  2017  biodet 
september 2017 by nhaliday
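Hsu's framing treats GWAS as a compressed-sensing problem: the phenotype is an approximately sparse linear function of standardized genotypes, recoverable by L1-penalized regression once the sample size crosses a phase-transition threshold. A minimal sketch on synthetic data (the sizes, noise level, and the ISTA solver below are illustrative choices, not the paper's pipeline):

```python
import numpy as np

def ista(A, y, lam, steps=300):
    """Iterative soft-thresholding for the Lasso: min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L             # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 200, 500, 5          # toy sizes, far below real GWAS scale
G = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)  # 0/1/2 minor-allele counts
G = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)             # standardize each SNP column
beta = np.zeros(n_snps)
beta[:n_causal] = rng.normal(size=n_causal)        # only a few SNPs are causal
phenotype = G @ beta + 0.5 * rng.normal(size=n_people)  # sparse genetic signal + noise

beta_hat = ista(G, phenotype, lam=2.0)             # sparse predictor of the trait
```

With enough samples relative to the number of causal loci, the L1 fit concentrates weight on the causal SNPs; the same mechanics underlie the height predictor, just at vastly larger scale.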
[1703.03208] Compressed Sensing using Generative Models
"The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G: ℝ^k → ℝ^n. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an ℓ2/ℓ2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy."
papers  compressed-sensing  gan 
june 2017 by arsyed
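The recovery problem studied here can be mimicked with a toy random-weight ReLU generator: search latent space by gradient descent so the generated signal matches the compressed measurements. Everything below (the sizes, the step size, the specific two-layer generator) is an illustrative sketch, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
k, h, n, m = 5, 32, 100, 30            # latent dim, hidden width, signal dim, measurements (m << n)

# Random-weight generator G(z) = W2 relu(W1 z); spectral normalization keeps steps stable
W1 = rng.normal(size=(h, k)); W1 /= np.linalg.norm(W1, 2)
W2 = rng.normal(size=(n, h)); W2 /= np.linalg.norm(W2, 2)
A = rng.normal(size=(m, n)); A /= np.linalg.norm(A, 2)   # Gaussian measurement matrix

G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)
z_true = rng.normal(size=k)
y = A @ G(z_true)                      # m noiseless linear measurements of G(z_true)

def loss_and_grad(z):
    pre = W1 @ z
    r = A @ (W2 @ np.maximum(pre, 0.0)) - y
    grad = W1.T @ ((pre > 0) * (W2.T @ (A.T @ r)))   # backprop through the ReLU mask
    return 0.5 * r @ r, grad

z = rng.normal(size=k)                 # random latent initialization
loss0, _ = loss_and_grad(z)
for _ in range(2000):
    loss, grad = loss_and_grad(z)
    z -= 0.5 * grad                    # plain gradient descent in latent space
```

No sparsity assumption appears anywhere; the "prior" is entirely the range of G, which is the paper's point.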
[1704.00708] No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis
In this paper we develop a new framework that captures the common landscape underlying non-convex low-rank matrix problems, including matrix sensing, matrix completion, and robust PCA. In particular, we show for all of the above problems (including asymmetric cases) that: 1) all local minima are also globally optimal; 2) no high-order saddle points exist. These results explain why simple algorithms such as stochastic gradient descent converge globally and efficiently optimize these non-convex objective functions in practice. Our framework connects and simplifies the existing analyses of optimization landscapes for matrix sensing and symmetric matrix completion. The framework naturally leads to new results for asymmetric matrix completion and robust PCA.
compressed-sensing  matrices  optimization  approximation  rather-interesting  machine-learning  nudge-targets  consider:representation 
may 2017 by Vaguery
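The benign-landscape claim is easy to poke at numerically in the simplest instance, symmetric low-rank factorization: plain gradient descent on the non-convex factored objective, started from a small random point, reaches the global optimum. A toy sketch (sizes, step size, and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 20, 2
U_true = rng.normal(size=(n, r)) / np.sqrt(n)
M = U_true @ U_true.T                  # rank-2 PSD ground truth

# Minimize f(U) = 0.5 * ||U U^T - M||_F^2 over the factor U (non-convex in U)
U = 0.01 * rng.normal(size=(n, r))     # small random initialization
for _ in range(5000):
    R = U @ U.T - M
    U -= 0.1 * 2 * (R @ U)             # gradient of f (R is symmetric here)

rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
```

Despite non-convexity, the run recovers M to small relative error, consistent with "all local minima are global" for this problem class.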
[1606.07104] Manifolds' Projective Approximation Using The Moving Least-Squares (MMLS)
In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been vast development in the field of linear and non-linear dimension-reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower-dimensional manifold, so the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data are often very noisy. In this work, we propose a method to approximate a d-dimensional C^{m+1} smooth submanifold residing in ℝ^n (d ≪ n) based upon scattered data points (i.e., a data cloud). We assume that the data points are located "near" the noisy lower-dimensional manifold and perform a non-linear moving least-squares projection onto an approximating manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., O(h^{m+1}), where h is the fill distance and m is the degree of the local polynomial approximation). Furthermore, the method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension n.
models  machine-learning  curse-of-dimensionality  compressed-sensing  feature-extraction  nudge-targets  consider:looking-to-see  consider:feature-discovery 
april 2017 by Vaguery
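The core moving-least-squares move, projecting a noisy sample onto a locally fitted affine subspace, can be sketched with weighted local PCA on a noisy circle (a 1-D manifold in ℝ²). The kernel weights, neighbor count, and test manifold are illustrative choices, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)
N, noise = 500, 0.08
t = rng.uniform(0, 2 * np.pi, N)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)    # samples of the unit circle
pts += noise * rng.normal(size=pts.shape)         # noisy data cloud near the manifold

def mls_project(p, pts, k=30):
    """Project p onto a tangent line fitted by weighted PCA of its k nearest neighbors."""
    d2 = np.sum((pts - p) ** 2, axis=1)
    idx = np.argsort(d2)[:k]                      # k nearest neighbors of p
    w = np.exp(-d2[idx] / d2[idx].max())          # smooth locality weights
    mu = np.average(pts[idx], axis=0, weights=w)  # weighted local mean
    C = (w[:, None] * (pts[idx] - mu)).T @ (pts[idx] - mu)
    _, vecs = np.linalg.eigh(C)
    tangent = vecs[:, -1]                         # top principal direction = local tangent
    return mu + np.dot(p - mu, tangent) * tangent # affine projection onto the local line

proj = np.array([mls_project(p, pts) for p in pts])
before = np.mean(np.abs(np.linalg.norm(pts, axis=1) - 1.0))   # mean distance to the circle
after = np.mean(np.abs(np.linalg.norm(proj, axis=1) - 1.0))
```

The projected cloud hugs the circle more tightly than the raw one: the local fit averages out the normal-direction noise while keeping tangential structure, which is the intuition behind the O(h^{m+1}) rates for higher-degree local fits.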
[1610.05834] Lensless Imaging with Compressive Ultrafast Sensing
Conventional imaging uses a set of lenses to form an image on the sensor plane. This pure hardware-based approach doesn't use any signal processing, nor the extra information in the time of arrival of photons at the sensor. Recently, modern compressive sensing techniques have been applied to lensless imaging. However, this computational approach tends to rely as much as possible on signal processing (for example, the single-pixel camera) and results in a long acquisition time. Here we propose using compressive ultrafast sensing for lensless imaging. We use extremely fast sensors (picosecond time resolution) to time-tag photons as they arrive at an omnidirectional pixel. Thus, each measurement produces a time series where time is a function of the photon source location in the scene. This allows lensless imaging with significantly fewer measurements than regular single-pixel imaging (33× fewer measurements in our experiments). To achieve this goal, we developed a framework for using ultrafast pixels with compressive sensing, including an algorithm for ideal sensor placement and an algorithm for optimized active illumination patterns. We show that efficient lensless imaging is possible with ultrafast imaging and compressive sensing. This paves the way for novel imaging architectures, and for remote sensing in extreme situations where imaging with a lens is not possible.
optics  indistinguishable-from-magic  inverse-problems  compressed-sensing  rather-interesting  to-understand 
march 2017 by Vaguery
[1702.04917] Compressed sensing in Hilbert spaces
In many linear inverse problems, we want to estimate an unknown vector belonging to a high-dimensional (or infinite-dimensional) space from few linear measurements. To overcome the ill-posed nature of such problems, we use a low-dimension assumption on the unknown vector: it belongs to a low-dimensional model set. The question of whether it is possible to recover such an unknown vector from few measurements then arises. If the answer is yes, it is also important to be able to describe a way to perform such a recovery. We describe a general framework where appropriately chosen random measurements guarantee that recovery is possible. We further describe a way to study the performance of recovery methods that consist in the minimization of a regularization function under a data-fit constraint.
approximation  compressed-sensing  inference  modeling  algorithms 
february 2017 by Vaguery
[1702.02891] Sparse Approximation by Semidefinite Programming
The problem of sparse approximation and the closely related problem of compressed sensing have received tremendous attention in the past decade. Primarily studied from the viewpoint of applied harmonic analysis and signal processing, there have been two dominant algorithmic approaches to this problem: greedy methods such as matching pursuit (MP) and linear-programming-based approaches such as basis pursuit (BP). The aim of the current paper is to bring a fresh perspective to sparse approximation by treating it as a combinatorial optimization problem and providing an algorithm based on the powerful optimization technique of semidefinite programming (SDP). In particular, we show that there is a randomized algorithm based on a semidefinite relaxation of the problem with performance guarantees depending on the coherence and the restricted isometry constant of the dictionary used. We then show a derandomization of the algorithm based on the method of conditional probabilities.
approximation  compressed-sensing  representation  mathematical-programming  numerical-methods  performance-measure  nudge-targets  consider:looking-to-see  consider:feature-discovery 
february 2017 by Vaguery
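For contrast with the SDP approach, the greedy baseline the abstract mentions fits in a few lines. This is a generic orthogonal matching pursuit sketch on a random dictionary, not the paper's algorithm:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms to explain y."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with the residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # re-fit on chosen atoms
        residual = y - D[:, support] @ coef
    return support, coef, residual

rng = np.random.default_rng(3)
m, n = 60, 120
D = rng.normal(size=(m, n))
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
x = np.zeros(n); x[[5, 17, 90]] = [1.0, -2.0, 1.5]  # 3-sparse ground truth
y = D @ x
support, coef, residual = omp(D, y, k=3)
```

The coherence and restricted-isometry quantities the paper analyzes are exactly what govern when this greedy loop recovers the true support.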
[1701.00694] Mixed one-bit compressive sensing with applications to overexposure correction for CT reconstruction
When a measurement falls outside the quantization or measurable range, it becomes saturated and cannot be used in classical reconstruction methods. For example, in C-arm angiography systems, which provide projection radiography, fluoroscopy, and digital subtraction angiography and are widely used for medical diagnoses and interventions, the limited dynamic range of C-arm flat detectors leads to overexposure in some projections during an acquisition, such as when imaging relatively thin body parts (e.g., the knee). Aiming at overexposure correction for computed tomography (CT) reconstruction, in this paper we propose a mixed one-bit compressive sensing (M1bit-CS) model to acquire information from both regular and saturated measurements. This method is inspired by recent progress on one-bit compressive sensing, which deals with only sign observations. Its successful applications imply that the information carried by saturated measurements is useful for improving recovery quality. For the proposed M1bit-CS model, an alternating direction method of multipliers (ADMM) algorithm is developed and an iterative saturation detection scheme is established. We then evaluate M1bit-CS on one-dimensional signal recovery tasks. In some experiments, the performance of the proposed algorithms on mixed measurements is almost the same as recovery on unsaturated ones with the same number of measurements. Finally, we apply the proposed method to overexposure correction for CT reconstruction on a phantom and a simulated clinical image. The results are promising, as the typical streaking and capping artifacts introduced by saturated projection data are effectively reduced, yielding significant error reduction compared with existing algorithms based on extrapolation.
tomography  inference  medical-technology  compressed-sensing  signal-processing  image-processing  rather-interesting  nudge-targets  consider:performance-measures  to-write-about 
february 2017 by Vaguery
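The "sign observations" idea is easy to demo: with Gaussian measurements, the direction of a signal survives even when every measurement is quantized to ±1. A toy sketch using the simple correlation estimator A^T y / n (illustrative one-bit recovery, not the paper's M1bit-CS model):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 2000, 50                          # many 1-bit measurements of a 50-dim signal
x = np.zeros(d); x[[4, 11, 30]] = [1.0, -1.0, 0.5]
x /= np.linalg.norm(x)                   # 1-bit measurements cannot see the scale of x

A = rng.normal(size=(n, d))              # Gaussian sensing matrix
y = np.sign(A @ x)                       # keep only the sign of each linear measurement

x_hat = A.T @ y / n                      # E[A^T sign(Ax)/n] is proportional to x
x_hat /= np.linalg.norm(x_hat)
cosine = float(np.dot(x, x_hat))         # direction-recovery quality
top3 = set(np.argsort(np.abs(x_hat))[-3:].tolist())  # largest-magnitude coordinates
```

Even this crude estimator aligns closely with x and identifies the support, which is why saturated (sign-only) projections still carry usable information for CT recovery.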


