approximation   623


[1712.08373] Notes on complexity of packing coloring
A packing k-coloring, for some integer k, of a graph G=(V,E) is a mapping φ:V→{1,…,k} such that any two distinct vertices u,v with the same color φ(u)=φ(v) are at distance at least φ(u)+1. This concept is motivated by frequency assignment problems. The packing chromatic number of G is the smallest k such that there exists a packing k-coloring of G.
Fiala and Golovach showed that determining the packing chromatic number of chordal graphs is NP-complete for diameter exactly 5. While the problem is easy to solve for diameter 2, we show NP-completeness for any diameter at least 3. Our reduction also shows that the packing chromatic number is hard to approximate within a factor of n^(1/2−ε) for any ε>0.
In addition, we design an FPT algorithm for interval graphs of bounded diameter. This leads us to explore the problem of finding a partial coloring that maximizes the number of colored vertices.
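The distance condition in the definition above is easy to check directly. A minimal sketch in Python, assuming a graph given as an adjacency list (the example path graph and coloring are illustrative, not from the paper):

```python
from collections import deque

def is_packing_coloring(adj, phi):
    """Check that any two distinct vertices with the same color c
    lie at graph distance at least c + 1, via BFS from each vertex."""
    for u in adj:
        dist = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        for v, d in dist.items():
            if v != u and phi[v] == phi[u] and d < phi[u] + 1:
                return False
    return True

# Path 0-1-2-3: colors 1,2,1,3 form a valid packing 3-coloring,
# since the two color-1 vertices are at distance 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
phi = {0: 1, 1: 2, 2: 1, 3: 3}
```

The BFS-per-vertex check runs in O(|V|·(|V|+|E|)) time, which is fine for illustration; deciding whether any packing k-coloring *exists* is the hard problem the paper addresses.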
graph-theory  algorithms  combinatorics  proof  approximation  nudge-targets  consider:looking-to-see  consider:feature-discovery 
9 weeks ago by Vaguery
[1801.00548] A Machine Learning Approach to Adaptive Covariance Localization
Data assimilation plays a key role in large-scale atmospheric weather forecasting, where the state of the physical system is estimated from model outputs and observations and is then used as the initial condition to produce accurate future forecasts. The Ensemble Kalman Filter (EnKF) provides a practical implementation of the statistical solution of the data assimilation problem and has gained wide popularity; this success can be attributed to its simple formulation and ease of implementation. EnKF is a Monte Carlo algorithm that solves the data assimilation problem by sampling the probability distributions involved in Bayes' theorem. Because of this, all flavors of EnKF are fundamentally prone to sampling errors when the ensemble size is small. In typical weather forecasting applications, the model state space has dimension 10^9−10^12, while the ensemble size typically ranges between 30 and 100 members. Sampling errors manifest themselves as long-range spurious correlations and have been shown to cause filter divergence. To alleviate this effect, covariance localization dampens spurious correlations between state variables located at a large distance in physical space, via an empirical distance-dependent function. The quality of the resulting analysis and forecast is greatly influenced by the choice of the localization function's parameters, e.g., the radius of influence, and the localization radius is generally tuned empirically to yield desirable results. This work proposes two adaptive algorithms for covariance localization in the EnKF framework, both based on a machine learning approach. The first algorithm adapts the localization radius in time, while the second tunes the localization radius in both time and space. Numerical experiments carried out with the Lorenz-96 model and a quasi-geostrophic model reveal the potential of the proposed machine learning approaches.
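Covariance localization as described above is a Schur (elementwise) product of the sample covariance with a distance-dependent taper. A minimal sketch, assuming a Gaussian-shaped taper and scalar 1-D coordinates (in practice a compactly supported function such as Gaspari-Cohn is standard; the matrices here are toy data):

```python
import math

def localize(cov, coords, radius):
    """Elementwise product of a sample covariance with the taper
    rho(d) = exp(-d^2 / (2 r^2)): entries linking state variables
    that are far apart in physical space are damped toward zero."""
    n = len(cov)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = abs(coords[i] - coords[j])
            rho = math.exp(-d * d / (2.0 * radius * radius))
            out[i][j] = cov[i][j] * rho
    return out

# A spurious long-range correlation at (0, 2) is damped,
# while the diagonal (zero distance) is untouched.
cov = [[1.0, 0.1, 0.8],
       [0.1, 1.0, 0.1],
       [0.8, 0.1, 1.0]]
loc = localize(cov, coords=[0.0, 1.0, 2.0], radius=1.0)
```

The radius parameter here is exactly the tuning knob the paper's machine-learning approach adapts in time (and space).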
modeling  machine-learning  prediction  rather-interesting  looking-to-see  approximation  algorithms  to-write-about 
11 weeks ago by Vaguery
[1709.08004] Slow-scale split-step tau-leap method for stiff stochastic chemical systems
Tau-leaping is a family of algorithms for the approximate simulation of discrete-state continuous-time Markov chains. The motivation for the development of such methods can be found, for instance, in the fields of chemical kinetics and systems biology. It is well known that the dynamical behavior of biochemical systems is often intrinsically stiff, representing a serious challenge for their numerical approximation. The naive extension of stiff deterministic solvers to stochastic integration usually yields numerical solutions with either impractically large relaxation times or incorrectly resolved covariance. In this paper, we propose a novel splitting heuristic that resolves these issues. The proposed numerical integrator takes advantage of the special structure of linear systems, for which equations for the mean and the covariance are explicitly available; we use these to calibrate the parameters of the scheme. The method is shown to reproduce the exact mean and variance of the linear scalar test equation and to have very good accuracy for arbitrarily stiff systems, at least in the linear case. Numerical examples for both linear and nonlinear systems are provided, and the results confirm the efficiency of the proposed splitting approach.
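For readers new to tau-leaping: the basic (non-split) scheme freezes each reaction's propensity over a window τ and draws the number of firings from a Poisson distribution. A minimal sketch for a birth-death process (the rates and the Knuth-style Poisson sampler are illustrative choices, not the paper's split-step method):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplication algorithm; adequate for small means."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_step(x, tau, birth_rate, death_rate, rng):
    """One tau-leap update for a birth-death process: each channel
    fires a Poisson(propensity * tau) number of times over the leap."""
    births = poisson(birth_rate * tau, rng)
    deaths = poisson(death_rate * x * tau, rng)
    return max(x + births - deaths, 0)
```

Stiffness enters when propensities differ by orders of magnitude, forcing τ impractically small; that is the regime the paper's split-step integrator targets.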
numerical-methods  diffy-Qs  approximation  systems-biology  algorithms  to-write-about  consider:representation  rather-interesting  nudge 
12 weeks ago by Vaguery
A few shades of interpolation
The topic of this snapshot is interpolation. In the ordinary sense, interpolation means to insert something of a different nature into something else. In mathematics, interpolation means constructing new data points from given data points; the new points usually lie in between the already-known points. The purpose of this snapshot is to introduce a particular type of interpolation, namely polynomial interpolation. This will be explained starting from basic ideas that go back to the ancient Babylonians and Greeks, and will arrive at subjects of current research activity.
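Polynomial interpolation in its most classical form: through any n points with distinct x-values there is a unique polynomial of degree less than n. A minimal sketch using the Lagrange basis (the sample points are illustrative):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the unique degree < n polynomial through the n points
    (xs[i], ys[i]) at x, via the Lagrange basis form:
    sum_i ys[i] * prod_{j != i} (x - xs[j]) / (xs[i] - xs[j])."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# Through (0,0), (1,1), (2,4) the interpolant is exactly x^2.
value = lagrange_interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```

The Lagrange form is numerically naive for many points (barycentric forms are preferred in practice), but it makes the existence-and-uniqueness argument concrete.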
interpolation  rather-interesting  history-of-science  algorithms  approximation  to-write-about 
february 2018 by Vaguery
[1709.10030] Sparse Hierarchical Regression with Polynomials
We present a novel method for exact hierarchical sparse polynomial regression. Our regressor is the degree-r polynomial which depends on at most k inputs, uses at most ℓ monomial terms, and minimizes the sum of the squares of its prediction errors. This hierarchical sparse specification aligns well with modern big-data settings, where many inputs are not relevant for prediction purposes and the functional complexity of the regressor needs to be controlled so as to avoid overfitting. We present a two-step approach to this hierarchical sparse regression problem. First, we discard irrelevant inputs using an extremely fast input-ranking heuristic. Second, we take advantage of modern cutting-plane methods for integer optimization to solve the resulting reduced hierarchical (k,ℓ)-sparse problem exactly. The ability of our method to identify all k relevant inputs and all ℓ monomial terms is shown empirically to experience a phase transition; crucially, the same transition also presents itself in our ability to reject all irrelevant features and monomials. In the regime where our method is statistically powerful, its computational complexity is, interestingly, on par with Lasso-based heuristics. The presented work fills a void: a lack of powerful, disciplined nonlinear sparse-regression methods in high-dimensional settings. Our method is shown empirically to scale to regression problems with n≈10,000 observations and input dimension p≈1,000.
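The abstract does not specify the input-ranking heuristic of step one; as a plausible stand-in for screening, one can rank inputs by absolute Pearson correlation with the response and keep the top k before the exact integer-optimization step. A sketch under that assumption (the data are toy values):

```python
import math

def rank_inputs(X, y, k):
    """Rank the columns of X (list of rows) by |Pearson correlation|
    with y and return the indices of the top k.  This is a stand-in
    screening heuristic, not the paper's actual ranking method."""
    n, p = len(X), len(X[0])
    ybar = sum(y) / n
    scores = []
    for j in range(p):
        col = [row[j] for row in X]
        cbar = sum(col) / n
        num = sum((c - cbar) * (v - ybar) for c, v in zip(col, y))
        den = math.sqrt(sum((c - cbar) ** 2 for c in col) *
                        sum((v - ybar) ** 2 for v in y))
        scores.append(abs(num / den) if den > 0 else 0.0)
    return sorted(range(p), key=lambda j: -scores[j])[:k]

# y depends only on column 0; column 1 is irrelevant noise.
X = [[1, 5], [2, 3], [3, 8], [4, 1]]
y = [2, 4, 6, 8]
```

Correlation screening only catches marginal effects, which is precisely why a second, exact optimization step over the surviving inputs is needed.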
statistics  regression  algorithms  approximation  performance-measure  to-understand  nudge-targets  consider:looking-to-see 
february 2018 by Vaguery
[1712.08558] Lattice-based Locality Sensitive Hashing is Optimal
Locality sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC `98) to give the first sublinear time algorithm for the c-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes "nearby" points to the same bucket and "far away" points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both query time and space usage.
In a seminal work, Andoni and Indyk (FOCS `06) constructed an LSH family based on random ball partitioning of space that achieves an LSH exponent of 1/c^2 for the l_2 norm, which was later shown to be optimal by Motwani, Naor and Panigrahy (SIDMA `07) and O'Donnell, Wu and Zhou (TOCT `14). Although optimal in the LSH exponent, the ball partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, thus leaving open the question of finding the exponent of lattice based LSH.
In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of 1/c^2 using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g. to yield low space and query complexity, remains an important open problem.
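The lattice construction itself is beyond a snippet, but the bucket idea is easy to show with the classic scalar-quantizer LSH family for the l2 norm of Datar et al., h(x) = floor((a·x + b)/w); lattice-based LSH essentially replaces this one-dimensional rounding with rounding to a high-dimensional lattice. A sketch (parameters are illustrative):

```python
import math
import random

def make_l2_hash(dim, w, rng):
    """One hash function from the classic l2 LSH family:
    h(x) = floor((a . x + b) / w), with a drawn from a standard
    Gaussian and b uniform in [0, w).  Nearby points tend to land
    in the same integer bucket; distant points tend not to."""
    a = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    b = rng.uniform(0.0, w)
    def h(x):
        return math.floor((sum(ai * xi for ai, xi in zip(a, x)) + b) / w)
    return h

rng = random.Random(0)
h = make_l2_hash(dim=3, w=4.0, rng=rng)
bucket = h([1.0, 2.0, 3.0])
```

In practice one concatenates several such functions per table and uses several tables; the LSH exponent discussed in the abstract governs how those counts trade off against query time and space.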
algorithms  numerical-methods  rather-interesting  approximation  computational-complexity  nudge-targets  consider:looking-to-see  consider:representation 
january 2018 by Vaguery
Salvage | Bleak is the new red.
"Salvage is a quarterly of revolutionary arts and letters. Salvage is edited and written by and for the desolated Left, by and for those committed to radical change, sick of capitalism and its sadisms, and sick too of the Left’s bad faith and bullshit. "
magazine  journal  leftism  salvage  approximation 
january 2018 by tsuomela
[1605.04679] Typical Performance of Approximation Algorithms for NP-hard Problems
The typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed within several theoretical frameworks. Three approximation algorithms are examined: linear-programming relaxation, loopy belief propagation, and the leaf-removal algorithm. The former two are analyzed using statistical-mechanical techniques, while the average-case analysis of the last is carried out by the generating-function method. These algorithms exhibit a threshold in typical performance as the average degree of the random graph increases, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical-performance thresholds. We provide conditions for classifying the graph ensembles and explicitly demonstrate examples of the differences in the thresholds.
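Of the three algorithms named above, leaf removal is the simplest to state: while some vertex has degree 1, put its unique neighbor in the cover and delete both; the algorithm succeeds exactly when nothing (no "core") is left over. A sketch, assuming a graph given as an adjacency list (the path-graph example is illustrative):

```python
def leaf_removal_cover(adj):
    """Leaf removal for minimum vertex cover: repeatedly take a
    degree-1 vertex, add its unique neighbor to the cover, and
    remove both.  Returns (cover, core); if the core is empty,
    the cover found is a minimum vertex cover."""
    adj = {u: set(vs) for u, vs in adj.items()}
    cover = set()
    while True:
        leaf = next((u for u, vs in adj.items() if len(vs) == 1), None)
        if leaf is None:
            break
        (nbr,) = adj[leaf]
        cover.add(nbr)
        for x in (leaf, nbr):
            for y in adj.pop(x, set()):
                if y in adj:
                    adj[y].discard(x)
    core = {u for u, vs in adj.items() if vs}
    return cover, core

# On the path 0-1-2-3 the core is empty and {1, 3} covers all edges.
cover, core = leaf_removal_cover({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
```

The threshold behavior in the abstract corresponds to the emergence of an extensive core in the random graph, at which point leaf removal alone no longer certifies optimality.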
approximation  computational-complexity  rather-interesting  to-write-about  to-do  nudge-targets  consider:approximation 
december 2017 by Vaguery
[1708.04215] A Constant-Factor Approximation Algorithm for the Asymmetric Traveling Salesman Problem
We give a constant-factor approximation algorithm for the asymmetric traveling salesman problem. Our approximation guarantee is analyzed with respect to the standard LP relaxation, and thus our result confirms the conjectured constant integrality gap of that relaxation.
Our techniques build upon the constant-factor approximation algorithm for the special case of node-weighted metrics. Specifically, we give a generic reduction to structured instances that resemble but are more general than those arising from node-weighted metrics. For those instances, we then solve Local-Connectivity ATSP, a problem known to be equivalent (in terms of constant-factor approximation) to the asymmetric traveling salesman problem.
algorithms  via:vielmetti  operations-research  approximation  optimization  computational-complexity  to-write-about  nudge-targets  consider:looking-to-see 
december 2017 by Vaguery
[1711.02724] Algorithms to Approximate Column-Sparse Packing Problems
Column-sparse packing problems arise in several contexts in both deterministic and stochastic discrete optimization. We present two unifying ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain improved approximation algorithms for some well-known families of such problems. As three main examples, we attain the integrality gap, up to lower-order terms, for known LP relaxations for k-column sparse packing integer programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set packing (Bansal et al., Algorithmica, 2012), and go "half the remaining distance" to optimal for a major integrality-gap conjecture of Furedi, Kahn and Seymour on hypergraph matching (Combinatorica, 1993).
integer-programming  matrices  optimization  operations-research  set-theory  hypergraphs  to-write-about  approximation  algorithms 
november 2017 by Vaguery
