**consider:feature-discovery** (261)

[1809.07481] $L_1$ Shortest Path Queries in Simple Polygons

25 days ago by Vaguery

Let P be a simple polygon of n vertices. We consider two-point L1 shortest path queries in P. We build a data structure of O(n) size in O(n) time such that given any two query points s and t, the length of an L1 shortest path from s to t in P can be computed in O(log n) time, or in O(1) time if both s and t are vertices of P, and an actual shortest path can be output in additional time linear in the number of edges of the path. To achieve the result, we propose a mountain decomposition of simple polygons, which may be interesting in its own right. Most importantly, our approach is much simpler than the previous work on this problem.
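The L1 metric here is just the Manhattan distance, so the length of any polygonal path is the sum of coordinate differences along its edges. A minimal sketch of what "L1 length" means (illustration only, not the authors' O(n) query structure or mountain decomposition):

```python
def l1_path_length(path):
    """L1 (Manhattan) length of a polygonal path given as (x, y) vertices."""
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

# When s and t see each other inside P, the L1 shortest-path length
# degenerates to |dx| + |dy|; any monotone staircase realizes it:
print(l1_path_length([(0, 0), (3, 0), (3, 4)]))  # 7
```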

computational-geometry
planning
algorithms
computational-complexity
representation
to-write-about
consider:feature-discovery
25 days ago by Vaguery

[1801.08056] The ascent-plateau statistics on Stirling permutations

4 weeks ago by Vaguery

In this paper, several variants of the ascent-plateau statistic are introduced, including flag ascent-plateau, double ascent and descent-plateau. We first study the flag ascent-plateau statistic on Stirling permutations by using context-free grammars. We then present a unified refinement of the ascent polynomials and the ascent-plateau polynomials. In particular, by using Foata and Strehl's group action, we prove that two bistatistics over the set of Stirling permutations of order n are equidistributed.
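For readers unfamiliar with the objects: a Stirling permutation of order n arranges the multiset {1,1,2,2,…,n,n} so that entries between the two copies of each i all exceed i, and an ascent-plateau is an index i with σ(i−1) < σ(i) = σ(i+1). A brute-force sketch (function names are mine; nothing like the paper's grammar-based machinery):

```python
from itertools import permutations

def stirling_permutations(n):
    """Arrangements of 1,1,2,2,...,n,n in which the entries between the
    two copies of each i all exceed i."""
    multiset = sorted(list(range(1, n + 1)) * 2)
    result = []
    for p in set(permutations(multiset)):
        ok = True
        for i in range(1, n + 1):
            a, b = [j for j, v in enumerate(p) if v == i]
            if any(p[j] < i for j in range(a + 1, b)):
                ok = False
                break
        if ok:
            result.append(p)
    return result

def ascent_plateaus(p):
    """Count indices i with p[i-1] < p[i] == p[i+1]."""
    return sum(1 for i in range(1, len(p) - 1) if p[i - 1] < p[i] == p[i + 1])

# There are (2n - 1)!! Stirling permutations of order n:
print(len(stirling_permutations(3)))  # 15
```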

combinatorics
permutations
enumeration
probability-theory
to-understand
nudge-targets
consider:feature-discovery
4 weeks ago by Vaguery

[1803.11463] Arctic curves for paths with arbitrary starting points: a tangent method approach

4 weeks ago by Vaguery

We use the tangent method to investigate the arctic curve in a model of non-intersecting lattice paths with arbitrary fixed starting points aligned along some boundary and whose distribution is characterized by some arbitrary piecewise differentiable function. We find that the arctic curve has a simple explicit parametric representation depending on this function, providing us with a simple transform that maps the arbitrary boundary condition to the arctic curve location. We discuss generic starting point distributions as well as particular freezing ones which create additional frozen domains adjacent to the boundary, hence new portions for the arctic curve. A number of examples are presented, corresponding to both generic and freezing distributions.

combinatorics
arctic-curves
tiling
statistical-mechanics
simulation
looking-to-see
feature-construction
emergence
to-write-about
consider:prediction
consider:feature-discovery
4 weeks ago by Vaguery

[1809.07390] A general framework for secondary constructions of bent and plateaued functions

5 weeks ago by Vaguery

In this work, we employ the concept of *composite representation* of Boolean functions, which represents an arbitrary Boolean function as a composition of one Boolean function and one vectorial function, for the purpose of specifying new secondary constructions of bent/plateaued functions. This representation gives a better understanding of the existing secondary constructions and it also allows us to provide a general construction framework of these objects. This framework essentially gives rise to an *infinite number* of possibilities to specify such secondary construction methods (with some induced sufficient conditions imposed on initial functions) and in particular we solve several open problems in this context. We provide several explicit methods for specifying new classes of bent/plateaued functions and demonstrate through examples that the imposed initial conditions can be easily satisfied. Our approach is especially efficient when defining new bent/plateaued functions on larger variable spaces than initial functions. For instance, it is shown that the indirect sum methods and Rothaus' construction are just special cases of this general framework and some explicit extensions of these methods are given. In particular, similarly to the basic indirect sum method of Carlet, we show that it is possible to derive (many) secondary constructions of bent functions without any additional condition on initial functions apart from the requirement that these are bent functions. In another direction, a few construction methods that generalize the secondary constructions which do not extend the variable space of the employed initial functions are also proposed.
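A quick reminder of the property the constructions target: a Boolean function on n (even) variables is bent exactly when every Walsh coefficient has absolute value 2^(n/2). A small self-contained checker (illustrative only; the paper's constructions are not reproduced here):

```python
def walsh_transform(f, n):
    """Walsh-Hadamard spectrum of a Boolean function given as a truth
    table f of length 2^n: W_f(a) = sum over x of (-1)^(f(x) XOR a.x)."""
    return [sum((-1) ** (f[x] ^ (bin(a & x).count("1") % 2))
                for x in range(2 ** n))
            for a in range(2 ** n)]

def is_bent(f, n):
    """f is bent iff every Walsh coefficient has absolute value 2^(n/2)."""
    target = 2 ** (n // 2)
    return all(abs(w) == target for w in walsh_transform(f, n))

# Truth table of f(x1, x2) = x1 AND x2, the classic bent function on 2 variables:
inner_product = [(x & 1) & (x >> 1) for x in range(4)]
print(is_bent(inner_product, 2))  # True
```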

representation
boolean-functions
decomposition
construction
inverse-problems
rather-interesting
Walsh-polynomials
to-write-about
consider:feature-discovery
5 weeks ago by Vaguery

[1810.04692] Probability distributions related to tilings of non-convex Polygons

5 weeks ago by Vaguery

This paper is based on the study of random lozenge tilings of non-convex polygonal regions with interacting non-convexities (cuts) and the corresponding asymptotic kernel as in [3] and [4] (discrete tacnode kernel). Here this kernel is used to find the probability distributions and joint probability distributions for the fluctuation of tiles along lines in between the cuts. These distributions are new.

combinatorics
tiling
counting
rather-interesting
phase-transitions
condensed-matter
statistical-mechanics
feature-extraction
representation
to-write-about
consider:feature-discovery
5 weeks ago by Vaguery

[1806.01378] Strong Pseudo Transitivity and Intersection Graphs

5 weeks ago by Vaguery

A directed graph G=(V,E) is *strongly pseudo transitive* if there is a partition {A, E−A} of E so that the graphs G1=(V,A) and G2=(V,E−A) are transitive, and additionally, ab∈A and bc∈E imply that ac∈E. A strongly pseudo transitive graph G=(V,E) is strongly pseudo transitive of the first type if ab∈A and bc∈E imply ac∈A. An undirected graph is co-strongly pseudo transitive (co-strongly pseudo transitive of the first type) if its complement has an orientation which is strongly pseudo transitive (strongly pseudo transitive of the first type). Our purpose is to show that the results in computational geometry [CFP, Lu] and intersection graph theory [Ga2, ES] can be unified and extended using the notion of strong pseudo transitivity. As a consequence, the general algorithmic framework in [Sh] is applicable to solve the maximum independent set problem in O(n^3) time in a variety of problems, thereby avoiding lengthy case-by-case arguments for each problem. We show that the intersection graphs of axis-parallel rectangles intersecting a diagonal line from bottom, and of half segments, are co-strongly pseudo transitive. In addition, we show that the class of interval filament graphs is co-strongly transitive of the first type, and hence the class of polygon circle graphs, which is contained in the class of interval filament graphs (but contains the classes of chordal graphs, circular arc, circle, and outer planar graphs), and the class of incomparability graphs are strongly transitive of the first type. For the class of chordal graphs we give two different proofs, using two different characterizations, verifying that they are co-strongly transitive of the first type. We present some containment results.
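The definition is easy to verify mechanically for small digraphs. A naive sketch of the check (function names are mine, with brute-force set lookups; nothing like the paper's algorithmic framework):

```python
def transitive(edges):
    """A set of directed edges is transitive if ab and bc imply ac."""
    return all((a, c) in edges
               for (a, b) in edges for (b2, c) in edges if b2 == b)

def strongly_pseudo_transitive(E, A):
    """Check the definition directly: A and E - A are both transitive,
    and ab in A together with bc in E forces ac in E."""
    closure_ok = all((a, c) in E
                     for (a, b) in A for (b2, c) in E if b2 == b)
    return transitive(A) and transitive(E - A) and closure_ok

# A transitive tournament is trivially strongly pseudo transitive with A = E:
E = {(1, 2), (2, 3), (1, 3)}
print(strongly_pseudo_transitive(E, E))  # True
```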

graph-theory
feature-extraction
feature-construction
algorithms
computational-complexity
nudge-targets
consider:feature-discovery
5 weeks ago by Vaguery

Evolution of metazoan morphological disparity | PNAS

october 2018 by Vaguery

We attempt to quantify animal “bodyplans” and their variation within Metazoa. Our results challenge the view that maximum variation was achieved early in animal evolutionary history by nonuniformitarian mechanisms. Rather, they are compatible with the view that the capacity for fundamental innovation is not limited to the early evolutionary history of clades. We perform quantitative tests of the principal hypotheses of the molecular mechanisms underpinning the establishment of animal bodyplans and corroborate the hypothesis that animal evolution has been permitted or driven by gene regulatory evolution.

developmental-biology
biology
rather-interesting
clustering
looking-to-see
to-write-about
consider:feature-discovery
october 2018 by Vaguery

[1512.04722] Visible lattice points in random walks

october 2018 by Vaguery

We consider the possible visits to visible points of a random walker moving up and right in the integer lattice (with probability α and 1−α, respectively) and starting from the origin.

We show that, almost surely, the asymptotic proportion of strings of k consecutive visible lattice points visited by such an α-random walk is a certain constant c_k(α), which is actually an (explicitly calculable) polynomial in α of degree 2⌊(k−1)/2⌋. For k=1, this gives that, almost surely, the asymptotic proportion of time the random walker is visible from the origin is c_1(α) = 6/π², independently of α.

random-walks
rather-interesting
combinatorics
probability-theory
simulation
nudge-targets
number-theory
representation
to-simulate
consider:feature-discovery
october 2018 by Vaguery

[1801.08003] Threadable Curves

october 2018 by Vaguery

We define a plane curve to be threadable if it can rigidly pass through a point-hole in a line L without otherwise touching L. Threadable curves are in a sense generalizations of monotone curves. We have two main results. The first is a linear-time algorithm for deciding whether a polygonal curve is threadable---O(n) for a curve of n vertices---and if threadable, finding a sequence of rigid motions to thread it through a hole. We also sketch an argument that shows that the threadability of algebraic curves can be decided in time polynomial in the degree of the curve. The second main result is an O(n polylog n)-time algorithm for deciding whether a 3D polygonal curve can thread through a hole in a plane in R^3, and if so, providing a description of the rigid motions that achieve the threading.
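Monotone curves, which threadable curves generalize, are simple to test for a given direction: every edge must advance weakly along that direction. A sketch of that (much weaker) check, not the paper's O(n) threadability decision:

```python
def monotone_in_direction(curve, d):
    """True if every edge of the polygonal curve advances weakly along
    direction d, i.e. has a nonnegative dot product with d."""
    dx, dy = d
    return all((x2 - x1) * dx + (y2 - y1) * dy >= 0
               for (x1, y1), (x2, y2) in zip(curve, curve[1:]))

staircase = [(0, 0), (1, 0), (1, 1), (2, 1)]
print(monotone_in_direction(staircase, (1, 0)))  # True
zigzag = [(0, 0), (1, 0), (0.5, 1)]
print(monotone_in_direction(zigzag, (1, 0)))  # False
```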

computational-geometry
geometry
rather-interesting
definition
nudge-targets
consider:feature-discovery
to-write-about
consider:algorithms
october 2018 by Vaguery

Same-different problems strain convolutional neural networks | the morning paper

september 2018 by Vaguery

Digging deeper, when learning did occur in SD, increasing item size never strained performance. But increasing the overall image size, or increasing the number of items did. (Gray bars in the above figures indicate the number of trials in which learning failed). The results suggest that straining is not simply a direct outcome of an increase in image variability. Using CNNs with more than twice the number of kernels (wide), or twice as many layers (deep) did not change the observed trend.

neural-networks
representation
problem-solving
rather-interesting
ontology
generalization
to-write-about
nudge-targets
consider:feature-discovery
september 2018 by Vaguery

Figuring out when you can do a puzzle. – Occupy Math

august 2018 by Vaguery

This week’s Occupy Math looks at a type of puzzle where you want to fill a rectangle with a shape. We will be using the L-shaped 3-square polyomino, used to fill a 5×9 rectangle below, as our example shape. The goal is to figure out every possible size of rectangle that can be filled with this shape. If you are constructing puzzles for other people — e.g., your students — knowing which problems can be solved gives you an edge. The post will not only solve the problem for our example shape, but also give you tools for doing this for other shapes. The answers, and the tools, are at the bottom if you don’t feel like working through the reasoning.
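The post's reasoning can be cross-checked by brute force for small rectangles: a backtracking search that tries all four orientations of the L-shaped 3-square polyomino at the first uncovered cell. A sketch (function names are mine, not Occupy Math's):

```python
def can_tile(rows, cols):
    """Backtracking: can a rows x cols rectangle be tiled by L-trominoes
    (the L-shaped 3-square polyomino, in all four orientations)?"""
    if (rows * cols) % 3:
        return False
    # The four orientations, as offsets from the piece's reading-order-first cell.
    shapes = [((0, 0), (0, 1), (1, 0)),
              ((0, 0), (0, 1), (1, 1)),
              ((0, 0), (1, 0), (1, 1)),
              ((0, 0), (1, -1), (1, 0))]
    grid = [[False] * cols for _ in range(rows)]

    def solve():
        # Find the first empty cell in reading order.
        for r in range(rows):
            for c in range(cols):
                if not grid[r][c]:
                    for shape in shapes:
                        cells = [(r + dr, c + dc) for dr, dc in shape]
                        if all(0 <= rr < rows and 0 <= cc < cols
                               and not grid[rr][cc] for rr, cc in cells):
                            for rr, cc in cells:
                                grid[rr][cc] = True
                            if solve():
                                return True
                            for rr, cc in cells:
                                grid[rr][cc] = False
                    return False
        return True

    return solve()

print(can_tile(5, 9))  # True, matching the post's worked example
```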

mathematical-recreations
polyominoes
proof
rather-interesting
nudge-targets
consider:classification
consider:feature-discovery
august 2018 by Vaguery

Kumaraswamy distribution: a beta-like probability density

june 2018 by Vaguery

Maybe the algorithm I suggested for picking parameters is not very good, but I suspect the optimal parameters are not much better. Rather than saying that the Kumaraswamy distribution approximates the beta distribution, I’d say that the Kumaraswamy distribution is capable of assuming roughly the same shapes as the beta distribution. If the only reason you’re using a beta distribution is to get a certain density shape, the Kumaraswamy distribution would be a reasonable alternative. But if you need to approximate a beta distribution closely, it may not work well enough.
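The comparison is easy to reproduce, since the Kumaraswamy density has the closed form a·b·x^(a−1)·(1−x^a)^(b−1). A sketch for tabulating the two shapes side by side (note the families coincide exactly when the second parameter is 1):

```python
import math

def kumaraswamy_pdf(x, a, b):
    """Kumaraswamy(a, b) density on (0, 1): a*b*x^(a-1)*(1-x^a)^(b-1)."""
    return a * b * x ** (a - 1) * (1 - x ** a) ** (b - 1)

def beta_pdf(x, p, q):
    """Beta(p, q) density via the gamma function."""
    norm = math.gamma(p) * math.gamma(q) / math.gamma(p + q)
    return x ** (p - 1) * (1 - x) ** (q - 1) / norm

# The families agree exactly when b = q = 1 (both reduce to a*x^(a-1)):
print(abs(kumaraswamy_pdf(0.3, 2, 1) - beta_pdf(0.3, 2, 1)) < 1e-12)  # True
```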

probability-theory
representation
rather-interesting
to-write-about
consider:feature-discovery
consider:heuristics
consider:approximation
june 2018 by Vaguery

Exactly how bad is the 13 times table? | The Aperiodical

april 2018 by Vaguery

Along the way, OEIS editor Charles R Greathouse IV added this intriguing conjecture:

Conjecture: a(n) ≤ N for all n. Perhaps N can be taken as 81.

number-theory
mathematical-recreations
open-questions
to-write-about
consider:feature-discovery
april 2018 by Vaguery

[1709.04109] Empower Sequence Labeling with Task-Aware Neural Language Model

march 2018 by Vaguery

Linguistic sequence labeling is a general modeling approach that encompasses a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features. However, in many cases, it is hard to obtain sufficient annotations to train these models. In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task. Besides word-level knowledge contained in pre-trained word embeddings, character-aware neural language models are incorporated to extract character-level knowledge. Transfer learning techniques are further adopted to mediate different components and guide the language model towards the key knowledge. Compared to previous methods, this task-specific knowledge allows us to adopt a more concise model and conduct more efficient training. Different from most transfer learning methods, the proposed framework does not rely on any additional supervision. It extracts knowledge from self-contained order information of training sequences. Extensive experiments on benchmark datasets demonstrate the effectiveness of leveraging character-level knowledge and the efficiency of co-training. For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching F1 score of 91.71±0.10 without using any extra annotation.

natural-language-processing
deep-learning
neural-networks
nudge-targets
consider:feature-discovery
consider:representation
to-write-about
march 2018 by Vaguery

Estimating barriers to gene flow from distorted isolation by distance patterns | bioRxiv

march 2018 by Vaguery

In continuous populations with local migration, nearby pairs of individuals have on average more similar genotypes than geographically well separated pairs. A barrier to gene flow distorts this classical pattern of isolation by distance. Genetic similarity is decreased for sample pairs on different sides of the barrier and increased for pairs on the same side near the barrier. Here, we introduce an inference scheme that utilizes this signal to detect and estimate the strength of a linear barrier to gene flow in two dimensions. We use a diffusion approximation to model the effects of a barrier on the geographical spread of ancestry backwards in time. This approach allows us to calculate the chance of recent coalescence and probability of identity by descent. We introduce an inference scheme that fits these theoretical results to the geographical covariance structure of biallelic genetic markers. It can estimate the strength of the barrier as well as several demographic parameters. We investigate the power of our inference scheme to detect barriers by applying it to a wide range of simulated data. We also showcase an example application to an Antirrhinum majus (snapdragon) flower color hybrid zone, where we do not detect any signal of a strong genome-wide barrier to gene flow.

population-biology
theoretical-biology
rather-interesting
simulation
to-write-about
consider:feature-discovery
march 2018 by Vaguery

[1712.08373] Notes on complexity of packing coloring

march 2018 by Vaguery

A packing k-coloring, for some integer k, of a graph G=(V,E) is a mapping φ: V → {1, …, k} such that any two vertices u, v of color φ(u) = φ(v) are at distance at least φ(u) + 1. This concept is motivated by frequency assignment problems. The *packing chromatic number* of G is the smallest k such that there exists a packing k-coloring of G.

Fiala and Golovach showed that determining the packing chromatic number for chordal graphs is NP-complete for diameter exactly 5. While the problem is easy to solve for diameter 2, we show NP-completeness for any diameter at least 3. Our reduction also shows that the packing chromatic number is hard to approximate within n^(1/2−ε) for any ε > 0.

In addition, we design an FPT algorithm for interval graphs of bounded diameter. This leads us to exploring the problem of finding a partial coloring that maximizes the number of colored vertices.
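The definition can be checked by exhaustive search on small instances, e.g. paths, where the graph distance is just |u − v|. A brute-force sketch (exponential in n, for sanity checks only; not the paper's reductions or its FPT algorithm):

```python
from itertools import product

def packing_chromatic_path(n):
    """Smallest k admitting a packing k-coloring of the path P_n:
    vertices sharing color i must be more than i apart (distance >= i + 1)."""
    for k in range(1, n + 1):
        for coloring in product(range(1, k + 1), repeat=n):
            if all(v - u > coloring[u]
                   for u in range(n) for v in range(u + 1, n)
                   if coloring[u] == coloring[v]):
                return k

print(packing_chromatic_path(4))  # 3
```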

graph-theory
algorithms
combinatorics
proof
approximation
nudge-targets
consider:looking-to-see
consider:feature-discovery
march 2018 by Vaguery

[1710.02271] Unsupervised Extraction of Representative Concepts from Scientific Literature

february 2018 by Vaguery

This paper studies the automated categorization and extraction of scientific concepts from titles of scientific articles, in order to gain a deeper understanding of their key contributions and facilitate the construction of a generic academic knowledgebase. Towards this goal, we propose an unsupervised, domain-independent, and scalable two-phase algorithm to type and extract key concept mentions into aspects of interest (e.g., Techniques, Applications, etc.). In the first phase of our algorithm we propose PhraseType, a probabilistic generative model which exploits textual features and limited POS tags to broadly segment text snippets into aspect-typed phrases. We extend this model to simultaneously learn aspect-specific features and identify academic domains in multi-domain corpora, since the two tasks mutually enhance each other. In the second phase, we propose an approach based on adaptor grammars to extract fine-grained concept mentions from the aspect-typed phrases without the need for any external resources or human effort, in a purely data-driven manner. We apply our technique to study literature from diverse scientific domains and show significant gains over state-of-the-art concept extraction techniques. We also present a qualitative analysis of the results obtained.

natural-language-processing
POS-tagging
algorithms
data-fusion
machine-learning
text-mining
nudge-targets
consider:feature-discovery
february 2018 by Vaguery

[1802.08435] Efficient Neural Audio Synthesis

february 2018 by Vaguery

Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24kHz 16-bit audio 4x faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.
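The subscaling idea starts from a simple reshaping: interleave a length-N sequence into B subsequences of length N/B. A sketch of just that fold (the actual Subscale WaveRNN additionally conditions each subsequence on past and future samples of the others, which is omitted here):

```python
def subscale_fold(seq, batch):
    """Fold a length-N sequence into `batch` interleaved subsequences:
    sample i of subsequence j is seq[i * batch + j]."""
    return [seq[j::batch] for j in range(batch)]

def subscale_unfold(subs):
    """Invert the fold by re-interleaving the subsequences."""
    return [s[i] for i in range(len(subs[0])) for s in subs]

folded = subscale_fold(list(range(16)), 4)
print(folded[0])                                   # [0, 4, 8, 12]
print(subscale_unfold(folded) == list(range(16)))  # True
```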

audio-synthesis
machine-learning
WaveNet
neural-networks
signal-processing
time-series
generative-models
to-write-about
nudge-targets
recurrent-networks
performance-measure
consider:feature-discovery
february 2018 by Vaguery

[1506.09039] Scalable Discrete Sampling as a Multi-Armed Bandit Problem

february 2018 by Vaguery

Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from the high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between the discrete sampling and Multi-Armed Bandits problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.
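For context, the exact baseline that becomes expensive at scale is inverse-CDF sampling: an O(n) pass to build cumulative weights, then O(log n) per draw. A sketch of that baseline (the function name and seed are mine; this is not the authors' bandit-based approximation):

```python
import bisect
import random
from itertools import accumulate

def make_discrete_sampler(weights, rng=None):
    """Exact inverse-CDF sampling from unnormalized weights:
    O(n) setup, O(log n) per draw via binary search on cumulative sums."""
    rng = rng or random.Random(0)
    cdf = list(accumulate(weights))
    total = cdf[-1]
    def draw():
        return bisect.bisect_left(cdf, rng.random() * total)
    return draw

draw = make_discrete_sampler([1.0, 0.0, 3.0])
# Index 1 has zero weight and is never drawn; index 2 appears ~75% of the time.
```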

sampling
inverse-problems
rather-interesting
probability-theory
simulation
engineering-design
nudge-targets
consider:feature-discovery
february 2018 by Vaguery

[1707.05994] Computing Tutte Paths

january 2018 by Vaguery

Tutte paths are one of the most successful tools for attacking Hamiltonicity problems in planar graphs. Unfortunately, results based on them are non-constructive, as their proofs inherently use an induction on overlapping subgraphs, and these overlaps prevent bounding the running time by a polynomial. For special cases, however, computational results on Tutte paths are known: For 4-connected planar graphs, Tutte paths are in fact Hamiltonian paths and Chiba and Nishizeki showed how to compute such paths in linear time. For 3-connected planar graphs, Tutte paths have a more complicated structure, and it has only recently been shown that they can be computed in polynomial time. However, Tutte paths are defined for general 2-connected planar graphs and this is what most applications need. Unfortunately, no computational results are known. We give the first efficient algorithm that computes a Tutte path (for the general case of 2-connected planar graphs). One of the strongest existence results about such Tutte paths is due to Sanders, which allows one to prescribe the end vertices and an intermediate edge of the desired path. Encompassing and strengthening all previous computational results on Tutte paths, we show how to compute this special Tutte path efficiently. Our method refines both the results of Thomassen and Sanders, and avoids overlapping subgraphs by using a novel iterative decomposition along 2-separators. Finally, we show that our algorithm runs in quadratic time.

graph-theory
algorithms
representation
rather-interesting
to-understand
nudge-targets
consider:representation
consider:feature-discovery
january 2018 by Vaguery
