**consider:performance-measures**

[1812.04948] A Style-Based Generator Architecture for Generative Adversarial Networks

5 days ago by Vaguery

We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.

generative-art
generative-models
neural-networks
multiscale
very-impressive
to-write-about
consider:performance-measures
5 days ago by Vaguery

[1811.09620] TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer

8 days ago by Vaguery

In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness. In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation. We introduce TimbreTron, a method for musical timbre transfer which applies "image" domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer. We show that the Constant Q Transform (CQT) representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance. Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples.
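The approximate pitch equivariance that makes the CQT attractive here is visible in any log-spaced frequency representation: transposing a tone becomes a translation along the bin axis. A minimal numpy sketch (not the paper's CQT; the bin spacing, probe tones, and naive projection are illustrative assumptions):

```python
import numpy as np

sr = 8000                      # sample rate (arbitrary choice)
t = np.arange(sr) / sr         # one second of samples

def logfreq_mag(x, fmin=110.0, bins_per_octave=12, n_bins=36):
    """Project x onto log-spaced complex exponentials (a crude CQT stand-in)."""
    freqs = fmin * 2.0 ** (np.arange(n_bins) / bins_per_octave)
    return np.array([abs(np.dot(x, np.exp(-2j * np.pi * f * t))) for f in freqs])

low  = np.sin(2 * np.pi * 220 * t)   # A3
high = np.sin(2 * np.pi * 440 * t)   # A4, one octave up

# An octave shift in pitch shows up as a 12-bin shift in the representation.
shift = np.argmax(logfreq_mag(high)) - np.argmax(logfreq_mag(low))
```

Here `shift` comes out as exactly one octave's worth of bins, which is the translation structure convolutional filters can exploit.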

style-transfer
neural-networks
feature-extraction
signal-processing
audio
to-write-about
consider:performance-measures
8 days ago by Vaguery

Journey of a Single Line – BLDGBLOG

june 2018 by Vaguery

“When he was eighty-five,” Sarah Cowan writes in a review of a show mounted by the Miguel Abreu Gallery in New York City, “Wacław Szpakowski wrote a treatise for a lifetime project that no one had known about. Titled ‘Rhythmical Lines,’ it describes a series of labyrinthine geometrical abstractions, each one produced from a single continuous line. He’d begun these drawings around 1900, when he was just seventeen—what started as sketches he then formalized, compiled, and made ever more intricate over the course of his life.”

abstraction
design
art
constraints
rather-interesting
to-write-about
consider:performance-measures
june 2018 by Vaguery

[1805.07980] Collisionless periodic orbits in the free-fall three-body problem

may 2018 by Vaguery

Although the free-fall three-body problem has been investigated for more than a century, only four collisionless periodic orbits have been found. In this paper, we report 234 collisionless periodic orbits of the free-fall three-body system with various mass ratios, including three known collisionless periodic orbits. Thus, 231 of the collisionless free-fall periodic orbits are entirely new. In theory, we can obtain periodic orbits of the free-fall three-body system for an arbitrary ratio of masses. Besides, it is found that, for a given ratio of the masses of two bodies, there exists a generalized Kepler's third law for the periodic three-body system. All of this enriches our knowledge and deepens our understanding of the famous three-body problem as a whole.
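For looking-to-see purposes, the free-fall (zero initial velocity) planar three-body system is a 12-dimensional ODE that scipy integrates directly. The masses, units, and initial positions below are arbitrary assumptions, and energy conservation is used only as an integration sanity check, not a periodicity test:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
m = np.array([1.0, 1.0, 1.0])        # equal masses (an arbitrary mass ratio)

def rhs(t, s):
    """State s = positions (3x2) then velocities (3x2), flattened."""
    pos, vel = s[:6].reshape(3, 2), s[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

def energy(s):
    """Total energy: kinetic plus pairwise gravitational potential."""
    pos, vel = s[:6].reshape(3, 2), s[6:].reshape(3, 2)
    ke = 0.5 * sum(m[i] * vel[i] @ vel[i] for i in range(3))
    pe = -sum(G * m[i] * m[j] / np.linalg.norm(pos[i] - pos[j])
              for i in range(3) for j in range(i + 1, 3))
    return ke + pe

# Free fall: the bodies start at rest at made-up positions.
s0 = np.array([-0.5, 0.0, 0.5, 0.0, 0.2, 0.6] + [0.0] * 6)
sol = solve_ivp(rhs, (0.0, 0.5), s0, rtol=1e-10, atol=1e-10)
drift = abs(energy(sol.y[:, -1]) - energy(s0))
```

With tight tolerances the energy drift stays negligible over this short window, which is the minimal check before hunting for periodicity.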

nonlinear-dynamics
stamp-collecting
rather-interesting
nudge-targets
consider:looking-to-see
consider:performance-measures
may 2018 by Vaguery

[1803.07694] Defective and Clustered Graph Colouring

may 2018 by Vaguery

Consider the following two ways to colour the vertices of a graph where the requirement that adjacent vertices get distinct colours is relaxed. A colouring has "defect" d if each monochromatic component has maximum degree at most d. A colouring has "clustering" c if each monochromatic component has at most c vertices. This paper surveys research on these types of colourings, where the first priority is to minimise the number of colours, with small defect or small clustering as a secondary goal. List colouring variants are also considered. The following graph classes are studied: outerplanar graphs, planar graphs, graphs embeddable in surfaces, graphs with given maximum degree, graphs with given maximum average degree, graphs excluding a given subgraph, graphs with linear crossing number, linklessly or knotlessly embeddable graphs, graphs with given Colin de Verdière parameter, graphs with given circumference, graphs excluding a fixed graph as an immersion, graphs with given thickness, graphs with given stack- or queue-number, graphs excluding K_t as a minor, graphs excluding K_{s,t} as a minor, and graphs excluding an arbitrary graph H as a minor. Several open problems are discussed.
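Both relaxed parameters are cheap to measure for a concrete colouring: find the monochromatic components, then take the largest within-component degree (defect) and the largest component size (clustering). A sketch over a dict-of-sets adjacency (the example graph is made up):

```python
from collections import deque

def colour_stats(adj, colour):
    """Return (defect, clustering) of a vertex colouring.
    defect = max degree inside any monochromatic component,
    clustering = max number of vertices in any monochromatic component."""
    seen, defect, clustering = set(), 0, 0
    for v in adj:
        if v in seen:
            continue
        seen.add(v)
        comp, q = [], deque([v])
        while q:                         # BFS along same-coloured edges only
            u = q.popleft()
            comp.append(u)
            for w in adj[u]:
                if colour[w] == colour[u] and w not in seen:
                    seen.add(w)
                    q.append(w)
        clustering = max(clustering, len(comp))
        for u in comp:
            defect = max(defect, sum(1 for w in adj[u] if colour[w] == colour[u]))
    return defect, clustering

# A 3-vertex path coloured 0-0-1: one monochromatic edge,
# monochromatic components of sizes 2 and 1.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
stats = colour_stats(adj, {0: 0, 1: 0, 2: 1})
```

A proper colouring has defect 0 and clustering 1, so both quantities measure exactly how far the relaxation is being stretched.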

combinatorics
graph-theory
algorithms
rather-interesting
computational-complexity
nudge-targets
consider:performance-measures
may 2018 by Vaguery

[1712.08566] Extended Product and Integrated Interleaved Codes

march 2018 by Vaguery

A new class of codes, Extended Product (EPC) Codes, consisting of a product code with a number of extra parities added, is presented and applications for erasure decoding are discussed. An upper bound on the minimum distance of EPC codes is given, as well as constructions meeting the bound for some relevant cases. A special case of EPC codes, Extended Integrated Interleaved (EII) codes, which naturally unify Integrated Interleaved (II) codes and product codes, is defined and studied in detail. It is shown that EII codes often improve the minimum distance of II codes with the same rate, and they enhance the decoding algorithm by allowing decoding on columns as well as on rows. It is also shown that EII codes allow for encoding II codes with a uniform distribution of the parity symbols.

information-theory
codes
integer-programming
rather-interesting
robustness
performance-measure
algorithms
representation
to-write-about
nudge-targets
consider:performance-measures
march 2018 by Vaguery

[1801.01922] Vectorization of Line Drawings via PolyVector Fields

march 2018 by Vaguery

Image tracing is a foundational component of the workflow in graphic design, engineering, and computer animation, linking hand-drawn concept images to collections of smooth curves needed for geometry processing and editing. Even for clean line drawings, modern algorithms often fail to faithfully vectorize junctions, or points at which curves meet; this produces vector drawings with incorrect connectivity. This subtle issue undermines the practical application of vectorization tools and accounts for hesitance among artists and engineers to use automatic vectorization software. To address this issue, we propose a novel image vectorization method based on state-of-the-art mathematical algorithms for frame field processing. Our algorithm is tailored specifically to disambiguate junctions without sacrificing quality.

graphics
algorithms
inference
rather-interesting
feature-construction
nudge-targets
consider:representation
consider:looking-to-see
consider:performance-measures
march 2018 by Vaguery

[1801.06769] Deep joint rain and haze removal from single images

march 2018 by Vaguery

Rain removal from a single image is a challenge which has been studied for a long time. In this paper, a novel convolutional neural network based on wavelets and the dark channel is proposed. On one hand, we observe that rain streaks correspond to high-frequency components of the image. Therefore, the Haar wavelet transform is a good choice to separate the rain streaks and the background to some extent. More specifically, the LL subband of a rain image is more inclined to express the background information, while the LH, HL, and HH subbands tend to represent the rain streaks and the edges. On the other hand, the accumulation of rain streaks over a long distance makes the rain image look like a haze veil. We extract the dark channel of the rain image as a feature map in the network. By increasing this mapping between the dark channels of the input and output images, we achieve haze removal in an indirect way. All of the parameters are optimized by back-propagation. Experiments on both synthetic and real-world datasets reveal that our method outperforms other state-of-the-art methods from both a qualitative and a quantitative perspective.
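Both building blocks named in the abstract are a few lines of numpy: a one-level Haar transform yields the LL/LH/HL/HH subbands, and the dark channel is a patchwise minimum over colour channels. A sketch (the network itself is not reproduced; the patch size is an assumption):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def haar_dwt2(img):
    """One-level 2D Haar transform of a single-channel image (even dims)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4           # low-pass: the background-leaning subband
    lh = (a + b - c - d) / 4           # detail subbands: streaks and edges
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def dark_channel(rgb, patch=15):
    """Per-pixel minimum over channels, then a patchwise minimum filter."""
    return minimum_filter(rgb.min(axis=2), size=patch)

ll, lh, hl, hh = haar_dwt2(np.full((8, 8), 3.0))   # a flat image: all detail is zero
rgb = np.random.rand(16, 16, 3)
dark = dark_channel(rgb, patch=3)
```

On a flat image every detail subband vanishes, which is exactly why rain streaks (high-frequency) separate from the smooth background in LH/HL/HH.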

image-processing
machine-learning
deep-learning
algorithms
rather-interesting
consider:performance-measures
march 2018 by Vaguery

[1606.05381] Deep Image Set Hashing

march 2018 by Vaguery

In applications involving matching of image sets, the information from multiple images must be effectively exploited to represent each set. State-of-the-art methods use a probabilistic distribution or subspace to model a set and use a specific distance measure to compare two sets. These methods are slow to compute and not compact to use in a large-scale scenario. Learning-based hashing is often used in large-scale image retrieval as it provides a compact representation of each sample, and the Hamming distance can be used to efficiently compare two samples. However, most hashing methods encode each image separately and discard the knowledge that multiple images in the same set represent the same object or person. We investigate the set hashing problem by combining both set representation and hashing in a single deep neural network. An image set is first passed to a CNN module to extract image features, then these features are aggregated using two types of set features to capture both set-specific and database-wide distribution information. The computed set feature is then fed into a multilayer perceptron to learn a compact binary embedding. Triplet loss is used to train the network by forming set similarity relations using class labels. We extensively evaluate our approach on datasets used for image matching and show highly competitive performance compared to state-of-the-art methods.
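The triplet objective used to train the embedding is standard; a minimal numpy version (the margin value is an illustrative assumption, not the paper's setting):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Mean hinge loss: the squared distance to the positive should be at
    least `margin` smaller than the squared distance to the negative."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

a = np.zeros((2, 4))          # toy anchor embeddings
far = np.ones((2, 4))         # toy embeddings of a different class
```

When the anchor and positive coincide and the negative is far, the hinge is inactive and the loss is zero; swapping the roles makes the loss large.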

algorithms
machine-learning
rather-interesting
feature-construction
nudge-targets
consider:looking-at-code
consider:performance-measures
march 2018 by Vaguery

[1709.08878] Generating Sentences by Editing Prototypes

january 2018 by Vaguery

We propose a new generative model of sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.

machine-learning
natural-language-processing
rather-interesting
representation
to-write-about
nudge-targets
consider:representation
consider:performance-measures
generative-models
january 2018 by Vaguery

[1705.00441] Learning Topic-Sensitive Word Representations

january 2018 by Vaguery

Distributed word representations are widely used for modeling words in NLP tasks. Most of the existing models generate one representation per word and do not consider different meanings of a word. We present two approaches to learn multiple topic-sensitive representations per word by using Hierarchical Dirichlet Process. We observe that by modeling topics and integrating topic distributions for each document we obtain representations that are able to distinguish between different meanings of a given word. Our models yield statistically significant improvements for the lexical substitution task indicating that commonly used single word representations, even when combined with contextual information, are insufficient for this task.

natural-language-processing
feature-construction
neural-networks
algorithms
nudge-targets
consider:looking-to-see
consider:performance-measures
january 2018 by Vaguery

[1709.05725] FlashProfile: Interactive Synthesis of Syntactic Profiles

january 2018 by Vaguery

We address the problem of learning comprehensive syntactic profiles for a set of strings. Real-world datasets, typically curated from multiple sources, often contain data in various formats. Thus any data processing task is preceded by the critical step of data format identification. However, manual inspection of data to identify various formats is infeasible in standard big-data scenarios.

We present a technique for generating comprehensive syntactic profiles in terms of user-defined patterns that also allows for interactive refinement. We define a syntactic profile as a set of succinct patterns that describe the entire dataset. Our approach efficiently learns such profiles, and allows refinement by exposing a desired number of patterns.

Our implementation, FlashProfile, shows a median profiling time of 0.7s over 142 tasks on 74 real datasets. We also show that access to the generated data profiles allows for more accurate synthesis of programs, using fewer examples, in programming-by-example workflows.
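The flavour of a syntactic profile can be approximated by collapsing each string to runs of character classes and counting the distinct resulting patterns. This toy sketch is not FlashProfile's algorithm, just an illustration of what a "set of succinct patterns that describe the entire dataset" looks like:

```python
import re
from collections import Counter

def pattern(s):
    """Collapse a string to a regex over digit runs, letter runs, and literals."""
    atoms = []
    for ch in s:
        a = "D" if ch.isdigit() else "A" if ch.isalpha() else ch
        if a in ("D", "A") and atoms and atoms[-1] == a:
            continue                     # merge consecutive same-class chars
        atoms.append(a)
    return "".join("\\d+" if a == "D" else "[A-Za-z]+" if a == "A"
                   else re.escape(a) for a in atoms)

data = ["2017-11-02", "2018-01-09", "Nov 2, 2017"]   # made-up mixed-format dates
profile = Counter(pattern(s) for s in data)
```

The two ISO-style dates collapse to one pattern and the prose-style date to another, so the profile for this tiny dataset has exactly two entries.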

pattern-discovery
rather-interesting
strings
data-mining
statistics
nudge-targets
consider:looking-to-see
consider:representation
consider:performance-measures
january 2018 by Vaguery

Self-Replicating Functions

december 2017 by Vaguery

There are many exciting directions to explore from here, but I’m going to take a break now, considering this as a nice initial step into the realm of self-replicating functions.

mathematical-recreations
self-similarity
nudge-targets
to-write-about
consider:performance-measures
december 2017 by Vaguery

[1703.07244] A Hybrid Feasibility Constraints-Guided Search to the Two-Dimensional Bin Packing Problem with Due Dates

november 2017 by Vaguery

The two-dimensional non-oriented bin packing problem with due dates packs a set of rectangular items, which may be rotated by 90 degrees, into identical rectangular bins. The bins have equal processing times. An item's lateness is the difference between its due date and the completion time of its bin. The problem packs all items without overlap so as to minimize the maximum lateness Lmax.

The paper proposes a tight lower bound that enhances an existing bound on Lmax for 24.07% of the benchmark instances and matches it in 30.87% of cases. In addition, it models the problem using mixed integer programming (MIP), and solves small-sized instances exactly using CPLEX. It approximately solves larger-sized instances using a two-stage heuristic. The first stage constructs an initial solution via a first-fit heuristic that applies an iterative constraint programming (CP)-based neighborhood search. The second stage, which is iterative too, approximately solves a series of assignment low-level MIPs that are guided by feasibility constraints. It then enhances the solution via a high-level random local search. The approximate approach improves existing upper bounds by 27.45% on average, and obtains the optimum for 33.93% of the instances. Overall, the exact and approximate approaches identify the optimum in 39.07% of cases.

The proposed approach is applicable to complex problems. It applies CP and MIP sequentially, while exploiting their respective advantages, and hybridizes heuristic search with MIP. It embeds a new lookahead strategy that guards against infeasible search directions and constrains the search to improving directions only; thus, it differs from traditional lookahead beam searches.
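The first-fit flavour of the construction stage can be caricatured with a much simpler shelf heuristic: order items by due date (EDD) and place each, allowing the 90-degree rotation, on the first shelf of the first bin where it fits. This toy omits the CP neighborhood search and both MIP stages entirely:

```python
def pack_edd_shelf(items, W, H):
    """items: (width, height, due_date) triples. Returns (bin index per item
    in EDD order, number of bins used). Shelf-based first fit with rotation."""
    bins = []                              # bins[b] = list of shelves [height, x_used]
    assignment = []
    for w, h, _ in sorted(items, key=lambda it: it[2]):   # earliest due date first
        if h > w:
            w, h = h, w                    # rotate so the item lies flat
        target = None
        for b, shelves in enumerate(bins):
            for sh in shelves:
                if h <= sh[0] and sh[1] + w <= W:         # fits on this shelf
                    sh[1] += w
                    target = b
                    break
            if target is not None:
                break
            if sum(sh[0] for sh in shelves) + h <= H:     # room for a new shelf
                shelves.append([h, w])
                target = b
                break
        if target is None:                                # open a new bin
            bins.append([[h, w]])
            target = len(bins) - 1
        assignment.append(target)
    return assignment, len(bins)

# Five unit squares into 2x2 bins: four fill the first bin, one spills over.
assignment, nbins = pack_edd_shelf([(1, 1, i) for i in range(5)], 2, 2)
```

Since bins are filled in due-date order, the bin index is a crude proxy for completion time, which is what Lmax is computed from in the real problem.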

operations-research
planning
algorithms
hard-problems
bin-packing
computational-complexity
to-write-about
rather-interesting
nudge-targets
consider:multiobjective-approach
consider:performance-measures
november 2017 by Vaguery

[1710.01410] Learning Registered Point Processes from Idiosyncratic Observations

november 2017 by Vaguery

A parametric point process model is developed, with modeling based on the assumption that sequential observations often share latent phenomena, while also possessing idiosyncratic effects. An alternating optimization method is proposed to learn a "registered" point process that accounts for shared structure, as well as "warping" functions that characterize idiosyncratic aspects of each observed sequence. Under reasonable constraints, in each iteration we update the sample-specific warping functions by solving a set of constrained nonlinear programming problems in parallel, and update the model by maximum likelihood estimation. The justifiability, complexity and robustness of the proposed method are investigated in detail. Experiments on both synthetic and real-world data demonstrate that the method yields explainable point process models, achieving encouraging results compared to state-of-the-art methods.

modeling-is-not-mathematics
rather-interesting
representation
machine-learning
algorithms
nudge
nudge-targets
consider:representation
consider:performance-measures
november 2017 by Vaguery

[1611.08035] Automating the Last-Mile for High Performance Dense Linear Algebra

november 2017 by Vaguery

High performance dense linear algebra (DLA) libraries often rely on a general matrix multiply (Gemm) kernel that is implemented using assembly or with vector intrinsics. In particular, the real-valued Gemm kernels provide the overwhelming fraction of performance for the complex-valued Gemm kernels, along with the entire level-3 BLAS and many of the real and complex LAPACK routines. Thus, achieving high performance for the Gemm kernel translates into a high performance linear algebra stack above this kernel. However, it is a monumental task for a domain expert to manually implement the kernel for every library-supported architecture. This leads to the belief that the craft of a Gemm kernel is more dark art than science. It is this premise that drives the popularity of autotuning with code generation in the domain of DLA.

This paper, instead, focuses on an analytical approach to code generation of the Gemm kernel for different architectures, in order to shed light on the details, or voodoo, required for implementing a high performance Gemm kernel. We distill the implementation of the kernel into an even smaller kernel, an outer product, and analytically determine how available SIMD instructions can be used to compute the outer product efficiently. We codify this approach into a system to automatically generate a high performance SIMD implementation of the Gemm kernel. Experimental results demonstrate that our approach yields generated kernels with performance that is competitive with kernels implemented manually or using empirical search.
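The decomposition at the heart of the analysis, the Gemm kernel as a sequence of rank-1 (outer-product) updates, is easy to state in numpy, even though the paper's contribution is the register-level SIMD version of exactly this loop:

```python
import numpy as np

def gemm_outer(A, B, C):
    """C += A @ B computed as k rank-1 updates; each iteration is one outer
    product, the unit the paper maps onto SIMD fused multiply-adds."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    for p in range(k):
        C += np.outer(A[:, p], B[p, :])
    return C

rng = np.random.default_rng(0)
A, B = rng.random((4, 3)), rng.random((3, 5))
C = gemm_outer(A, B, np.zeros((4, 5)))
```

Each outer product touches one column of A and one row of B, which is why the analysis can reason about register reuse per update independently of the matrix sizes.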

matrices
algorithms
rather-interesting
optimization
code-generation
nudge-targets
consider:looking-to-see
consider:performance-measures
november 2017 by Vaguery

[1406.5895] Universal Lyndon Words

november 2017 by Vaguery

A word w over an alphabet Σ is a Lyndon word if there exists an order defined on Σ for which w is lexicographically smaller than all of its conjugates (other than itself). We introduce and study universal Lyndon words, which are words over an n-letter alphabet that have length n! and such that all the conjugates are Lyndon words. We show that universal Lyndon words exist for every n and exhibit combinatorial and structural properties of these words. We then define particular prefix codes, which we call Hamiltonian lex-codes, and show that every Hamiltonian lex-code is in bijection with the set of the shortest unrepeated prefixes of the conjugates of a universal Lyndon word. This allows us to give an algorithm for constructing all the universal Lyndon words.
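Both definitions are directly checkable for small words; a brute-force sketch (feasible only for tiny alphabets, and it deliberately does not enforce the length-n! condition):

```python
from itertools import permutations

def is_lyndon(w, order):
    """True if w is strictly smaller than every proper conjugate,
    comparing letters by their position in the string `order`."""
    rank = {c: i for i, c in enumerate(order)}
    key = [rank[c] for c in w]
    return all(key < [rank[c] for c in w[i:] + w[:i]] for i in range(1, len(w)))

def is_universal_lyndon(w):
    """True if every conjugate of w is Lyndon under some letter order
    (the length-n! requirement is left to the caller)."""
    letters = sorted(set(w))
    return all(
        any(is_lyndon(w[i:] + w[:i], "".join(p)) for p in permutations(letters))
        for i in range(len(w))
    )
```

For n = 2 the word "ab" has length 2 = 2!, and each of its two conjugates is Lyndon under one of the two letter orders, making it a (trivial) universal Lyndon word.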

combinatorics
strings
to-write-about
nudge-targets
consider:rediscovery
consider:performance-measures
november 2017 by Vaguery

[1702.02017] Pushing the Bounds for Matrix-Matrix Multiplication

november 2017 by Vaguery

A tight lower bound for required I/O when computing a matrix-matrix multiplication on a processor with two layers of memory is established. Prior work obtained weaker lower bounds by reasoning about the number of phases needed to perform C := AB, where each phase is a series of operations involving S reads and writes to and from fast memory, and S is the size of fast memory. A lower bound on the number of phases was then determined by obtaining an upper bound on the number of scalar multiplications performed per phase. This paper follows the same high-level approach, but improves the lower bound by considering C := AB + C instead of C := AB, and obtains the maximum number of scalar fused multiply-adds (FMAs) per phase instead of scalar additions. Key to obtaining the new result is the decoupling of the per-phase I/O from the size of fast memory. The new lower bound is 2mnk/√S − 2S. The constant for the leading term is an improvement by a factor of 4√2. A theoretical algorithm that attains the lower bound is given, and it is discussed how the state-of-the-art Goto's algorithm also, in some sense, meets the lower bound.

computational-complexity
algorithms
matrices
rather-interesting
nudge-targets
consider:looking-to-see
consider:representation
consider:performance-measures
november 2017 by Vaguery

[1710.01992] Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks

november 2017 by Vaguery

Convolutional neural networks have recently demonstrated high-quality reconstruction for single image super-resolution. However, existing methods often require a large number of network parameters and entail heavy computational loads at runtime for generating high-accuracy super-resolution results. In this paper, we propose the deep Laplacian Pyramid Super-Resolution Network for fast and accurate image super-resolution. The proposed network progressively reconstructs the sub-band residuals of high-resolution images at multiple pyramid levels. In contrast to existing methods that involve the bicubic interpolation for pre-processing (which results in large feature maps), the proposed method directly extracts features from the low-resolution input space and thereby entails low computational loads. We train the proposed network with deep supervision using the robust Charbonnier loss functions and achieve high-quality image reconstruction. Furthermore, we utilize the recursive layers to share parameters across as well as within pyramid levels, and thus drastically reduce the number of parameters. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of run-time and image quality.
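The Laplacian-pyramid decomposition the network is built around is independent of any learning; a numpy sketch of the analysis/synthesis pair (Gaussian blur plus nearest-neighbour resampling here are simplifications, not the paper's learned transposed-convolution upsampling):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def down(img):
    return gaussian_filter(img, 1.0)[::2, ::2]        # blur, then subsample

def up(img, shape):
    big = img.repeat(2, axis=0).repeat(2, axis=1)     # nearest-neighbour upsample
    return big[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    pyr = []
    for _ in range(levels):
        small = down(img)
        pyr.append(img - up(small, img.shape))        # band-pass residual
        img = small
    pyr.append(img)                                   # coarsest low-pass image
    return pyr

def reconstruct(pyr):
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = up(img, lap.shape) + lap                # add residuals back in
    return img

x = np.random.default_rng(0).random((64, 64))
x_hat = reconstruct(laplacian_pyramid(x))
```

Reconstruction is exact by construction, since each level stores the residual of its own upsampling; the network's job is to predict those residuals when upscaling a genuinely low-resolution input.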

superresolution
neural-networks
representation
algorithms
image-processing
generative-models
to-write-about
nudge-targets
consider:representation
consider:performance-measures
november 2017 by Vaguery

[1401.6200] Three Series for the Generalized Golden Mean

october 2017 by Vaguery

As is well-known, the ratio of adjacent Fibonacci numbers tends to phi = (1 + sqrt(5))/2, and the ratio of adjacent Tribonacci numbers (where each term is the sum of the three preceding numbers) tends to the real root eta of X^3 - X^2 - X - 1 = 0. Letting alpha(n) denote the corresponding ratio for the generalized Fibonacci numbers, where each term is the sum of the n preceding, we obtain rapidly converging series for alpha(n), 1/alpha(n), and 1/(2-alpha(n)).
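The limiting ratios are easy to look at numerically: iterate the n-step recurrence and take the ratio of adjacent terms (the iteration count is an arbitrary choice; the series in the paper converge far faster than this):

```python
def nbonacci_ratio(n, iters=60):
    """Ratio of adjacent generalized Fibonacci terms, where each term
    is the sum of the n preceding terms."""
    seq = [0] * (n - 1) + [1]
    for _ in range(iters):
        seq.append(sum(seq[-n:]))
    return seq[-1] / seq[-2]
```

n = 2 recovers phi, n = 3 the Tribonacci constant eta, and as n grows the ratio alpha(n) approaches 2 from below.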

number-theory
approximation
rather-interesting
series
convergence
nudge-targets
consider:evolving-some
consider:performance-measures
consider:convergence
consider:complexity
october 2017 by Vaguery
