dynamical-systems   281

[1802.03905] How to Match when All Vertices Arrive Online
We introduce a fully online model of maximum cardinality matching in which all vertices arrive online. On the arrival of a vertex, its incident edges to previously-arrived vertices are revealed. Each vertex has a deadline that is after all its neighbors' arrivals. If a vertex remains unmatched until its deadline, the algorithm must then irrevocably either match it to an unmatched neighbor, or leave it unmatched. The model generalizes the existing one-sided online model and is motivated by applications including ride-sharing platforms, real-estate agency, etc.
We show that the Ranking algorithm by Karp et al. (STOC 1990) is 0.5211-competitive in our fully online model for general graphs. Our analysis brings a novel charging mechanic into the randomized primal dual technique by Devanur et al. (SODA 2013), allowing a vertex other than the two endpoints of a matched edge to share the gain. To our knowledge, this is the first analysis of Ranking that beats 0.5 on general graphs in an online matching problem, a first step towards solving the open problem by Karp et al. (STOC 1990) about the optimality of Ranking on general graphs. If the graph is bipartite, we show that the competitive ratio of Ranking is between 0.5541 and 0.5671. Finally, we prove that the fully online model is strictly harder than the previous model, as no online algorithm can be 0.6317-competitive (note that 0.6317 < 1 − 1/e) in our model, even for bipartite graphs.
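
A minimal sketch of how Ranking plays out in this fully online setting, for when I get around to simulating it; the event encoding and tie-breaking details are my own guesses at a reasonable reading, not the paper's formal statement.

```python
import random

def fully_online_ranking(events, neighbors):
    """Sketch of Ranking in the fully online matching model.

    events    : sequence of ('arrive', v) and ('deadline', v) pairs, in chronological order.
    neighbors : dict mapping each vertex to the set of its neighbors
                (an edge is only usable once both endpoints have arrived).
    Returns a dict giving the matched partner of each matched vertex.
    """
    rank = {}       # random rank drawn when a vertex arrives
    arrived = set()
    match = {}      # v -> matched partner

    for kind, v in events:
        if kind == 'arrive':
            arrived.add(v)
            rank[v] = random.random()
        else:  # deadline of v: match irrevocably now, or leave unmatched forever
            if v in match:
                continue
            candidates = [u for u in neighbors[v]
                          if u in arrived and u not in match]
            if candidates:
                u = min(candidates, key=lambda w: rank[w])  # best (lowest) rank wins
                match[v] = u
                match[u] = v
    return match
```

At each deadline the vertex simply grabs its best-ranked available neighbor; all of the subtlety is in the competitive analysis, not the algorithm.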
algorithms  dynamical-systems  graph-theory  rather-interesting  computational-complexity  to-write-about  to-simulate 
10 days ago by Vaguery
The meaning of model equivalence: Network models, latent variables, and the theoretical space in between | Psych Networks
Recently, an important set of equivalent representations of the Ising model was published by Joost Kruis and Gunter Maris in Scientific Reports. The paper constructs elegant representations of the Ising model probability distribution in terms of a network model (which consists of direct relations between observables), a latent variable model (which consists of relations between a latent variable and observables, in which the latent variable acts as a common cause), and a common effect model (which also consists of relations between a latent variable and observables, but here the latent variable acts as a common effect). The latter equivalence is a novel contribution to the literature and a quite surprising finding, because it means that a formative model can be statistically equivalent to a reflective model, which one may not immediately expect (do note that this equivalence need not maintain dimensionality, so a model with a single common effect may translate into a higher-dimensional latent variable model).

However, the equivalence between the ordinary (reflective) latent variable models and network models has been with us for a long time, and I therefore was rather surprised at some people's reaction to the paper and the blog post that accompanies it. Namely, it appears that some think (a) that the fact that network structures can mimic reflective latent variables and vice versa is a recent discovery, and (b) that this somehow spells trouble for the network approach itself (because, well, what's the difference?). The first of these claims is sufficiently wrong to go through the trouble of refuting it, if only to set straight the historical record; the second is sufficiently interesting to investigate a little more deeply. Hence the following notes.
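
For my own notes: the common-cause direction of the equivalence can be obtained with the standard Gaussian-integral (Hubbard-Stratonovich) trick, which I believe is the device behind constructions like this one; the notation below is mine and the paper's own derivation should be checked. Write the Ising network model as

$$P(\mathbf{x}) \;\propto\; \exp\Big(\sum_i \tau_i x_i + \tfrac{1}{2}\,\mathbf{x}^{\top}\boldsymbol{\Sigma}\,\mathbf{x}\Big), \qquad x_i \in \{-1,+1\},$$

where the diagonal of $\boldsymbol{\Sigma}$ may be shifted to make it positive definite (since $x_i^2 = 1$, this only changes the normalizing constant). The Gaussian identity

$$\exp\Big(\tfrac{1}{2}\,\mathbf{x}^{\top}\boldsymbol{\Sigma}\,\mathbf{x}\Big) \;\propto\; \int \exp\Big(\mathbf{x}^{\top}\boldsymbol{\eta} - \tfrac{1}{2}\,\boldsymbol{\eta}^{\top}\boldsymbol{\Sigma}^{-1}\boldsymbol{\eta}\Big)\, d\boldsymbol{\eta}$$

then gives $P(\mathbf{x}) \propto \int \prod_i \exp\big((\tau_i + \eta_i)\,x_i\big)\, \exp\big(-\tfrac{1}{2}\,\boldsymbol{\eta}^{\top}\boldsymbol{\Sigma}^{-1}\boldsymbol{\eta}\big)\, d\boldsymbol{\eta}$: conditional on the latent $\boldsymbol{\eta}$, the observables are independent, which is exactly a reflective (common-cause) latent variable model (generally multidimensional).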
dynamical-systems  models  models-and-modes  representation  philosophy-of-science  (in-practice)  to-write-about  via:several 
11 weeks ago by Vaguery
[1803.09574] Long short-term memory and Learning-to-learn in networks of spiking neurons
Networks of spiking neurons (SNNs) are frequently studied as models for networks of neurons in the brain, but also as a paradigm for novel energy-efficient computing hardware. In principle they are especially suitable for computations in the temporal domain, such as speech processing, because their computations are carried out via events in time and space. But so far they have been lacking the capability to preserve information for longer time spans during a computation, until it is updated or needed - like a register of a digital computer. This function is provided to artificial neural networks through Long Short-Term Memory (LSTM) units. We show here that SNNs attain similar capabilities if one includes adapting neurons in the network. Adaptation denotes an increase of the firing threshold of a neuron after preceding firing. A substantial fraction of neurons in the neocortex of rodents and humans has been found to be adapting. It turns out that if adapting neurons are integrated in a suitable manner into the architecture of SNNs, the performance of these enhanced SNNs, which we call LSNNs, for computation in the temporal domain approaches that of artificial neural networks with LSTM units. In addition, the computing and learning capabilities of LSNNs can be substantially enhanced through learning-to-learn (L2L) methods from machine learning, that have so far been applied primarily to LSTM networks and apparently never to SNNs.
This preliminary report on arXiv will be replaced by a more detailed version in about a month.
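
The adaptation mechanism is easy to play with in isolation. A toy sketch of a leaky integrate-and-fire unit whose threshold jumps after each spike and decays back slowly (my own minimal version, not the LSNN architecture or its learning-to-learn training):

```python
import numpy as np

def simulate_adaptive_lif(input_current, dt=1.0, tau_m=20.0, tau_a=200.0,
                          v_th0=1.0, beta=0.5):
    """Leaky integrate-and-fire neuron whose firing threshold rises after each spike.

    input_current : 1-D array of input current per time step.
    tau_m, tau_a  : membrane and adaptation time constants (same units as dt).
    v_th0         : baseline firing threshold.
    beta          : how strongly each spike raises the threshold.
    Returns the spike train as a 0/1 array.
    """
    v, a = 0.0, 0.0                       # membrane potential, adaptation variable
    spikes = np.zeros(len(input_current))
    for t, i_t in enumerate(input_current):
        v += dt / tau_m * (-v + i_t)      # leaky integration of the input
        a *= np.exp(-dt / tau_a)          # adaptation decays back toward zero
        if v >= v_th0 + beta * a:         # effective threshold is raised by a
            spikes[t] = 1.0
            v = 0.0                       # reset the membrane after a spike
            a += 1.0                      # and bump the adaptation variable
    return spikes
```

The point is that tau_a can be much larger than tau_m, so the threshold carries a trace of past activity over long time spans - the register-like behavior the abstract is after.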
neural-networks  biological-engineering  dynamical-systems  rather-interesting  architecture  machine-learning  simulation  learning-by-watching  to-write-about 
11 weeks ago by Vaguery
[1706.05621] Double jump phase transition in a random soliton cellular automaton
In this paper, we consider the soliton cellular automaton introduced in [Takahashi 1990] with a random initial configuration. We give multiple constructions of a Young diagram describing various statistics of the system in terms of familiar objects like birth-and-death chains and Galton-Watson forests. Using these ideas, we establish limit theorems showing that if the first n boxes are occupied independently with probability p∈(0,1), then the number of solitons is of order n for all p, and the length of the longest soliton is of order log n for p<1/2, order √n for p=1/2, and order n for p>1/2. Additionally, we uncover a condensation phenomenon in the supercritical regime: For each fixed j≥1, the top j soliton lengths have the same order as the longest for p≤1/2, whereas all but the longest have order at most log n for p>1/2. As an application, we obtain scaling limits for the lengths of the kth longest increasing and decreasing subsequences in a random stack-sortable permutation of length n in terms of random walks and Brownian excursions.
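
To simulate: the update itself is tiny. A sketch of the carrier form of the Takahashi box-ball rule, as I understand the standard description (0/1 box occupancies, with empty boxes assumed off to the right):

```python
def box_ball_step(state):
    """One update of the Takahashi box-ball (soliton) cellular automaton,
    via the carrier description: sweep left to right, pick up every ball,
    and drop one into each empty box while the carrier is nonempty.

    state : list of 0/1 box occupancies.
    """
    carrier = 0
    new_state = []
    for box in state:
        if box == 1:
            carrier += 1          # pick the ball up
            new_state.append(0)
        elif carrier > 0:
            carrier -= 1          # drop one ball into the empty box
            new_state.append(1)
        else:
            new_state.append(0)
    # any balls still carried land just past the original window
    new_state.extend([1] * carrier)
    return new_state
```

For example, box_ball_step([1, 1, 0, 1, 0, 0, 0]) returns [0, 0, 1, 0, 1, 1, 0]; iterating a few more steps shows the longer soliton passing through the shorter one, as solitons do.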
cellular-automata  self-organization  rather-interesting  dynamical-systems  phase-transitions  to-write-about 
february 2018 by Vaguery
[1709.05601] Markov Brains: A Technical Introduction
Markov Brains are a class of evolvable artificial neural networks (ANN). They differ from conventional ANNs in many aspects, but the key difference is that instead of a layered architecture, with each node performing the same function, Markov Brains are networks built from individual computational components. These computational components interact with each other, receive inputs from sensors, and control motor outputs. The function of the computational components, their connections to each other, as well as connections to sensors and motors are all subject to evolutionary optimization. Here we describe in detail how a Markov Brain works, what techniques can be used to study them, and how they can be evolved.
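
As a note to self about how small the core update loop is: a sketch of a probabilistic gate and one synchronous brain update. The names and the table convention are mine; the paper specifies several gate types plus the genome encoding that evolution acts on.

```python
import random

class ProbabilisticGate:
    """One Markov Brain component: reads a few state bits, writes a few,
    according to an evolvable probability table."""
    def __init__(self, in_idx, out_idx, table):
        self.in_idx = in_idx      # which state bits this gate reads
        self.out_idx = out_idx    # which state bits it writes
        self.table = table        # table[input_pattern] -> probabilities over output patterns

    def fire(self, state, next_state):
        pattern = tuple(state[i] for i in self.in_idx)
        probs = self.table[pattern]
        # sample an output pattern and OR its bits into the next-state buffer
        out = random.choices(range(len(probs)), weights=probs)[0]
        for k, j in enumerate(self.out_idx):
            next_state[j] |= (out >> k) & 1

def brain_update(state, gates):
    """One synchronous update: every gate reads the old state, writes the new one."""
    next_state = [0] * len(state)
    for g in gates:
        g.fire(state, next_state)
    return next_state
```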
representation  evolutionary-algorithms  dynamical-systems  emergent-design  to-write-about  to-invite-to-speak  hey-I-know-these-folks 
february 2018 by Vaguery
Metrical properties for a class of continued fractions with increasing digits - ScienceDirect
In 2002, Hartono, Kraaikamp and Schweiger introduced the Engel continued fractions (ECF), whose partial quotients are increasing. Later, Schweiger generalized it into a class of continued fractions with increasing digits and a parameter ϵ, called generalized continued fractions (GCF). In this paper, we will give some arithmetic properties of such an expansion, and show that the GCF shares similar metric properties with the ECF under one condition on ϵ, but not under another.
continued-fractions  number-theory  to-understand  algorithms  dynamical-systems  to-write-about 
january 2018 by Vaguery
[1606.05099] Invariant measures for continued fraction algorithms with finitely many digits
In this paper we consider continued fraction (CF) expansions on intervals different from [0,1]. For every x in such interval we find a CF expansion with a finite number of possible digits. Using the natural extension, the density of the invariant measure is obtained in a number of examples. In case this method does not work, a Gauss-Kuzmin-Lévy based approximation method is used. Finally, a subfamily of the N-expansions is studied. In particular, the entropy as a function of a parameter α is estimated for N=2 and N=36. Interesting behavior can be observed from numerical results.
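
For the looking-to-see itch, the classical case on [0,1] can be checked empirically in a few lines by histogramming a long orbit of the Gauss map against the known invariant density 1/((1+x) ln 2); the N-expansions in the paper live on other intervals, where the density has to come from the natural extension or a Gauss-Kuzmin-Lévy-style approximation instead. A minimal sketch (mine, not the paper's):

```python
import numpy as np

def gauss_map_histogram(n_iter=10**6, bins=50, seed=0):
    """Empirically check the invariant density of the classical Gauss map
    T(x) = 1/x mod 1 on [0,1], whose invariant (Gauss) measure has density
    1 / ((1 + x) * ln 2).  Returns bin midpoints, empirical density, exact density."""
    rng = np.random.default_rng(seed)
    x = rng.random()
    samples = np.empty(n_iter)
    for i in range(n_iter):
        x = 1.0 / x % 1.0
        if x == 0.0:              # orbit hit a rational; restart from a fresh point
            x = rng.random()
        samples[i] = x
    hist, edges = np.histogram(samples, bins=bins, range=(0, 1), density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return mids, hist, 1.0 / ((1.0 + mids) * np.log(2.0))
```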
continued-fractions  number-theory  looking-to-see  dynamical-systems  to-write-about 
january 2018 by Vaguery
[1710.05183] Inferring Mesoscale Models of Neural Computation
Recent years have seen dramatic progress in the development of techniques for measuring the activity and connectivity of large populations of neurons in the brain. However, as these techniques grow ever more powerful---allowing us to even contemplate measuring every neuron in an entire brain---a new problem arises: how do we make sense of the mountains of data that these techniques produce? Here, we argue that the time is ripe for building an intermediate or "mesoscale" computational theory that can bridge between single-cell (microscale) accounts of neural function and behavioral (macroscale) accounts of animal cognition and environmental complexity. Just as digital accounts of computation in conventional computers abstract away the non-essential dynamics of the analog circuits that implement gates and registers, so too a computational account of animal cognition can afford to abstract from the non-essential dynamics of neurons. We argue that the geometry of neural circuits is essential in explaining the computational limitations and technological innovations inherent in biological information processing. We propose a blueprint for how to employ tools from modern machine learning to automatically infer a satisfying mesoscale account of neural computation that combines functional and structural data, with an emphasis on learning and exploiting regularity and repeating motifs in neuronal circuits. Rather than suggest a specific theory, we present a new class of scientific instruments that can enable neuroscientists to design, propose, implement and test mesoscale theories of neural computation.
dynamical-systems  machine-learning  deep-learning  representation  temporal-models  rather-interesting  inference  to-write-about 
november 2017 by Vaguery
Understanding LSTM Networks -- colah's blog
Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence.

Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to classify what kind of event is happening at every point in a movie. It’s unclear how a traditional neural network could use its reasoning about previous events in the film to inform later ones.
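
The post goes on to unpack the gate equations; for quick reference, a minimal NumPy sketch of one LSTM step in the standard formulation (the stacked parameter layout is my own convention):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    x       : input vector at this time step, shape (D,)
    h_prev  : previous hidden state, shape (H,)
    c_prev  : previous cell state, shape (H,)
    W, U, b : stacked parameters for the forget, input, output and candidate
              blocks, shapes (4H, D), (4H, H), (4H,)
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[0*H:1*H])            # forget gate: what to keep of c_prev
    i = sigmoid(z[1*H:2*H])            # input gate: how much new content to write
    o = sigmoid(z[2*H:3*H])            # output gate: what to expose as h
    g = np.tanh(z[3*H:4*H])            # candidate cell content
    c = f * c_prev + i * g             # cell state carries the long-term memory
    h = o * np.tanh(c)                 # hidden state is the gated readout
    return h, c
```

The cell state c is the conveyor belt the post talks about: the forget and input gates decide what stays on it, and the output gate decides what leaks into h.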
recurrent-neural-nets  machine-learning  architecture  dynamical-systems  explainer 
november 2017 by Vaguery
[1711.00963] The Computational Complexity of Finding Separators in Temporal Graphs
Vertex separators, that is, vertex sets whose deletion disconnects two distinguished vertices in a graph, play a pivotal role in algorithmic graph theory. For instance, the concept of tree decompositions of graphs is tightly connected to the separator concept. For many realistic models of the real world, however, it is necessary to consider graphs whose edge set changes with time. More specifically, the edges are labeled with time stamps. In the literature, these graphs are referred to as temporal graphs, temporal networks, time-varying networks, edge-scheduled networks, etc. While there is an extensive literature on separators in "static" graphs, much less is known for the temporal setting. Building on previous work (e.g., Kempe et al. [STOC '00]), for the first time we systematically investigate the (parameterized) complexity of finding separators in temporal graphs. Doing so, we discover a rich landscape of computationally (fixed-parameter) tractable and intractable cases. In particular, we shed light on the so far seemingly overlooked fact that two frequently used models of temporal separation may lead to quite significant differences in terms of computational complexity. More specifically, considering paths in temporal graphs one may distinguish between strict paths (the time stamps along a path are strictly increasing) and non-strict paths (the time stamps along a path are monotonically non-decreasing). We observe that the corresponding strict case of temporal separators leads to several computationally much easier to handle cases than the non-strict case does.
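
To get a feel for the strict-path definition and the separator question it induces, a brute-force sketch: earliest-arrival reachability along strictly increasing time stamps, plus an exhaustive search for a smallest separator. This is just my reading of the definitions, usable only on tiny instances - nothing like the parameterized algorithms of the paper.

```python
from itertools import combinations

def strict_temporal_reachable(edges, s, t, removed=frozenset()):
    """True if a strict temporal path (strictly increasing time stamps) exists
    from s to t in an undirected temporal graph, avoiding 'removed' vertices.
    edges : iterable of (u, v, time) triples."""
    arrival = {s: float('-inf')}          # earliest arrival time at each vertex
    for u, v, time in sorted(edges, key=lambda e: e[2]):
        if u in removed or v in removed:
            continue
        for a, b in ((u, v), (v, u)):
            if a in arrival and arrival[a] < time:
                if time < arrival.get(b, float('inf')):
                    arrival[b] = time
    return t in arrival

def smallest_strict_temporal_separator(vertices, edges, s, t):
    """Smallest vertex set (excluding s and t) whose removal destroys all strict
    temporal s-t paths.  Exhaustive search: tiny instances only."""
    others = [v for v in vertices if v not in (s, t)]
    for k in range(len(others) + 1):
        for sep in combinations(others, k):
            if not strict_temporal_reachable(edges, s, t, frozenset(sep)):
                return set(sep)
    return None  # no vertex separator exists (e.g. s and t share an edge)
```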
graph-theory  combinatorics  dynamical-systems  robustness  nudge-targets  heuristics  algorithms  to-write-about 
november 2017 by Vaguery


related tags

(in-practice)  active-matter  agent-based  algebra  algorithms  analysis  aperiodic-tiling  architecture  asynchronous  automata  billiards  biological-engineering  biology  biophysics  books  boolean-networks  brueckner-networks  cellular-automata  chaos  chip-firing  collective-behavior  collective-intelligence  combinatorics  community-detection  complexology  computational-complexity  computational-geometry  consider:classification  consider:control-theory  consider:extensive-properties  consider:feature-discovery  consider:fractel-type-in-klapaucius  consider:looking-to-see  consider:open-systems  consider:performance-measures  consider:prediction  consider:representation  consider:req-like-systems  consider:review  consider:robustness  consider:simple-examples  consider:simulation  consider:simulations  consider:variant-overlays  constraint-satisfaction  continued-fractions  control-theory  coupled-oscillators  deep-learning  define-your-terms  differential-geometry  discrete-mathematics  dynamic-optimization  ecology  economics  electronics  emergence  emergent-design  emulation  engineering-design  ergodic-theory  evolution  evolutionary-algorithms  explainer  feature-construction  feature-extraction  formal-languages  fractals  game-theory  generative-models  generative-processes  geometry  graph-layout  graph-theory  heuristics  hey-i-know-these-folks  ifs  inference  information-theory  inverse-problems  kauffmania  lattice-polymers  learning-by-watching  learning-theory  looking-to-see  machine-learning  macroeconomics  mathematical-model  mathematical-recreations  mathematics  models-and-modes  models  multiagent-systems  multiobjective-optimization  network-theory  neural-networks  neurology  nonlinear-dynamics  nudge-targets  nudge  number-theory  numerics  odes  open-questions  optimization  out-of-the-box  papers  parameter-scanning  parametrization  pdes  phase-transitions  philosophy-of-science  physics!  physics  plane-geometry  pnas  probability-theory  probability  programming  proof  python  random-walks  rather-interesting  reaction-networks  recurrent-neural-nets  representation  rewriting-systems  robustness  sandpile  sandpiles  science  self-organization  self-similarity  signal-processing  simulation  software-engineering  sorting  statistical-mechanics  statistics  stochastic-resonance  stochastic-systems  strings  substitution-systems  systems-biology  temporal-models  theoretical-biology  tiling  to-cartoon-about  to-do  to-draw  to-invite-to-speak  to-read  to-simulate  to-understand  to-write-about  to-write  visualization  wikipedia 
