to-write-about   1024


The Only Woman to Win the Nobel Prize in Economics Also Debunked the Orthodoxy - Evonomics
I mention Lloyd’s essay to illustrate how ridiculous yet persistent the misconceptions about the “tragedy” dynamic truly are. Commons scholar Lewis Hyde dryly notes, “Just as Hardin proposes a herdsman whose reason is unable to encompass the common good, so Lloyd supposes persons who have no way to speak with each other or make joint decisions. Both writers inject laissez-faire individualism into an old agrarian village and then gravely announce that the commons is dead. From the point of view of such a village, Lloyd’s assumptions are as crazy as asking us to ‘suppose a man to have a purse to which his left and right hand may freely resort, each unaware of the other’.”

This absurdity, unfortunately, is the basis for a large literature of “prisoner’s dilemma” experiments that purport to show how “rational individuals” behave when confronted with “social dilemmas,” such as how to allocate a limited resource. Should the “prisoner” cooperate with other potential claimants and share the limited rewards? Or should they defect, grabbing as much for themselves as possible?
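
A quick illustrative sketch of the dilemma being criticized (Python). The payoff values are the conventional textbook choices, my own illustration rather than anything from the article:

# Minimal one-shot prisoner's dilemma; payoffs (3, 1, 5, 0) are standard
# textbook values, not taken from the article. Any T > R > P > S works.
PAYOFF = {
    # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,   # reward R
    ("defect",    "defect"):    1,   # punishment P
    ("defect",    "cooperate"): 5,   # temptation T
    ("cooperate", "defect"):    0,   # sucker's payoff S
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(["cooperate", "defect"], key=lambda mine: PAYOFF[(mine, their_move)])

# Defection dominates regardless of the other player's move -- exactly the
# "rational individual" assumption the article pushes back against.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"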
economics  ideology  public-policy  models-and-modes  commons  to-write-about  theory-and-practice-sitting-in-a-tree  libertarianism  assumptions 
7 days ago by Vaguery
[1804.05445] Evolving Event-driven Programs with SignalGP
We present SignalGP, a new genetic programming (GP) technique designed to incorporate the event-driven programming paradigm into computational evolution's toolbox. Event-driven programming is a software design philosophy that simplifies the development of reactive programs by automatically triggering program modules (event-handlers) in response to external events, such as signals from the environment or messages from other programs. SignalGP incorporates these concepts by extending existing tag-based referencing techniques into an event-driven context. Both events and functions are labeled with evolvable tags; when an event occurs, the function with the closest matching tag is triggered. In this work, we apply SignalGP in the context of linear GP. We demonstrate the value of the event-driven paradigm using two distinct test problems (an environment coordination problem and a distributed leader election problem) by comparing SignalGP to variants that are otherwise identical, but must actively use sensors to process events or messages. In each of these problems, rapid interaction with the environment or other agents is critical for maximizing fitness. We also discuss ways in which SignalGP can be generalized beyond our linear GP implementation.
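
A toy sketch of the tag-matching dispatch described in the abstract (Python). The bit-string tags and simple similarity measure are my own simplifications for illustration, not necessarily SignalGP's actual representation:

# Events and handlers both carry tags; an event triggers the handler whose tag
# matches most closely. Bit-string tags and positional similarity are assumptions.
def similarity(tag_a, tag_b):
    """Fraction of positions where two equal-length bit-string tags agree."""
    return sum(a == b for a, b in zip(tag_a, tag_b)) / len(tag_a)

def dispatch(event_tag, functions):
    """Return the handler whose tag best matches the event's tag."""
    return max(functions, key=lambda f: similarity(event_tag, f["tag"]))

functions = [
    {"tag": "1100", "body": "handle_food_signal"},
    {"tag": "0011", "body": "handle_neighbor_message"},
]
print(dispatch("1101", functions)["body"])   # -> handle_food_signal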
gptp  hey-I-know-this-guy  genetic-programming  representation  to-write-about 
7 days ago by Vaguery
[1803.05859v3] Neural Network Quine
Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights. The network is designed using a loss function that can be optimized with either gradient-based or non-gradient-based methods. We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters. The best solution for a self-replicating network was found by alternating between regeneration and optimization steps. Finally, we describe a design for a self-replicating neural network that can solve an auxiliary task such as MNIST image classification. We observe that there is a trade-off between the network's ability to classify images and its ability to replicate, but training is biased towards increasing its specialization at image classification at the expense of replication. This is analogous to the trade-off between reproduction and other tasks observed in nature. We suggest that a self-replication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection.
artificial-life  machine-learning  quines  rather-interesting  to-write-about  hey-I-know-this-guy 
7 days ago by Vaguery
[1801.07155] Continued fractions and orderings on the Markov numbers
Markov numbers are integers that appear in the solution triples of the Diophantine equation x^2 + y^2 + z^2 = 3xyz, called the Markov equation. A classical topic in number theory, these numbers are related to many areas of mathematics such as combinatorics, hyperbolic geometry, approximation theory and cluster algebras.
There is a natural map from the rational numbers between zero and one to the Markov numbers. In this paper, we prove two conjectures seen in Martin Aigner's book, Markov's theorem and 100 years of the uniqueness conjecture, that determine an ordering on subsets of the Markov numbers based on their corresponding rational.
The proof uses the cluster algebra of the torus with one puncture and a resulting reformulation of the conjectures in terms of continued fractions. The key step is to analyze the difference in the numerator of a continued fraction when an operation is applied to its entries.
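
For a hands-on feel, the Markov triples themselves are easy to enumerate from (1, 1, 1) with the Vieta moves on the Markov equation; a small Python sketch (the cutoff of 200 is an arbitrary choice of mine):

# Generate small Markov triples by repeatedly applying z -> 3xy - z to each
# coordinate of x^2 + y^2 + z^2 = 3xyz, starting from (1, 1, 1).
def neighbors(triple):
    x, y, z = triple
    return [
        tuple(sorted((3 * y * z - x, y, z))),
        tuple(sorted((x, 3 * x * z - y, z))),
        tuple(sorted((x, y, 3 * x * y - z))),
    ]

def markov_triples(limit):
    """All Markov triples whose largest entry is at most `limit`."""
    seen, frontier = set(), [(1, 1, 1)]
    while frontier:
        t = frontier.pop()
        if t in seen or max(t) > limit:
            continue
        seen.add(t)
        frontier.extend(neighbors(t))
    return sorted(seen, key=max)

for t in markov_triples(200):
    assert sum(v * v for v in t) == 3 * t[0] * t[1] * t[2]   # sanity check
    print(t)   # (1,1,1), (1,1,2), (1,2,5), (1,5,13), (2,5,29), ...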
number-theory  continued-fractions  rather-interesting  to-write-about  consider:looking-to-see  consider:generalizations 
15 days ago by Vaguery
Agility should pay attention to Sociology – Romeu Moura – Medium
So I wrote myself a thread to explain 3 concepts from Bourdieu and their impact on Agility
sociology  agility  rather-good  to-write-about 
16 days ago by Vaguery
The Artist Who Drew With Computers, Before Computers Were a Thing - SURFACE
“What makes Molnár’s work so important today is that her ability to experiment was aided and amplified by the tools she used,” say Anderson and Bianconi. “This spirit of experimentation allowed these works to be both systematic and humanistic, and has been influential for artists who have worked with computers since.”
generative-art  art-criticism  history  to-write-about  exhibition 
16 days ago by Vaguery
The Metonymy of Matrices - Scientific American Blog Network
As a tool, the matrix is so powerful that it is easy to forget that it is a representation of a function, not a function itself. A matrix truly is just the array of numbers, but I think in this context, most mathematicians are metonymists. (Metonymers? Metonymnistes?) We think of the matrix as the function itself, and it’s easy to lose sight of the fact that it's only notation. Matrices don’t even have to encode linear transformations. They are used in other contexts in mathematics, too, and restricting our definition to linear transformations can shortchange the other applications (though for my money, the value of the matrix as a way of representing linear transformations dwarfs any other use they have).
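
The metonymy in a couple of lines of NumPy: the same 2×2 array is "just numbers" and, via matrix-vector multiplication, the rotation it represents (illustrative example of my own, not from the post):

import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # just a 2x2 array of numbers...

rotate_90 = lambda v: R @ v        # ...but we routinely treat it as the function it encodes

print(rotate_90(np.array([1.0, 0.0])))   # [0. 1.] -- the x-axis sent to the y-axis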
matrices  representation  functions  mathematical-recreations  to-write-about 
18 days ago by Vaguery
Orthogonal polygons | The Math Less Traveled
Quite a few commenters figured out what was going on, and mentioned several nice (equivalent) ways to think about it. Primarily, the idea is to draw all possible orthogonal polygons, that is, polygons with only right angles, organized by the total number of vertices. (So, for example, the picture above shows all orthogonal polygons with exactly ten vertices.) However, we have to be careful what we mean by the phrase “all possible”: there would be an infinite number of such polygons if we think about things like the precise lengths of edges. So we have to say when two polygons are considered the same, and when they are distinct. My rules are as follows:
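
A small "looking to see" helper for the definition used in the post, checking that a vertex list describes an orthogonal polygon; the equivalence rules the post goes on to define are not modeled here (Python sketch of my own):

# A polygon is orthogonal when every edge is axis-aligned and consecutive edges
# alternate between horizontal and vertical (so every angle is a right angle).
# Vertices are assumed to be listed in order around a simple (non-self-intersecting) polygon.
def is_orthogonal(vertices):
    n = len(vertices)
    directions = []
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if x1 == x2 and y1 != y2:
            directions.append("v")
        elif y1 == y2 and x1 != x2:
            directions.append("h")
        else:
            return False                     # diagonal or zero-length edge
    return all(directions[i] != directions[(i + 1) % n] for i in range(n))

print(is_orthogonal([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]))  # True (an L-shape)
print(is_orthogonal([(0, 0), (2, 0), (1, 2)]))                          # False (a triangle)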
mathematical-recreations  looking-to-see  geometry  polyominoes  to-write-about 
20 days ago by Vaguery
Props in Network Theory | Azimuth
We start with circuits made solely of ideal perfectly conductive wires. Then we consider circuits with passive linear components like resistors, capacitors and inductors. Finally we turn on the power and consider circuits that also have voltage and current sources.

And here’s the cool part: each kind of circuit corresponds to a prop that pure mathematicians would eventually invent on their own! So, what’s good for engineers is often mathematically natural too.

commentary: while abstract, it might be worth trying to understand this stuff
network-theory  abstraction  rather-interesting  models-and-modes  circles-and-arrows  bond-diagrams  to-write-about  to-understand  functional-programming  category-theory  via:Vaguery 
23 days ago by WMTrenfield
Generating function - Wikipedia
In mathematics, a generating function is a way of encoding an infinite sequence of numbers (a_n) by treating them as the coefficients of a power series. The sum of this infinite series is the generating function. Unlike an ordinary series, this formal series is allowed to diverge, meaning that the generating function is not always a true function and the "variable" is actually an indeterminate. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem.[1] One can generalize to formal series in more than one indeterminate, to encode information about arrays of numbers indexed by several natural numbers.
There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series; definitions and examples are given below. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed.
Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations defined for formal series. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal series as its series expansion; this explains the designation "generating functions". However such interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted for x. Also, not all expressions that are meaningful as functions of x are meaningful as expressions designating formal series; for example, negative and fractional powers of x are examples of functions that do not have a corresponding formal power series.
Generating functions are not functions in the formal sense of a mapping from a domain to a codomain. Generating functions are sometimes called generating series,[2] in that a series of terms can be said to be the generator of its sequence of term coefficients.
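
A minimal concrete instance of the encoding described above, using SymPy to expand the ordinary generating function of the Fibonacci numbers (an example of my own choosing):

# The closed form x / (1 - x - x^2) "generates" the Fibonacci numbers as its
# power-series coefficients.
from sympy import symbols, series

x = symbols('x')
print(series(x / (1 - x - x**2), x, 0, 10))
# x + x**2 + 2*x**3 + 3*x**4 + 5*x**5 + 8*x**6 + 13*x**7 + 21*x**8 + 34*x**9 + O(x**10)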
mathematics  via:arthegall  to-understand  rather-interesting  to-write-about  nudge-targets  hmmmmmm 
27 days ago by Vaguery
Complexity Explorer
New for 2018, our flagship course, Introduction to Complexity, will be open year round. All units will be available at all times, so you can learn the fundamentals of Complex Systems Science at your own pace, and earn your certificate at any time. 

All other courses are designed to be similar to a semester-long course, offering the chance to delve into a subject deeply. Courses are available in two ways: as a course session, or an archived course.  Each course is usually offered in one session per calendar year.

Tutorials are short, self-paced “mini-courses” designed to introduce students to important techniques and to provide illustrations of their application in complex systems.

Complexity Explorer's courses and tutorials are supported by user donations and contributions from the Santa Fe Institute.  Please consider donating to support further course and tutorial development.
MOOC  complexology  hey-I-know-this-guy  to-watch  to-write-about 
27 days ago by Vaguery
[1804.01622] Image Generation from Scene Graphs
To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.
deep-learning  image-processing  generative-art  generative-models  turning-the-handle-the-other-way  to-write-about  to-do 
5 weeks ago by Vaguery
Exactly how bad is the 13 times table? | The Aperiodical
Along the way, OEIS editor Charles R Greathouse IV added this intriguing conjecture:

Conjecture: a(n) ≤ N for all n. Perhaps N can be taken as 81.
number-theory  mathematical-recreations  open-questions  to-write-about  consider:feature-discovery 
5 weeks ago by Vaguery
mathrecreation: Tigers and Treasure
The graph below shows all 196 possible puzzles. The puzzles that lead to contradictions are coloured as white squares; the puzzles that do not lead to contradictions are coloured in black. We can see a white strip across the bottom: these are all the puzzles where door 2 is labelled with statement 1. Can you find the statement that always leads to contradictions for door 1? There are 131 puzzles that are "good" in the sense that they do not lead to contradictions.
mathematical-recreations  logic  looking-to-see  to-write-about  consider:robustness  nudge-targets 
6 weeks ago by Vaguery
[1803.01422] DAGs with NO TEARS: Smooth Optimization for Structure Learning
Estimating the structure of directed acyclic graphs (DAGs, also known as Bayesian networks) is a challenging problem since the search space of DAGs is combinatorial and scales superexponentially with the number of nodes. Existing approaches rely on various local heuristics for enforcing the acyclicity constraint and are not well-suited to general purpose optimization packages for their solution. In this paper, we introduce a fundamentally different strategy: We formulate the structure learning problem as a smooth, constrained optimization problem over real matrices that avoids this combinatorial constraint entirely. This is achieved by a novel characterization of acyclicity that is not only smooth but also exact. The resulting nonconvex, constrained program involves smooth functions whose gradients are easy to compute and only involve elementary matrix operations. By using existing black-box optimization routines, our method uses global search to find an optimal DAG and can be implemented in about 50 lines of Python and outperforms existing methods without imposing any structural constraints.
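
If I'm reading the paper right, the smooth acyclicity characterization is h(W) = tr(exp(W ∘ W)) − d, which vanishes exactly when the weighted adjacency matrix W describes a DAG; a quick NumPy/SciPy check (the example matrices are my own):

import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS-style acyclicity score: zero for DAGs, positive otherwise."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d      # W * W is the elementwise square

dag   = np.array([[0.0, 1.5, 0.0],
                  [0.0, 0.0, 2.0],
                  [0.0, 0.0, 0.0]])       # 0 -> 1 -> 2, no cycles
cycle = np.array([[0.0, 1.0],
                  [1.0, 0.0]])            # 0 -> 1 -> 0

print(acyclicity(dag))     # ~0.0
print(acyclicity(cycle))   # > 0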
graph-theory  combinatorics  algorithms  rather-interesting  computational-complexity  to-understand  to-write-about  nudge-targets  consider:looking-to-see  representation 
6 weeks ago by Vaguery
History Dependent Cellular Automata | Softology's Blog
I have been exploring a variety of cellular automata lately and here is another one.

This is from another idea I had. Andrew Adamatzky let me know there has been work done using previous states before referred to as “Cellular Automata with Memory”. See these papers by Ramon Alonso-Sanz for other examples of 1D and 2D CAs using memory from previous generations.

This is a totalistic CA that uses the usual 8 immediate neighbor cells as well as the last step’s current cell and 8 neighbors. This gives a total of 17 neighbor cells that can influence the birth and survival of the cells.

I call them “History Dependent Cellular Automata” because they depend on the previous cycles’ neighbor cells as well as the usual 8 immediate neighbor cells.
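
A rough sketch of one update step for this scheme (Python/NumPy). The birth/survival totals below are placeholders of my own, not the rule encodings used in the blog post:

import numpy as np

BIRTH   = {6, 7, 8}        # hypothetical: a dead cell is born on these totals
SURVIVE = {5, 6, 7, 8, 9}  # hypothetical: a live cell survives on these totals

def neighbour_sum(grid):
    """Sum of the 8 immediate neighbours (toroidal wrap) for every cell."""
    total = np.zeros_like(grid)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                total += np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
    return total

def step(current, previous):
    """One history-dependent update: 8 current neighbours + previous cell + its 8 neighbours = 17 cells."""
    counts = neighbour_sum(current) + neighbour_sum(previous) + previous
    born     = (current == 0) & np.isin(counts, list(BIRTH))
    survived = (current == 1) & np.isin(counts, list(SURVIVE))
    return (born | survived).astype(current.dtype)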
cellular-automata  generative-art  to-write-about 
6 weeks ago by Vaguery
[1804.00222] Learning Unsupervised Learning Rules
A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.
machine-learning  unsupervised-learning  rather-interesting  feature-extraction  clustering  algorithms  to-write-about 
6 weeks ago by Vaguery
[1803.05316] Seven Sketches in Compositionality: An Invitation to Applied Category Theory
This book is an invitation to discover advanced topics in category theory through concrete, real-world examples. It aims to give a tour: a gentle, quick introduction to guide later exploration. The tour takes place over seven sketches, each pairing an evocative application, such as databases, electric circuits, or dynamical systems, with the exploration of a categorical structure, such as adjoint functors, enriched categories, or toposes.
No prior knowledge of category theory is assumed.
category-theory  book  to-read  to-write-about  via:several  rather-interesting 
7 weeks ago by Vaguery
[1802.01396] To understand deep learning we need to understand kernel learning
Generalization performance of classifiers in deep learning has recently become a subject of intense study. Heavily over-parametrized deep models tend to fit training data exactly. Despite overfitting, they perform well on test data, a phenomenon not yet fully understood.
The first point of our paper is that strong performance of overfitted classifiers is not a unique feature of deep learning. Using real-world and synthetic datasets, we establish that kernel classifiers trained to have zero classification error (overfitting) or even zero regression error (interpolation) perform very well on test data.
We proceed to prove lower bounds on the norm of overfitted solutions for smooth kernels, showing that they increase nearly exponentially with the data size. Since most generalization bounds depend polynomially on the norm of the solution, this result implies that they diverge as data increases. Furthermore, the existing bounds do not apply to interpolated classifiers.
We also show experimentally that (non-smooth) Laplacian kernels easily fit random labels using a version of SGD, a finding that parallels results reported for ReLU neural networks. In contrast, fitting noisy data requires many more epochs for smooth Gaussian kernels. The observation that the performance of overfitted Laplacian and Gaussian classifiers on test data is quite similar suggests that generalization is tied to the properties of the kernel function rather than the optimization process.
We see that some key phenomena of deep learning are manifested similarly in kernel methods in the overfitted regime. We argue that progress on understanding deep learning will be difficult, until more analytically tractable "shallow" kernel methods are better understood. The experimental and theoretical results presented in this paper indicate a need for new theoretical ideas for understanding classical kernel methods.
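
A looking-to-see sketch of the interpolation phenomenon the abstract starts from, using scikit-learn's KernelRidge with a Laplacian kernel on synthetic data; the dataset, kernel width, and ridge penalty are my own choices, not the paper's experimental setup:

# A kernel machine driven to (near-)zero training error can still generalize.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.kernel_ridge import KernelRidge

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Tiny ridge penalty -> the model essentially interpolates the training labels.
model = KernelRidge(kernel="laplacian", gamma=0.1, alpha=1e-8).fit(X_tr, y_tr)

train_acc = ((model.predict(X_tr) > 0.5) == y_tr).mean()
test_acc  = ((model.predict(X_te) > 0.5) == y_te).mean()
print(f"train accuracy ~ {train_acc:.3f}, test accuracy ~ {test_acc:.3f}")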
machine-learning  philosophy-of-engineering  looking-to-see  statistics  algorithms  explanation  to-write-about  explanatory-power-and-narrative-discovery 
7 weeks ago by Vaguery


