[1506.09039] Scalable Discrete Sampling as a Multi-Armed Bandit Problem

14 hours ago

Drawing a sample from a discrete distribution is one of the building blocks of Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from a high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution based on subsampling. We make a novel connection between discrete sampling and Multi-Armed Bandit problems with a finite reward population, and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms on both synthetic and real-world large-scale problems.
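The bandit connection can be sketched with the Gumbel-max trick, which turns discrete sampling into an argmax (best-arm identification) problem; the subsampled estimator below is a hedged illustration of the idea, not one of the paper's three algorithms, and all function names are my own:

```python
import math
import random

def gumbel_max_sample(logits, rng=random.Random(0)):
    """Exact sampling via the Gumbel-max trick: adding independent Gumbel
    noise to each log-weight and taking the argmax draws index i with
    probability proportional to exp(logits[i])."""
    noisy = [l - math.log(-math.log(rng.random())) for l in logits]
    return max(range(len(logits)), key=noisy.__getitem__)

def approx_sample(per_datum_logits, subsample, rng=random.Random(0)):
    """Approximate sampling when each logit is a sum over a large dataset:
    estimate each arm's logit from a random subsample of the data, then
    run Gumbel-max on the estimates (a sketch of the bandit view only)."""
    n = len(per_datum_logits[0])
    idx = rng.sample(range(n), subsample)
    est = [sum(row[i] for i in idx) * n / subsample for row in per_datum_logits]
    return gumbel_max_sample(est, rng)
```

The paper's contribution is to control the error of exactly this kind of subsampled argmax adaptively, with bandit-style confidence bounds, rather than with a fixed subsample size.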

sampling
inverse-problems
rather-interesting
probability-theory
simulation
engineering-design
nudge-targets
consider:feature-discovery
14 hours ago

[1612.00279] Nicolas-Auguste Tissot: A link between cartography and quasiconformal theory

14 hours ago

Nicolas-Auguste Tissot (1824--1897) published a series of papers on cartography in which he introduced a tool which later became known, among geographers, as the "Tissot indicatrix." This tool was broadly used during the twentieth century in both the theory and the practice of drawing geographical maps. The Tissot indicatrix is a graphical representation of a field of ellipses on a map that describes its distortion. Tissot studied extensively, from a mathematical viewpoint, the distortion of mappings from the sphere onto the Euclidean plane that are used in drawing geographical maps, and more generally he developed a theory for the distortion of mappings between general surfaces. His ideas are at the heart of the work on quasiconformal mappings that was developed several decades after him by Grötzsch, Lavrentieff, Ahlfors and Teichmüller. Grötzsch mentions the work of Tissot and uses terminology related to his name (in particular, Grötzsch uses the Tissot indicatrix). Teichmüller mentions the name of Tissot in a historical section of one of his fundamental papers, where he claims that quasiconformal mappings were used by geographers, but without giving any hint about the nature of Tissot's work. The name of Tissot is also missing from all the historical surveys on quasiconformal mappings. In the present paper, we report on this work of Tissot. We shall also mention some related works on cartography, on the differential geometry of surfaces, and on the theory of quasiconformal mappings. This will place Tissot's work in its proper context. The final version of this paper will appear in the journal Arch. Hist. Exact Sciences.

history
history-of-science
philosophy-of-engineering
maps
performance-measure
to-write-about
consider:generalization
14 hours ago

[1711.01092] Cost-Optimal Operation of Energy Storage Units: Impact of Uncertainties and Robust Estimator

17 hours ago

The rapid expansion of wind and solar energy leads to increasing volatility in electricity generation. Previous studies have shown that storage devices provide an opportunity to balance fluctuations in the power grid. An economical operation of these devices is linked to solutions of probabilistic optimization problems, because future generation is not deterministic in general. For this reason, reliable forecast methods as well as appropriate robust optimization algorithms play an increasingly important role in future power operation systems. Taking an uncertain availability of electricity into account, we present a method to calculate cost-optimal charging strategies for energy storage units. The proposed method solves subproblems that result from applying a sliding window approach to a linear program, utilizing statistical measures. A prerequisite of this method is a recently developed fast algorithm for storage-related optimization problems. To demonstrate the potential of the proposed method, a Power-to-Heat storage system is investigated as an example, using today's available forecast data and a robust statistical measure. The novel approach is then compared with a common robust optimization method for stochastic scenario problems. In this comparison, based on empirical analyses, the proposed method provides lower acquisition costs, especially for today's available forecasts, and is more robust under perturbations in the form of deteriorating predictions. Furthermore, the introduced approach is applicable to general cost-optimal operation problems and to real-time optimization concerning uncertain acquisition costs.
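The sliding-window idea can be illustrated with a receding-horizon toy: each window's subproblem is solved (here by brute force over three discrete charge levels, standing in for the paper's fast LP solver), and only the first hour's decision is committed before the window slides forward. The storage model, prices, and all parameter choices below are illustrative assumptions, not the paper's setup:

```python
from itertools import product

def plan_window(prices, soc, capacity, demand=1.0):
    """Brute-force the small subproblem over one forecast window: choose a
    purchase amount per hour (0, 1, or 2 units) minimizing cost while always
    serving the demand and respecting the storage capacity."""
    best, best_cost = None, float("inf")
    for plan in product((0.0, 1.0, 2.0), repeat=len(prices)):
        s, cost, feasible = soc, 0.0, True
        for buy, price in zip(plan, prices):
            s += buy
            cost += buy * price
            if s < demand or s - demand > capacity:  # must serve demand, fit in store
                feasible = False
                break
            s -= demand
        if feasible and cost < best_cost:
            best, best_cost = plan, cost
    return best

def sliding_window_dispatch(price_forecast, window=3, capacity=4.0):
    """Receding-horizon loop: solve each window, commit only the first hour.
    (Hours past the final window are left unscheduled in this sketch.)"""
    soc, schedule = 0.0, []
    for t in range(len(price_forecast) - window + 1):
        plan = plan_window(price_forecast[t:t + window], soc, capacity)
        buy = plan[0]
        soc += buy - 1.0
        schedule.append(buy)
    return schedule
```

On an alternating price series the dispatcher buys ahead in cheap hours and coasts through expensive ones, which is the qualitative behavior the paper's cost-optimal strategies formalize.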

operations-research
energy
prediction
time-series
nudge-targets
performance-measure
to-write-about
17 hours ago

[1710.00194] Where computer vision can aid physics: dynamic cloud motion forecasting from satellite images

17 hours ago

This paper describes a new algorithm for solar energy forecasting from a sequence of Cloud Optical Depth (COD) images. The algorithm is based on the following simple observation: the dynamics of clouds represented by COD images resembles the motion (transport) of a density in a fluid flow. This suggests that, to forecast the motion of COD images, it is sufficient to forecast the flow. The latter, in turn, can be accomplished by fitting a parametric model of the fluid flow to the COD images observed in the past. Namely, the learning phase of the algorithm is composed of the following steps: (i) given a sequence of COD images, snapshots of the optical flow are estimated from pairs of consecutive COD images; (ii) these snapshots are then assimilated into a Navier-Stokes Equation (NSE), i.e. an initial velocity field for the NSE is selected so that the corresponding NSE solution is as close as possible to the optical flow snapshots. The prediction phase consists of utilizing a linear transport equation, which describes the propagation of COD images in the fluid flow predicted by the NSE, to estimate the future motion of the COD images. The algorithm has been tested on COD images provided by two geostationary operational environmental satellites from NOAA covering the Western Hemisphere.
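The prediction phase, a linear transport equation moving a density along a flow, can be sketched in one dimension with a first-order upwind step on a periodic grid (grid spacing 1); the crude shift-matching "flow estimate" below is my stand-in for the paper's optical-flow and NSE machinery:

```python
def advect(density, velocity, dt=1.0):
    """One explicit upwind step of the linear transport equation
    d(rho)/dt + v * d(rho)/dx = 0 on a periodic 1-D grid. The paper's
    prediction phase propagates COD images with this kind of transport,
    but in 2-D and with an NSE-predicted velocity field."""
    n = len(density)
    out = []
    for i in range(n):
        if velocity >= 0:
            grad = density[i] - density[(i - 1) % n]  # upwind difference
        else:
            grad = density[(i + 1) % n] - density[i]
        out.append(density[i] - velocity * dt * grad)
    return out

def estimate_shift(prev, curr):
    """Crude 'optical flow' for the 1-D toy: the integer shift that best
    aligns two consecutive frames (argmax of circular cross-correlation)."""
    n = len(prev)
    score = lambda s: sum(prev[(i - s) % n] * curr[i] for i in range(n))
    return max(range(n), key=score)
```

Chaining `estimate_shift` on past frames and `advect` on the latest frame gives a miniature version of the learn-the-flow-then-transport pipeline.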

image-processing
prediction
nudge-targets
consider:looking-to-see
representation
low-hanging-fruit
17 hours ago

[1709.08609] Phase-Tunable Thermal Logic: Computation with Heat

17 hours ago

Boolean algebra, the branch of mathematics in which variables can assume only the values true or false, is the theoretical basis of classical computation. The analogy between Boolean operations and electronic switching circuits, highlighted by Shannon in 1938, paved the way to modern computation based on electronic devices. The growth of the computational power of such devices, after an exciting exponential Moore-law trend, is nowadays blocked by the heat dissipated by computational tasks, which became very demanding after chip miniaturization. Heat is often a detrimental form of energy which increases a system's entropy, decreasing the efficiency of logic operations. Here, we propose a physical system able to perform thermal logic operations by reversing the old heat-disorder epitome into a novel heat-order paradigm. We lay the foundations of heat computation by encoding logic state variables in temperature and introducing the thermal counterparts of electronic logic gates. Exploiting quantum effects in thermally biased Josephson junctions (JJs), we propound a possible realization of a functionally complete dissipationless logic. Our architecture ensures high operation stability and robustness, with switching frequencies reaching the GHz range.

nanotechnology
electronics
rather-interesting
boolean-networks
engineering-design
to-write-about
17 hours ago

[1705.08432] Question-Answering with Grammatically-Interpretable Representations

17 hours ago

We introduce an architecture, the Tensor Product Recurrent Network (TPRN). In our application of TPRN, internal representations learned by end-to-end optimization in a deep neural network performing a textual question-answering (QA) task can be interpreted using basic concepts from linguistic theory. No performance penalty need be paid for this increased interpretability: the proposed model performs comparably to a state-of-the-art system on the SQuAD QA task. The internal representation which is interpreted is a Tensor Product Representation: for each input word, the model selects a symbol to encode the word, and a role in which to place the symbol, and binds the two together. The selection is via soft attention. The overall interpretation is built from interpretations of the symbols, as recruited by the trained model, and interpretations of the roles as used by the model. We find support for our initial hypothesis that symbols can be interpreted as lexical-semantic word meanings, while roles can be interpreted as approximations of grammatical roles (or categories) such as subject, wh-word, determiner, etc. Fine-grained analysis reveals specific correspondences between the learned roles and parts of speech as assigned by a standard tagger (Toutanova et al. 2003), and finds several discrepancies in the model's favor. In this sense, the model learns significant aspects of grammar, after having been exposed solely to linguistically unannotated text, questions, and answers: no prior linguistic knowledge is given to the model. What is given is the means to build representations using symbols and roles, with an inductive bias favoring use of these in an approximately discrete manner.
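The binding mechanism at the heart of the model can be sketched directly: a Tensor Product Representation binds each symbol (filler) vector to a role vector with an outer product, superposes the bindings, and unbinds a symbol exactly when the role vectors are orthonormal. A minimal sketch with made-up vectors (the TPRN learns its symbols and roles end-to-end and selects them with soft attention, which this does not show):

```python
def bind(symbol, role):
    """Tensor product binding: the rank-1 matrix symbol ⊗ role."""
    return [[s * r for r in role] for s in symbol]

def superpose(bindings):
    """A TPR for a whole structure is the sum of its symbol/role bindings."""
    out = [[0.0] * len(bindings[0][0]) for _ in bindings[0]]
    for m in bindings:
        for i, row in enumerate(m):
            for j, v in enumerate(row):
                out[i][j] += v
    return out

def unbind(tpr, role):
    """With orthonormal roles, multiplying the TPR by a role vector
    recovers that role's symbol exactly."""
    return [sum(v * r for v, r in zip(row, role)) for row in tpr]
```

For example, binding symbol [1, 2] to role e1 and symbol [3, 4] to role e2, superposing, and unbinding with e1 returns [1, 2] untouched by the other binding.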

natural-language-processing
deep-learning
recurrent-neural-networks
rather-interesting
representation
to-write-about
machine-learning
17 hours ago

[1709.10030] Sparse Hierarchical Regression with Polynomials

17 hours ago

We present a novel method for exact hierarchical sparse polynomial regression. Our regressor is the degree-r polynomial which depends on at most k inputs, counting at most ℓ monomial terms, which minimizes the sum of the squares of its prediction errors. This hierarchical sparse specification aligns well with modern big-data settings where many inputs are not relevant for prediction purposes and the functional complexity of the regressor needs to be controlled so as to avoid overfitting. We present a two-step approach to this hierarchical sparse regression problem. First, we discard irrelevant inputs using an extremely fast input-ranking heuristic. Second, we take advantage of modern cutting-plane methods for integer optimization to solve the resulting reduced hierarchical (k,ℓ)-sparse problem exactly. The ability of our method to identify all k relevant inputs and all ℓ monomial terms is shown empirically to exhibit a phase transition. Crucially, the same transition also presents itself in our ability to reject all irrelevant features and monomials. In the regime where our method is statistically powerful, its computational complexity is, interestingly, on par with Lasso-based heuristics. The presented work fills a void: a lack of powerful, disciplined nonlinear sparse regression methods in high-dimensional settings. Our method is shown empirically to scale to regression problems with n≈10,000 observations and input dimension p≈1,000.
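The two-step structure can be sketched as follows: a correlation screen stands in for the paper's input-ranking heuristic, and the monomial enumeration shows the search space from which the exact MIO step would then select ℓ terms. This is a hedged sketch, not the authors' method:

```python
from itertools import combinations_with_replacement

def rank_inputs(X, y, k):
    """Step 1: keep the k inputs most correlated (in absolute value) with
    the response — a fast screening heuristic in the spirit of the paper's
    input ranking, not its exact rule."""
    n, p = len(X), len(X[0])
    def abs_corr(j):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        sx = sum((a - mx) ** 2 for a in col) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (sx * sy)) if sx and sy else 0.0
    return sorted(range(p), key=abs_corr, reverse=True)[:k]

def monomials(inputs, degree):
    """Candidate monomials of total degree <= degree over the surviving
    inputs; step 2 would pick the best l of these exactly via cutting
    planes for the mixed-integer formulation."""
    terms = []
    for d in range(1, degree + 1):
        terms.extend(combinations_with_replacement(inputs, d))
    return terms
```

The point of the hierarchy is visible here: screening first shrinks p to k, so the exact combinatorial step only ever sees monomials over k inputs.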

statistics
regression
algorithms
approximation
performance-measure
to-understand
nudge-targets
consider:looking-to-see
17 hours ago

[1710.06647] Image Restoration by Iterative Denoising and Backward Projections

17 hours ago

Inverse problems appear in many applications such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this work, we propose an alternative method for solving inverse problems using denoising algorithms, which requires less parameter tuning. We provide a theoretical analysis of the method, and empirically demonstrate that it is competitive with task-specific techniques and the P&P approach for image inpainting and deblurring.
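The idea can be sketched for 1-D inpainting: alternate a plug-in denoiser with a projection back onto the observed measurements. The neighbor-averaging "denoiser" and all parameters below are placeholder assumptions (a real Plug-and-Play system would plug in BM3D or a learned denoiser, and the paper's method differs in how the projection is weighted):

```python
def denoise(x, strength=0.5):
    """Stand-in denoiser: blend each sample toward its neighbors' mean."""
    n = len(x)
    return [(1 - strength) * x[i]
            + strength * 0.5 * (x[(i - 1) % n] + x[(i + 1) % n])
            for i in range(n)]

def inpaint(y, mask, iters=200):
    """Alternate denoising with a backward (data-consistency) projection:
    observed samples are clamped to their measurements, and missing ones
    are filled in by repeated denoising."""
    x = [v if m else 0.0 for v, m in zip(y, mask)]
    for _ in range(iters):
        x = denoise(x)
        x = [yi if m else xi for xi, yi, m in zip(x, y, mask)]
    return x
```

Even this toy shows the appeal of the framework: the denoiser encodes the image prior, while the forward model (here, the sampling mask) only enters through the projection step.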

inverse-problems
image-processing
superresolution
algorithms
performance-measure
to-write-about
nudge-targets
consider:looking-to-see
17 hours ago

[1707.06340] On Unlimited Sampling

17 hours ago

Shannon's sampling theorem provides a link between the continuous and the discrete realms, stating that bandlimited signals are uniquely determined by their values on a discrete set. This theorem is realized in practice using so-called analog-to-digital converters (ADCs). Unlike Shannon's sampling theorem, ADCs are limited in dynamic range: whenever a signal exceeds some preset threshold, the ADC saturates, resulting in aliasing due to clipping. The goal of this paper is to analyze an alternative approach that does not suffer from these problems. Our work is based on recent developments in ADC design, which allow for ADCs that reset rather than saturate, thus producing modulo samples. An open problem that remains is: given such modulo samples of a bandlimited function as well as the dynamic range of the ADC, how can the original signal be recovered, and what are sufficient conditions that guarantee perfect recovery? In this paper, we prove such sufficiency conditions and complement them with a stable recovery algorithm. Our results are not limited to certain amplitude ranges; in fact, the same circuit architecture allows for the recovery of arbitrarily large amplitudes as long as some estimate of the signal norm is available when recovering. Numerical experiments that corroborate our theory indeed show that it is possible to perfectly recover functions that take values orders of magnitude higher than the ADC's threshold.
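The recovery question can be illustrated in its simplest form: if the signal is oversampled enough that consecutive true samples differ by less than the ADC threshold, the folded differences determine the signal up to its first sample. A hedged sketch (the paper's actual algorithm works with higher-order differences and comes with stability guarantees this toy lacks):

```python
def mod_samples(x, lam):
    """Self-reset ADC model: fold each sample into [-lam, lam)."""
    return [((v + lam) % (2 * lam)) - lam for v in x]

def recover(y, lam):
    """Unwrap modulo samples: when true consecutive differences stay below
    lam, each wrapped difference can be corrected by the right multiple of
    2*lam and the signal rebuilt by summation. Assumes the first sample
    did not wrap."""
    x = [y[0]]
    for i in range(1, len(y)):
        d = y[i] - y[i - 1]
        d -= 2 * lam * round(d / (2 * lam))  # fold the difference back into [-lam, lam)
        x.append(x[-1] + d)
    return x
```

Note how the amplitude of the recovered ramp exceeds the threshold `lam` several times over, which is the headline capability of unlimited sampling.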

information-theory
rather-interesting
signal-processing
the-mangle-in-practice
hardware-changes-models
to-write-about
nudge-targets
consider:looking-to-see
17 hours ago

[1802.07740] Machine Theory of Mind

17 hours ago

Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans' ability to represent the mental states of others, including their desires, beliefs, and intentions. We propose to train a machine to build such models too. We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone. Through this process, it acquires a strong prior model for agents' behaviour, as well as the ability to bootstrap to richer predictions about agents' characteristics and mental states using only a small number of behavioural observations. We apply the ToMnet to agents behaving in simple gridworld environments, showing that it learns to model random, algorithmic, and deep reinforcement learning agents from varied populations, and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer & Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold false beliefs about the world. We argue that this system -- which autonomously learns how to model other agents in its world -- is an important step forward for developing multi-agent AI systems, for building intermediating technology for machine-human interaction, and for advancing the progress on interpretable AI.

artificial-intelligence
relevance
machine-learning
to-write-about
collective-intelligence
agents
consider:the-mangle
17 hours ago

Algebraic Semiotics

17 hours ago

Algebraic semiotics is a new approach to meaning and representation, and in particular to user interface design, that builds on five important insights from the last hundred years:

Semiotics: Signs are not isolated items; they come in systems, and the structure of a sign is to a great extent inherited from the system to which it belongs. Signs do not have pre-given "Platonic" meanings, but rather their meaning is relational, because signs are always interpreted in particular contexts. (The first sentence reflects the influence of Saussure, the second that of Peirce.)

Social Context: Signs are used by people as part of their participation in social groups; meaning is primarily a social phenomenon; its purpose is communication. (This reflects some concerns of post-structuralism.)

Morphisms: If some class of objects is interesting, then structure preserving maps or morphisms of those objects are also interesting - perhaps even more so. For semiotics, these morphisms are representations. Objects and morphisms together form structures known as categories.

Blending and Colimits: If some class of objects is interesting, then putting those objects together in various ways is probably also interesting. Morphisms can be used to indicate that certain subobjects are to be shared in such constructions, and colimits of various kinds are a category-theoretic formalization of ways to put objects together. In cognitive linguistics, blending has been identified as an important way to combine conceptual systems.

Algebraic Specification: Sign systems and their morphisms can be described and studied in a precise way using semantic methods based on equational logic that were developed for the theory of abstract data types.

semiotics
user-interface
user-experience
digital-humanities
computer-science
engineering-design
rather-interesting
to-understand
to-write-about
17 hours ago

[1711.08988] Exponential growth for self-reproduction in a catalytic reaction network: relevance of a minority molecular species and crowdedness

3 days ago

Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary to acquire evolvability at the primitive stage of life.

autocatalysis
artificial-life
origin-of-life
reaction-networks
self-organization
biochemistry
simulation
rather-interesting
to-write-about
to-simulate
3 days ago

[1706.05621] Double jump phase transition in a random soliton cellular automaton

3 days ago

In this paper, we consider the soliton cellular automaton introduced in [Takahashi 1990] with a random initial configuration. We give multiple constructions of a Young diagram describing various statistics of the system in terms of familiar objects like birth-and-death chains and Galton-Watson forests. Using these ideas, we establish limit theorems showing that if the first n boxes are occupied independently with probability p ∈ (0,1), then the number of solitons is of order n for all p, and the length of the longest soliton is of order log n for p < 1/2, of order √n for p = 1/2, and of order n for p > 1/2. Additionally, we uncover a condensation phenomenon in the supercritical regime: for each fixed j ≥ 1, the top j soliton lengths have the same order as the longest for p ≤ 1/2, whereas all but the longest have order at most log n for p > 1/2. As an application, we obtain scaling limits for the lengths of the k-th longest increasing and decreasing subsequences in a random stack-sortable permutation of length n in terms of random walks and Brownian excursions.
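The automaton in question is the Takahashi box-ball system, which is easy to simulate with the carrier algorithm: sweep left to right, pick up every ball, and drop one into each empty box while the carrier is loaded. A solid block of k balls is a soliton that travels k boxes per step:

```python
def box_ball_step(state):
    """One time step of the Takahashi soliton cellular automaton via the
    carrier algorithm. Balls still in the carrier at the end of the sweep
    would exit to the right; pad the state with empty boxes to keep them."""
    carrier, out = 0, []
    for box in state:
        if box:
            carrier += 1   # pick up the ball
            out.append(0)
        elif carrier:
            carrier -= 1   # drop one ball into the empty box
            out.append(1)
        else:
            out.append(0)
    return out
```

Starting from a random 0/1 configuration and iterating this map is exactly the experiment whose soliton-length statistics the paper analyzes.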

cellular-automata
self-organization
rather-interesting
dynamical-systems
phase-transitions
to-write-about
3 days ago

Browse Princeton Catalog in Popular Math | Princeton University Press

3 days ago

Browse Princeton's Math Subject Area (by Title) in Popular Math

mathematics
books
to-read
nudge-targets
mathematical-recreations
3 days ago

[1802.01548] Regularized Evolution for Image Classifier Architecture Search

3 days ago

The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study employs a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly successful reinforcement learning baseline. Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show that regularized evolution consistently produces models with similar or higher accuracy across a variety of contexts, without the need to re-tune parameters. In addition, evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with evolution that we nickname AmoebaNets. These models set a new state of the art for CIFAR-10 (mean test error = 2.13%) and mobile-size ImageNet (top-5 accuracy = 92.1% with 5.06M parameters), and reach the current state of the art for ImageNet (top-5 accuracy = 96.2%).
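The "regularization" here is aging: the population is a queue, and each cycle removes the oldest member rather than the worst. The loop is short enough to sketch; the 8-bit-string search space and ones-counting fitness below are my toy stand-ins for architectures and validation accuracy:

```python
import random
from collections import deque

def regularized_evolution(fitness, mutate, random_arch, cycles=1000,
                          pop_size=20, sample_size=5, rng=random.Random(0)):
    """Aging evolution: each cycle mutates the best of a random sample and
    evicts the OLDEST population member (the regularization), so even good
    models must keep re-proving themselves through their descendants."""
    population = deque(random_arch(rng) for _ in range(pop_size))
    history = list(population)
    for _ in range(cycles):
        sample = rng.sample(list(population), sample_size)
        child = mutate(max(sample, key=fitness), rng)
        population.append(child)
        population.popleft()          # age-based removal, not worst-based
        history.append(child)
    return max(history, key=fitness)

# Toy stand-ins for the architecture search space.
def ones(arch):
    return sum(arch)

def flip_one_bit(arch, rng):
    i = rng.randrange(len(arch))
    return [b ^ 1 if j == i else b for j, b in enumerate(arch)]

def rand_arch(rng):
    return [rng.randint(0, 1) for _ in range(8)]

best = regularized_evolution(ones, flip_one_bit, rand_arch)
```

Swapping the `popleft()` for removal of the lowest-fitness member recovers the non-regularized tournament baseline the paper compares against.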

deep-learning
evolutionary-algorithms
machine-learning
image-analysis
classification
performance-measure
to-write-about
nudge-targets
3 days ago

The cargo cult of versioning

3 days ago

In particular, Semantic Versioning is misguided, an attempt to fix something that is broken beyond repair. The correct way to practice semantic versioning is without any version strings at all, just Rich Hickey's directive: if you change behavior, rename. Or ok, keep a number around if you really need a security blanket. Either way, we programmers should be manually messing with version numbers a whole lot less. They're a holdover from the pre-version-control pre-internet days of shrink-wrapped software, vestigial in a world of pervasive connectivity and constant push updates. All version numbers do is provide cover for incompatibilities to hide under.

version-control
software-development-is-not-programming
representation
contingency
3 days ago

[1709.05601] Markov Brains: A Technical Introduction

3 days ago

Markov Brains are a class of evolvable artificial neural networks (ANN). They differ from conventional ANNs in many aspects, but the key difference is that instead of a layered architecture, with each node performing the same function, Markov Brains are networks built from individual computational components. These computational components interact with each other, receive inputs from sensors, and control motor outputs. The function of the computational components, their connections to each other, as well as connections to sensors and motors are all subject to evolutionary optimization. Here we describe in detail how a Markov Brain works, what techniques can be used to study them, and how they can be evolved.
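The update rule for such a network of components can be sketched in a few lines; the deterministic truth-table gate below is a simplification (real Markov Brains also allow probabilistic tables, and the wiring and tables themselves are what evolution optimizes), and the names are my own:

```python
def markov_brain_step(state, gates):
    """One update of a toy Markov Brain: each gate reads its input wires
    from the current state buffer, looks up its truth table, and writes to
    its output wire; multiple writers to the same wire are OR-ed together."""
    nxt = [0] * len(state)
    for inputs, output, table in gates:
        addr = 0
        for i in inputs:              # binary-encode the gate's input bits
            addr = (addr << 1) | state[i]
        nxt[output] |= table[addr]
    return nxt

# A single NAND gate reading cells 0 and 1, writing cell 2; in a full
# brain some cells would be sensor inputs and others motor outputs.
nand_gate = ((0, 1), 2, [1, 1, 1, 0])
```

Because each component has its own function and wiring, the architecture is a sparse heterogeneous network rather than the layered, homogeneous structure of a conventional ANN.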

representation
evolutionary-algorithms
dynamical-systems
emergent-design
to-write-about
to-invite-to-speak
hey-I-know-these-folks
3 days ago

Keep Anaconda from Constricting Your Homebrew Installs | Hashrocket

4 days ago

I recently installed Anaconda to my local machine and noticed that Anaconda modified my PATH variable. For Mac installations, if you check your .bash_profile, you should see something like the following:

software-development-is-not-programming
python
bash-stuff
4 days ago

Managing Dotfiles — davidculley.com

4 days ago

Have you ever asked yourself what the ~/.bash_profile, ~/.profile and ~/.bashrc files are good for? What do you do with them in the first place? Why are there so many of such files? And which one do you use in which case? If so, this blog post is for you.

software-development-is-not-programming
tutorial
4 days ago

Setting Up Your Python Environment on a Mac — davidculley.com

4 days ago

In this post I’m going to explain how to set up a working Python environment for Scientific Computing on macOS. That includes NumPy, SciPy, Matplotlib, etc. but also some more advanced tools and even a framework for Machine Intelligence and Deep Learning called TensorFlow.

Before we get started, I want to briefly give you a little bit of background so that you actually understand what you are doing and why you are doing it as opposed to simply copying code from some random site on the internet and wondering afterwards why something isn’t working on your specific machine.

python
homebrew-and-anaconda-sittin-in-a-tree
package-management
dammit
software-development-is-not-programming
4 days ago

Magic: The Gathering is Turing complete / Boing Boing

5 days ago

Alex Churchill has posted a way to implement a Turing complete computer within a game of Magic: The Gathering ("Turing complete" is a way of classifying a calculating engine that is capable of general-purpose computation). The profound and interesting thing about the recurrence of Turing completeness in many unexpected places -- such as page-layout descriptive engines -- is that it suggests that there's something foundational about the ability to do general computation. It also suggests that attempts to limit general computation will be complicated by the continued discovery of new potential computing engines. That is, even if you lock down all the PCs so that they only play restricted music formats and not Ogg, if you allow a sufficiently speedy and scriptable Magic: The Gathering program to exist, someone may implement the Ogg player using collectible card games.

computational-complexity
computer-science
amusing
proof
unconventional-representation-schemes
5 days ago

Dave Snowden at TedX: A succinct overview of his groundbreaking work | More Beyond

5 days ago

In less than 18 minutes, Dave manages to introduce complex systems theory; tell the children’s party story (3 mins 30 secs) and introduce a new theory of change based on the power of micro-narrative and vector measures enabled by Sensemaker (7 mins).

cynefin
TED-talk
emergent-design
engineering-philosophy
public-policy
hey-I-know-this-guy
5 days ago

App Extension Programming Guide: Custom Keyboard

10 days ago

Because a custom keyboard can draw only within the primary view of its UIInputViewController object, it cannot select text. Text selection is under the control of the app that is using the keyboard. If that app provides an editing menu interface (such as for Cut, Copy, and Paste), the keyboard has no access to it. A custom keyboard cannot offer inline autocorrection controls near the insertion point.

iOS
documentation
software-development
to-understand
10 days ago

[1802.02627] Going Deeper in Spiking Neural Networks: VGG and Residual Architectures

13 days ago

Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware. However, their application in machine learning has largely been limited to very shallow neural network architectures for simple problems. In this paper, we propose a novel algorithmic technique for generating an SNN with a deep architecture, and demonstrate its effectiveness on complex visual recognition problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and Residual network architectures, with significantly better accuracy than the state of the art. Finally, we present an analysis of the sparse event-driven computations to demonstrate reduced hardware overhead when operating in the spiking domain.

via:?
to-write-about
to-read
neural-networks
representation
models-and-modes
machine-learning
simulation
13 days ago

[1709.08119] Analogies between the crossing number and the tangle crossing number

19 days ago

Tanglegrams are special graphs that consist of a pair of rooted binary trees with the same number of leaves, and a perfect matching between the two leaf-sets. These objects are of use in phylogenetics and are represented with straight-line drawings where the leaves of the two plane binary trees are on two parallel lines and only the matching edges can cross. The tangle crossing number of a tanglegram is the minimum crossing number over all such drawings and is related to biologically relevant quantities, such as the number of times a parasite switched hosts.

Our main results for tanglegrams, which parallel known theorems for crossing numbers, are as follows. The removal of a single matching edge in a tanglegram with n leaves decreases the tangle crossing number by at most n−3, and this is sharp. Additionally, if γ(n) is the maximum tangle crossing number of a tanglegram with n leaves, we prove ½(n choose 2)(1−o(1)) ≤ γ(n) < ½(n choose 2). Further, we provide an algorithm for computing non-trivial lower bounds on the tangle crossing number in O(n^4) time. This lower bound may be tight, even for tanglegrams with tangle crossing number Θ(n^2).
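With both leaf orders fixed, the crossings contributed by the matching are exactly the inversions of a permutation, which makes the quantity being minimized easy to state (the hard part, minimizing over all plane embeddings of the two trees, is not attempted in this sketch):

```python
def matching_crossings(perm):
    """Crossings among the matching edges of a tanglegram drawing with
    fixed leaf orders: edge i joins left leaf i to right leaf perm[i], and
    two edges cross iff they form an inversion, so the crossing count is
    the inversion number of perm. The tangle crossing number is the
    minimum of this over all plane embeddings of the two trees."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if perm[i] > perm[j])
```

The identity-matching tanglegram has zero crossings, while the reversal achieves the (n choose 2) worst case that the γ(n) bounds above are measured against.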

graph-theory
feature-construction
phylogenetics
metrics
distance-measures
algorithms
rather-interesting
computational-complexity
nudge-targets
19 days ago

[1710.03720] Practical Integer Overflow Prevention

19 days ago

Integer overflows in commodity software are a main source for software bugs, which can result in exploitable memory corruption vulnerabilities and may eventually contribute to powerful software-based exploits, i.e., code reuse attacks (CRAs).

In this paper, we present IntGuard, a tool that can repair integer overflows with high-quality source code repairs. Specifically, given the source code of a program, IntGuard first discovers the location of an integer overflow error by using static source code analysis and satisfiability modulo theories (SMT) solving. IntGuard then generates integer multi-precision code repairs based on modular manipulation of SMT constraints as well as an extensible set of customizable code repair patterns.

We have implemented and evaluated IntGuard with 2052 C programs (approx. 1 Mil. LOC) available in the currently largest open-source test suite for C/C++ programs and with a benchmark containing large and complex programs. The evaluation results show that IntGuard can repair programs precisely (i.e., without introducing false-positive repairs), with low computational and runtime overhead and very small binary and source-code blow-up. In a controlled experiment, we show that IntGuard is more time-effective and achieves a higher repair success rate than manually generated code repairs.
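IntGuard's actual repairs are C source transformations derived from SMT constraints; as a language-neutral sketch of the repair pattern, the following contrasts silent 32-bit wraparound with a "detect and widen to multi-precision" fallback (Python's arbitrary-precision ints stand in for the wider integer type a real repair would use).

```python
INT_MAX = 2**31 - 1
INT_MIN = -2**31

def unchecked_add32(a, b):
    # mimics signed 32-bit wraparound (undefined behaviour in real C)
    s = (a + b) & 0xFFFFFFFF
    return s - 2**32 if s > INT_MAX else s

def repaired_add32(a, b):
    # repair pattern: compute in wider arithmetic and flag the overflow
    s = a + b  # Python ints are multi-precision
    overflowed = s > INT_MAX or s < INT_MIN
    return s, overflowed

assert unchecked_add32(INT_MAX, 1) == INT_MIN   # silent wraparound
assert repaired_add32(INT_MAX, 1) == (2**31, True)  # widened value, flagged
```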

software-engineering
testing
security
rather-interesting
nudge-targets
consider:stress-testing
19 days ago

The Transformation of Gender in English-Language Fiction | hc:18127 | Humanities CORE

19 days ago

Preprint to appear in a special issue of Cultural Analytics on "Identity." The article explores the paradox that the representation of gender in fiction became more flexible while the sheer balance of attention between fictional men and women was growing more unequal. We measure the rigidity of gendered roles by asking how easy it is to infer grammatical gender from ostensibly ungendered words used in characterization. In the nineteenth century, roles are so predictable that the inference is easy; it becomes harder as we move toward the present. But the diminishing power of stereotypes does not parallel progress toward equality of representation. On the contrary, by the middle of the twentieth century, women have lost almost half the space they occupied in nineteenth-century fiction. The tension between growing flexibility and growing inequality of representation presents literary historians with a striking paradox; a few potential explanations are considered.

gender
literary-criticism
distant-reading
digital-humanities
rather-interesting
19 days ago

Chimamanda Adichie: The daughter of postcolonial theory | France | Al Jazeera

19 days ago

But postcolonial artists and theorists alike face an intractable challenge: the burden of representation, which American literary critic Henry Louis Gates Jr defines as "that homely notion that you represent your race, thus that your actions can betray or honor it".

While Gates Jr has in mind the eight remarkable Black men he profiles in Thirteen Ways of Looking at a Black Man, his concept resonates with the literary world. Because the global literary marketplace can only celebrate a few writers of colour at a time, such writers become laden with the responsibility of representing their people.

The stakes are high. Under this pressure, there is little room for decontextualised humour. The risks of erasure of entire intellectual histories and hard-earned victories are real. Perhaps the lesson is not that we should joke less.

If we are to dismantle the inequalities that limit the possibilities of art and ideas from the postcolonial world, the lesson is clear: we should all embrace postcolonial thought.

literary-criticism
colonialism
theory
philosophy-in-practice
to-write-about
19 days ago

Fine Parallel Processing Using a Work Queue | Kubernetes

20 days ago

In this example, we will run a Kubernetes Job with multiple parallel worker processes. You may want to be familiar with the basic, non-parallel use of Job first.

In this example, as each pod is created, it picks up one unit of work from a task queue, processes it, and repeats until the end of the queue is reached.
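The in-cluster example backs the task queue with a service such as Redis; the per-pod loop itself, pull one item, process it, repeat until the queue is empty, can be sketched locally with Python's standard library (the squaring step is a stand-in for real work):

```python
import queue
import threading

def worker(tasks, results):
    # each worker repeatedly takes one unit of work until the queue is
    # empty, mirroring the per-pod loop in the Kubernetes example
    while True:
        try:
            item = tasks.get_nowait()
        except queue.Empty:
            return
        results.append(item * item)  # stand-in for real processing
        tasks.task_done()

tasks = queue.Queue()
for i in range(10):
    tasks.put(i)

results = []
threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(results) == [i * i for i in range(10)]
```

In the Kubernetes version the three threads become three pods of one Job, and completion is signaled by the queue draining rather than by `join()`.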

kubernetes
architecture
distributed-processing
to-learn
nudge
20 days ago

Input Variables - Terraform by HashiCorp

20 days ago

You now have enough Terraform knowledge to create useful configurations, but we're still hard-coding access keys, AMIs, etc. To become truly shareable and version controlled, we need to parameterize the configurations. This page introduces input variables as a way to do this.

terraform
cloud-computing
documentation
to-understand
setup
nudge
20 days ago

Open Conference Systems | Public Knowledge Project

20 days ago

Open Conference Systems (OCS) is a free Web publishing tool that will create a complete Web presence for your scholarly conference. OCS will allow you to:

create a conference Web site

compose and send a call for papers

electronically accept paper and abstract submissions

allow paper submitters to edit their work

post conference proceedings and papers in a searchable format

post, if you wish, the original data sets

register participants

integrate post-conference online discussions

open-source
conferences
rather-interesting
infrastructure
disintermediation-in-action
to-write-about
to-understand
20 days ago

Conference Hosting | PKP Publishing Services

20 days ago

Interested in using Open Conference Systems (OCS) for your upcoming conference or event? We can provide a permanent home for recurring conferences or give you a short-term space for a one-time event.

For an annual fee we will:

install OCS on our servers

get you started with initial setup and training support

provide daily backups (seven days incremental, other regimens available for an additional fee)

upgrade the software

ensure that your content is highly visible on the web.

Leave the technical details to us, and get on with your event!

open-access
hosted-solutions
academic-culture
rather-interesting
to-understand
maybe-not-but-still
publishing
20 days ago

Open Journal Systems | Public Knowledge Project

20 days ago

Open Journal Systems (OJS) is a journal management and publishing system that has been developed by the Public Knowledge Project through its federally funded efforts to expand and improve access to research.

open-access
publishing
academia
rather-interesting
to-understand
to-write-about
20 days ago

About the Journal

20 days ago

fully open-access journal with explicit integration into distributed scholarly archives and libraries

computational-geometry
open-access
publishing
to-write-about
to-do
20 days ago

[1404.6520] How to partition diversity

20 days ago

Diversity measurement underpins the study of biological systems, but measures used vary across disciplines. Despite their common use and broad utility, no unified framework has emerged for measuring, comparing and partitioning diversity. The introduction of information theory into diversity measurement has laid the foundations, but the framework is incomplete without the ability to partition diversity, which is central to fundamental questions across the life sciences: How do we prioritise communities for conservation? How do we identify reservoirs and sources of pathogenic organisms? How do we measure ecological disturbance arising from climate change?

The lack of a common framework means that diversity measures from different fields have conflicting fundamental properties, allowing conclusions reached to depend on the measure chosen. This conflict is unnecessary and unhelpful. A mathematically consistent framework would transform disparate fields by delivering scientific insights in a common language. It would also allow the transfer of theoretical and practical developments between fields.

We meet this need, providing a versatile unified framework for partitioning biological diversity. It encompasses any kind of similarity between individuals, from functional to genetic, allowing comparisons between qualitatively different kinds of diversity. Where existing partitioning measures aggregate information across the whole population, our approach permits the direct comparison of subcommunities, allowing us to pinpoint distinct, diverse or representative subcommunities and investigate population substructure. The framework is provided as a ready-to-use R package to easily test our approach.
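The paper's framework is similarity-sensitive and supports subcommunity comparison; as background, the similarity-free starting point it generalizes is the family of Hill numbers ("effective number of types" of order q), which a short sketch can compute:

```python
import math

def hill_diversity(p, q):
    """Effective number of types of order q for a probability vector p.
    q=0 is richness, q -> 1 gives exp(Shannon entropy), q=2 is inverse
    Simpson. (The paper's framework adds inter-type similarity on top.)"""
    p = [x for x in p if x > 0]
    if abs(q - 1) < 1e-12:
        return math.exp(-sum(x * math.log(x) for x in p))
    return sum(x ** q for x in p) ** (1 / (1 - q))

# a community of 4 equally common species has effective diversity 4
# at every order q -- the defining property of these measures
uniform = [0.25] * 4
for q in (0, 1, 2):
    assert abs(hill_diversity(uniform, q) - 4) < 1e-9
```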

diversity
population-biology
bioinformatics
philosophy-of-science
algorithms
statistics
rather-interesting
to-write-about
to-understand
consider:genetic-programming
define-your-terms
20 days ago

Adactio: Journal—GDPR and Google Analytics

20 days ago

In the second scenario, you don’t do anything differently than before—you just add that single line of JavaScript. But now that script would need to launch the interface for getting consent before doing any tracking. Google Analytics would go from being something invisible to something that directly impacts the user experience of your site.

I’m just using Google Analytics as an example here because it’s so widespread. This also applies to third-party sharing buttons—Twitter, Facebook, etc.—and of course, advertising.

In the case of advertising, it gets even thornier because quite often, the site owner has no idea which third party is about to do the tracking. Many, many sites use intermediary services (y’know, ‘cause bloated ad scripts aren’t slowing down sites enough so let’s throw some just-in-time bidding into the mix too). You could get consent for the intermediary service, but not for the final advert—neither you nor your site’s user would have any idea what they were consenting to.

Interesting times. One way or another, a massive amount of the web—every website using Google Analytics, embedded YouTube videos, Facebook comments, embedded tweets, or third-party advertisements—will be liable under GDPR.

analytics
privacy
law
to-understand
web-design
20 days ago

Author Summary: Political Stability in the Open Society – American Journal of Political Science

20 days ago

Given these challenges, we distinguish two different kinds of stability that apply at different levels of social organization. The first kind of stability applies to constitutional rules that set out the general legal rules within which our lower-level institutional rules operate. These constitutional rules must remain in equilibrium despite challenges and threats in order to preserve the social conditions that foster experimentation. But we reject a similar form of stability for lower-level legal and institutional rules. Experimentation at that level can be productive in ways that constitutional experimentation is not. Instead, lower-level legal and institutional rules need to be robust in the sense that, when challenged, old rules can be replaced by stable new rules without undermining the system of rules as a whole.

philosophy
social-norms
public-policy
cultural-dynamics
political-economy
to-read
20 days ago

The gig economy and the future of work — Crooked Timber

20 days ago

Full realization of the benefits of technological progress would require a situation where paid work is a free choice rather than, as at present, either a necessity or (in situations of high unemployment) an impossibility. The policies required to achieve this goal should include a Jobs Guarantee and some form of Participation Income. In the longer term, these could be extended to an unconditional Guaranteed Minimum Income or Universal Basic Income.

political-economy
essay
public-policy
20 days ago

Breakdown - Futility Closet

20 days ago

These are called Borwein integrals, after David and Jonathan Borwein, the father-and-son mathematicians who first presented them in 2001.

Engineer Hanspeter Schmid writes, “[W]hen this fact was recently verified by a researcher using a computer algebra package, he concluded that there must be a ‘bug’ in the software. It is not a bug, though; this series of integrals really only results in π/2 up to a certain point, and then breaks down. This astonishes most mathematically educated readers, as especially those readers mentally extrapolate the sequence shown above and find it surprising that something fundamental should change when the factor sinc(x/15) is introduced.” He gives a graphic explanation of what’s happening.
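The breakdown has a clean arithmetic criterion (due to the Borweins): the integral of sinc(x)·sinc(x/3)···sinc(x/(2n+1)) over [0, ∞) equals π/2 exactly as long as 1/3 + 1/5 + … + 1/(2n+1) stays below 1, and drops slightly below π/2 once that sum exceeds 1, which first happens when the factor sinc(x/15) enters. The criterion can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

def reciprocal_sum(last_odd):
    # exact sum 1/3 + 1/5 + ... + 1/last_odd over odd denominators
    return sum(Fraction(1, k) for k in range(3, last_odd + 1, 2))

assert reciprocal_sum(13) < 1   # ...·sinc(x/13): integral is still exactly π/2
assert reciprocal_sum(15) > 1   # adding sinc(x/15): the π/2 pattern breaks
```

The deviation at n = 15 is tiny (on the order of 10⁻¹¹), which is why it looks like a floating-point "bug" rather than a genuine change in the answer.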

mathematical-recreations
rather-interesting
misleading-trends
to-write-about
20 days ago

World Wide Wanderings

21 days ago

Over the years I have worked on many different research and computing projects, all over the world. Most of my scientific work is related to the origin of life, evolution, and complex systems & emergence. Below is a brief overview of the main projects that I am currently working on. Preprints of my publications on this work are available for download (see the list of publications).

hey-I-know-this-guy
science
complexology
to-write-about
21 days ago

BDD Done easy – The three levels of BDD | The IT Risk Manager

21 days ago

The easy way to do BDD is “Working Backwards” (1). Sadly “Working Backwards” is a principle that few people are aware of and even fewer practice.

software-development-is-not-programming
BDD
agile-practices
philosophy-of-engineering
the-mangle-in-practice
to-write-about
21 days ago

Highly composite number - Wikipedia

21 days ago

A highly composite number (or anti-prime[1]) is a positive integer with more divisors than any smaller positive integer has. The term was coined by Ramanujan (1915). However, Jean-Pierre Kahane has suggested that the concept might have been known to Plato, who set 5040 as the ideal number of citizens in a city as 5040 has more divisors than any number less than it.[2]

The related concept of largely composite number refers to a positive integer which has at least as many divisors as any smaller positive integer.
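The definition translates directly into a brute-force search: keep every n whose divisor count strictly beats that of all smaller integers.

```python
def divisor_count(n):
    # count divisors by trial division up to sqrt(n)
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i * i == n else 2
        i += 1
    return count

def highly_composite_up_to(limit):
    # n is highly composite iff its divisor count exceeds every smaller n's
    best, result = 0, []
    for n in range(1, limit + 1):
        d = divisor_count(n)
        if d > best:
            best = d
            result.append(n)
    return result

hcn = highly_composite_up_to(5040)
assert hcn[:8] == [1, 2, 4, 6, 12, 24, 36, 48]
assert hcn[-1] == 5040 and divisor_count(5040) == 60  # Plato's 5040
```

Swapping the strict `>` for `>=` in the comparison yields the largely composite numbers instead.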

number-theory
nudge-targets
to-write-about
consider:classification
21 days ago

Creating Intricate Art with Neural Style Transfer – Becoming Human: Artificial Intelligence Magazine

21 days ago

The architecture is based on Gatys’ style transfer algorithm with a few minor modifications. In this case, the content image is a silhouette and style image can be any pattern (ranging from simple black and white doodle to more complex color mosaics). The code also contains a module to invert and create a mask based on the content image, which will eventually be applied to the generated pattern.

generative-art
generative-models
style-transfer
computational-recreations
to-write-about
to-do
nudge-targets
consider:representation
21 days ago

[1711.10485] AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks

21 days ago

In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attention to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. For the first time, it shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.
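AttnGAN's attention operates on learned feature maps with a trained similarity model; the underlying arithmetic, score each word against a subregion query, softmax the scores, and form a weighted word-context vector, can be sketched with plain lists (all vectors here are illustrative, not the model's actual features):

```python
import math

def attend(query, word_vecs):
    """Soft word-level attention: dot-product scores between one subregion
    query and each word vector, softmax into weights, then return the
    weights and the weighted word-context vector."""
    scores = [sum(q * w for q, w in zip(query, vec)) for vec in word_vecs]
    m = max(scores)                              # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    context = [sum(w * vec[i] for w, vec in zip(weights, word_vecs))
               for i in range(len(query))]
    return weights, context

# a query aligned with the second word attends mostly to that word
weights, _ = attend([0.0, 5.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
assert max(range(3), key=lambda i: weights[i]) == 1
```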

deep-learning
generative-models
adversarial-models
coevolution
machine-learning
representation
algorithms
to-write-about
21 days ago

[no title]

21 days ago

This paper is an attempt to explain all the matrix calculus you need in order to understand the training of deep neural networks. We assume no math knowledge beyond what you learned in calculus 1, and provide links to help you refresh the necessary math where needed. Note that you do not need to understand this material before you start learning to train and use deep learning in practice; rather, this material is for those who are already familiar with the basics of neural networks, and wish to deepen their understanding of the underlying math. Don't worry if you get stuck at some point along the way---just go back and reread the previous section, and try writing down and working through some examples. And if you're still stuck, we're happy to answer your questions in the Theory category at forums.fast.ai. Note: There is a reference section at the end of the paper summarizing all the key matrix calculus rules and terminology discussed here.

via:numerous
matrices
mathematics
review
calculus
deep-learning
21 days ago

[1604.02181] A Unified Framework for Sparse Non-Negative Least Squares using Multiplicative Updates and the Non-Negative Matrix Factorization Problem

21 days ago

We study the sparse non-negative least squares (S-NNLS) problem. S-NNLS occurs naturally in a wide variety of applications where an unknown, non-negative quantity must be recovered from linear measurements. We present a unified framework for S-NNLS based on a rectified power exponential scale mixture prior on the sparse codes. We show that the proposed framework encompasses a large class of S-NNLS algorithms and provide a computationally efficient inference procedure based on multiplicative update rules. Such update rules are convenient for solving large sets of S-NNLS problems simultaneously, which is required in contexts like sparse non-negative matrix factorization (S-NMF). We provide theoretical justification for the proposed approach by showing that the local minima of the objective function being optimized are sparse and the S-NNLS algorithms presented are guaranteed to converge to a set of stationary points of the objective function. We then extend our framework to S-NMF, showing that our framework leads to many well known S-NMF algorithms under specific choices of prior and providing a guarantee that a popular subclass of the proposed algorithms converges to a set of stationary points of the objective function. Finally, we study the performance of the proposed approaches on synthetic and real-world data.
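The multiplicative updates in question descend from the classic Lee–Seung form; for plain NNLS (no sparsity prior, which the paper adds on top) the update x_i ← x_i · (Aᵀb)_i / (AᵀAx)_i keeps every iterate non-negative automatically. A dependency-free sketch:

```python
def nnls_multiplicative(A, b, iters=500):
    """Multiplicative update for min ||Ax - b||^2 subject to x >= 0:
    x_i <- x_i * (A^T b)_i / (A^T A x)_i. Non-negativity is preserved
    because every factor is non-negative. (The paper's S-NNLS updates
    additionally include a sparsity-inducing prior term.)"""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        num = [sum(A[i][j] * b[i] for i in range(m)) for j in range(n)]
        den = [sum(A[i][j] * Ax[i] for i in range(m)) for j in range(n)]
        x = [x[j] * num[j] / (den[j] + eps) for j in range(n)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]            # exactly consistent with x = (1, 2)
x = nnls_multiplicative(A, b)
assert all(xi >= 0 for xi in x)
assert abs(x[0] - 1.0) < 1e-3 and abs(x[1] - 2.0) < 1e-3
```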

matrices
inverse-problems
algorithms
rather-interesting
nudge-targets
performance-measure
to-write-about
21 days ago

End Times Philosophy Interviews: The First 302 - 3:AM Magazine

21 days ago

end times philosophy interviews: the first 302

philosophy
interviews
to-read
21 days ago

Theorem of the Day

21 days ago

The links below will bring you (in a new window) a superb collection of sites, blogs, online magazines etc., covering large parts of mathematics. Some subject-specific links are here.

resources
mathematical-recreations
mathematics
to-read
21 days ago

The (Math) Problem With Pentagons | Quanta Magazine

21 days ago

To understand the problem with pentagons, let’s start with one of the simplest and most elegant of geometric structures: the regular tilings of the plane. These are arrangements of regular polygons that cover flat space entirely and perfectly, with no overlap and no gaps. Here are the familiar triangular, square and hexagonal tilings. We find them in floors, walls and honeycombs, and we use them to pack, organize and build things more efficiently.
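Why exactly triangles, squares and hexagons? Copies of one regular n-gon can tile the plane only if its interior angle, 180(n − 2)/n degrees, divides 360 evenly so that whole copies meet around each vertex. That check is two lines:

```python
def interior_angle(n):
    # interior angle of a regular n-gon, in degrees
    return 180.0 * (n - 2) / n

def tiles_plane_regularly(n):
    # whole copies of a regular n-gon fit around a vertex
    # iff the interior angle divides 360 degrees evenly
    return (360.0 / interior_angle(n)).is_integer()

# only the triangle, square and hexagon pass -- the pentagon's 108-degree
# angle leaves a gap, which is where the pentagon-tiling problem begins
assert [n for n in range(3, 13) if tiles_plane_regularly(n)] == [3, 4, 6]
```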

tiling
to-write-about
mathematical-recreations
the-mangle-in-practice
define-your-terms
rather-interesting
21 days ago

Concrete examples for utopian ideals: how the Sharing Cities movement is paving the way | P2P Foundation

21 days ago

Humanitas, the very first case study is a wonderful example.This Netherlands-based project has proven how intergenerational living works. This is an elder-care residence that also provides housing to students and young people. The exchange for living there is 30 hours of their time per month, engaging with the elderly community. The project has been so successful they have set up in multiple locations. This is a very encouraging concept when we think of the baby-boom generation globally ageing along with the housing-crisis the millennial generation is going through. This is a model for institutions to copy or for individuals to replicate.

utopianism
sharing
social-norms
social-engineering
rather-interesting
to-write-about
political-economy
21 days ago

[1710.00709] Synthesising Evolutionarily Stable Normative Systems

22 days ago

Within the area of multi-agent systems, normative systems are a widely used framework for the coordination of interdependent activities. A crucial problem associated with normative systems is that of synthesising norms that effectively accomplish a coordination task and whose compliance forms a rational choice for the agents within the system. In this work, we introduce a framework for the synthesis of normative systems that effectively coordinate a multi-agent system and whose norms are likely to be adopted by rational agents. Our approach is rooted in evolutionary game theory. Our framework considers multi-agent systems in which evolutionary forces lead successful norms to prosper and spread within the agent population, while unsuccessful norms are discarded. The outputs of this evolutionary norm synthesis process are normative systems whose compliance forms a rational choice for the agents. We show the effectiveness of our approach through empirical evaluation in a simulated traffic domain.
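The selection mechanism described, successful norms spread while unsuccessful ones are discarded, is the replicator dynamic of evolutionary game theory. The paper's synthesis machinery is much richer; this sketch only shows the evolutionary core, with a made-up two-norm payoff matrix in which norm 0 strictly dominates:

```python
def replicator_step(freqs, payoff, dt=0.1):
    """Discrete-time replicator update: norms with above-average payoff
    grow in frequency, below-average norms decline; frequencies are then
    renormalized to sum to 1."""
    n = len(freqs)
    fitness = [sum(payoff[i][j] * freqs[j] for j in range(n)) for i in range(n)]
    avg = sum(f * x for f, x in zip(fitness, freqs))
    new = [x * (1 + dt * (f - avg)) for x, f in zip(freqs, fitness)]
    z = sum(new)
    return [x / z for x in new]

# norm 0 yields strictly higher payoff in every match-up, so evolutionary
# forces should drive the population toward it
payoff = [[3.0, 3.0], [1.0, 1.0]]
freqs = [0.5, 0.5]
for _ in range(300):
    freqs = replicator_step(freqs, payoff)
assert freqs[0] > 0.99
```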

evolutionary-dynamics
game-theory
rather-interesting
meta-simulation
performance-measure
inverse-problems
to-write-about
consider:looking-to-see
22 days ago

[1710.04211] StackSeq2Seq: Dual Encoder Seq2Seq Recurrent Networks

22 days ago

A widely studied non-deterministic polynomial time (NP) hard problem lies in finding a route between the two nodes of a graph. Often meta-heuristics algorithms such as A∗ are employed on graphs with a large number of nodes. Here, we propose a deep recurrent neural network architecture based on the Sequence-2-Sequence (Seq2Seq) model, widely used, for instance in text translation. Particularly, we illustrate that utilising a context vector that has been learned from two different recurrent networks enables increased accuracies in learning the shortest route of a graph. Additionally, we show that one can boost the performance of the Seq2Seq network by smoothing the loss function using a homotopy continuation of the decoder's loss function.

neural-networks
machine-learning
algorithms
rather-interesting
operations-research
optimization
heuristics
to-write-about
nudge-targets
22 days ago

[1710.04372] Wealth and Identity: The dynamics of dual segregation

22 days ago

We extend our model of wealth segregation to incorporate migration and study the tendencies towards dual segregation - segregation due to identity (migrants vs. residents) and segregation due to wealth. We find a sharp, non-linear transformation between segregated and mixed-wealth states as neighborhood wealth thresholds become less stringent, allowing agents to move into neighborhoods they cannot afford. The number of such moves required for the onset of this transformation varies inversely with the likelihood of agents willing to move into less wealthy neighborhoods. We also find that this sharp transformation from segregated to mixed wealth states is simultaneously accompanied by a corresponding non-linear transformation from a less identity-segregated to a highly identity-segregated state. We argue that the decrease in wealth segregation does not merely accompany, but in fact drives the increase in identity-based segregation. This implies that lowering wealth segregation necessarily exacerbates identity segregation and that therefore, this trade-off could be an important consideration in designing urban policy. Additionally, we find that the time taken for the tolerance levels of residents to be breached is significantly shorter under rapid migration. This rapidity of breach of tolerance could potentially underpin the intensity of resident responses to sharp bursts of migration.

social-dynamics
agent-based
simulation
evolutionary-economics
to-write-about
consider:making-one
22 days ago

[1611.03144] Therapeutic target discovery using Boolean network attractors: improvements of kali

22 days ago

In a previous article, an algorithm for identifying therapeutic targets in Boolean networks modeling pathological mechanisms was introduced. In the present article, the improvements made on this algorithm, named kali, are described. These improvements are i) the possibility to work on asynchronous Boolean networks, ii) a finer assessment of therapeutic targets and iii) the possibility to use multivalued logic. kali assumes that the attractors of a dynamical system, such as a Boolean network, are associated with the phenotypes of the modeled biological system. Given a logic-based model of pathological mechanisms, kali searches for therapeutic targets able to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing their likeliness. kali is illustrated on an example network and used on a biological case study. The case study is a published logic-based model of bladder tumorigenesis from which kali returns consistent results. However, like any computational tool, kali can predict but cannot replace human expertise: it is a supporting tool for coping with the complexity of biological systems in the field of drug discovery.
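The attractors kali reasons about are the terminal cycles of the network's state-transition graph. For a synchronous network small enough to enumerate, they can be found by iterating the update map from every state until the trajectory revisits itself (this is the brute-force baseline, not kali's algorithm, which also handles asynchronous and multivalued updates):

```python
from itertools import product

def attractors(update, n):
    """Enumerate attractors of a synchronous Boolean network on n nodes
    by iterating the update map from every state until the trajectory
    cycles, then canonicalizing each cycle by rotation."""
    found = set()
    for state in product((0, 1), repeat=n):
        seen = []
        while state not in seen:
            seen.append(state)
            state = update(state)
        cycle = seen[seen.index(state):]
        k = cycle.index(min(cycle))          # rotate smallest state to front
        found.add(tuple(cycle[k:] + cycle[:k]))
    return found

# toy 2-node network x' = y, y' = x: two fixed points and one 2-cycle
update = lambda s: (s[1], s[0])
atts = attractors(update, 2)
assert ((0, 0),) in atts and ((1, 1),) in atts
assert ((0, 1), (1, 0)) in atts
assert len(atts) == 3
```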

reaction-networks
boolean-networks
Kauffmania
bioinformatics
systems-biology
rather-interesting
nudge-targets
consider:looking-to-see
22 days ago

What is Category Theory Anyway? — Math3ma

22 days ago

A quick browse through my Twitter or Instagram accounts, and you might guess that I've had category theory on my mind. You'd be right, too! So I have a few category-theory themed posts lined up for this semester, and to start off, I'd like to (attempt to) answer the question, What is category theory, anyway? for anyone who may not be familiar with the subject.

Now rather than give you a list of definitions--which are easy enough to find and may feel a bit unmotivated at first--I thought it would be nice to tell you what category theory is in the grand scheme of (mathematical) things. You see, it's very different than other branches of math. Rather than being another sibling lined up in the family photograph, it's more like a common gene that unites them in the first place.

Let me explain.

category-theory
introduction
rather-interesting
to-read
to-write-about
to-emulate
22 days ago

More Secrets of the Associahedra | The n-Category Café

22 days ago

The associahedra are wonderful things discovered by Jim Stasheff around 1963 but even earlier by Dov Tamari in his thesis. They hold the keys to understanding ‘associativity up to coherent homotopy’ in exquisite combinatorial detail.

But do they still hold more secrets? I think so!

group-theory
mathematical-recreations
open-questions
symmetry
to-write-about
to-understand
consider:visual-algorithms
22 days ago

Maverick Solitaire

22 days ago

An early form of Poker solitaire is actually a puzzle of sorts, one which has been called Maverick solitaire (after its appearance on the 1950's/1960's Western T.V. show Maverick, in the first season episode Rope of Cards). Twenty five cards are dealt from a shuffled 52-card deck. The object is to divide the 25 cards into five groups of five cards so that each is a pat hand in draw poker (a hand which does not need to be drawn to). Maverick Solitaire is well-known as a sucker bet, as the probability of success with a random group of 25 cards would seem low, but is actually quite high: the eponymous author of Maverick's Guide to Poker estimates the odds to be at least 98 percent (he disallows four of a kind). This is remarkably accurate: Mark Masten's computer solver, allowing four of a kind, solved about 98.1 percent of a random set of 1000 deals. Deals with unique solutions are even less common than impossible ones: the sample above had 19 impossible deals and only 8 with unique solutions.

mathematical-recreations
cards
nudge-targets
consider:representation
consider:looking-to-see
22 days ago

A Town that Starts with “L” in North Carolina | radical eyes for equity

24 days ago

I thought of how this is what we come to, after almost 8 decades of life—four brief paragraphs in a newspaper and loved ones who have your town of birth narrowed down to a town that starts with “L” in NC.

life
the-stories-oft-told
been-there
personal
24 days ago

Riddled: Onan, what is best in life? To crush your enemies. See them driven before you. And to see the retractions of their papers

26 days ago

Evidently Sharma & Madhuri (or their sympathisers) do not want anyone to read their papers. Understandably so. They have the consolation of knowing that at least the editors & peer reviewers never read them.

oof
academic-culture
publishing
retractions
bad-science
26 days ago

Why Authoritarians Attack the Arts - The New York Times

26 days ago

In 1937, ascending leaders of the Third Reich hosted two art exhibitions in Munich. One, the “Great German Art Exhibition,” featured art Adolf Hitler deemed acceptable and reflective of an ideal Aryan society: representational, featuring blond people in heroic poses and pastoral landscapes of the German countryside. The other featured what Hitler and his followers referred to as “degenerate art”: work that was modern or abstract, and art produced by people disavowed by Nazis — Jewish people, Communists, or those suspected of being one or the other. The “degenerate art” was presented in chaos and disarray, accompanied by derogatory labels, graffiti and catalog entries describing “the sick brains of those who wielded the brush or pencil.” Hitler and those close to him strictly controlled how artists lived and worked in Nazi Germany, because they understood that art could play a key role in the rise or fall of their dictatorship and the realization of their vision for Germany’s future.

“Degenerate Art,” a Nazi-curated exhibition, at the Haus der Kunst in Berlin, February 1938. Credit Reuters

Last month, the Trump administration proposed a national budget that includes the elimination of the National Endowment for the Arts. The NEA operates with a budget of about $150 million a year. As critics have observed, this amount is about 0.004 percent of the federal budget, making the move a fairly inefficient approach to trimming government spending. Many Americans have been protesting the cuts by pointing out the many ways that art enriches our lives — as they should. The arts bring us joy and entertainment; they can offer a reprieve from the trials of life or a way to understand them.

But as Hitler understood, artists play a distinctive role in challenging authoritarianism. Art creates pathways for subversion, for political understanding and solidarity among coalition builders. Art teaches us that lives other than our own have value. Like the proverbial court jester who can openly mock the king in his own court, artists who occupy marginalized social positions can use their art to challenge structures of power in ways that would otherwise be dangerous or impossible.

art
cultural-norms
criticism
fascism
to-write-about
26 days ago

languagehat.com : Decoding the Khipus.

27 days ago

This is why we can’t decide in advance who’s allowed to work on what, and I very much look forward to seeing what further discoveries are made! (Khipus, or quipus, previously on LH.)

research
basic-science
anthropology
liberal-arts
insight
to-write-about
27 days ago

[1709.08880] An enhanced method to compute the similarity between concepts of ontology

27 days ago

With the use of ontologies in several domains such as the semantic web, information retrieval, and artificial intelligence, similarity measuring has become a very important domain of research. Therefore, in the current paper, we propose a similarity-measuring method that uses Dijkstra's algorithm to define and compute the shortest path. We then use this path to compute the semantic distance between two concepts defined in the same hierarchy of an ontology, and base the semantic similarity on this result. Finally, we present an experimental comparison between our method and other similarity-measuring methods.
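The pipeline described above (shortest path → semantic distance → similarity) is straightforward to sketch. The abstract does not give the paper's exact similarity formula, so the `1 / (1 + d)` mapping below is a common stand-in, and the toy hierarchy is illustrative:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: edge_weight}}; returns shortest distances from source
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def similarity(graph, a, b):
    # map semantic distance (shortest-path length) to a similarity in (0, 1]
    d = dijkstra(graph, a).get(b, float("inf"))
    return 1.0 / (1.0 + d)

# a tiny undirected concept hierarchy (edges listed in both directions)
hierarchy = {
    "animal": {"dog": 1, "cat": 1},
    "dog": {"animal": 1},
    "cat": {"animal": 1},
}
```

Here `similarity(hierarchy, "dog", "cat")` traverses dog → animal → cat, giving distance 2 and similarity 1/3.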

metrics
information-architecture
data-structures
algorithms
classification
semantic-web
27 days ago

[1712.02657] Generalised primal-dual grids for unstructured co-volume schemes

27 days ago

The generation of high-quality staggered unstructured grids for computational simulation is considered, leading to the development of a new optimisation-based strategy for the construction of weighted `Regular-Power' tessellations appropriate for co-volume type numerical techniques. This new framework aims to extend the conventional Delaunay-Voronoi primal-dual structure, seeking to assemble generalised orthogonal tessellations with enhanced geometric quality. The construction of these grids is motivated by the desire to improve the performance and accuracy of numerical methods based on unstructured co-volume type schemes, including various staggered grid techniques for the simulation of fluid dynamics and hyperbolic transport. In this study, a hybrid optimisation strategy is proposed, seeking to optimise the geometry, topology and weights associated with general, two-dimensional Regular-Power tessellations using a combination of gradient-ascent and energy-based techniques. The performance of the new method is tested experimentally, with a range of complex, multi-resolution primal-dual grids generated for various coastal and regional ocean modelling applications.

finite-elements-methods
computational-geometry
algorithms
performance-measure
rather-interesting
to-write-about
to-simulate
27 days ago

Lessons from Optics, The Other Deep Learning – arg min blog

27 days ago

Imagine you’re an engineer, you’re given this net, and you’re asked to make it work better on a dataset. You might presume each of these layers is there for a reason. But as a field, we don’t yet have a common language to express these reasons. The way we teach deep learning is very different from the way we teach other technical disciplines.

A few years ago, I got into optics. In optics, you also build stacks of components that process inputs. Here’s a camera lens.

machine-learning
philosophy-of-engineering
system-of-professions
pedagogy
knowing-what-you-know
epistemology
engineering-criticism
27 days ago

net.wars: Bodies in the clouds

27 days ago

A friend who buys and turns on an Amazon Echo is deemed to have accepted its privacy policy. Does visiting their home mean I've accepted it too? What happens to data about me that the Echo has collected if I am not a suspect? And if it controls their whole house, how do I get it to work after they've gone to bed?

At Privacy Law Scholars in 2016, Andrea Matwyshyn introduced a new idea: the Internet of Bodies, the theme of this year's CPDP. As she spotted then, the Internet of Bodies makes us dependent for our bodily integrity and ability to function on this hybrid ecosystem. At that first discussion of what I'm sure will be an important topic for many years to come, someone commented, "A pancreas has never reported to the cloud before."

A few weeks ago, a small American ISP sent a letter to warn a copyright-infringing subscriber that continuing to attract complaints would cause the ISP to throttle their bandwidth, potentially interfering with devices requiring continuous connections, such as CCTV monitoring and thermostats. The kind of conflict this suggests - copyright laws designed for "cyberspace" touching our physical ability to stay warm and alive in a cold snap - is what awaits us now.

net-neutrality
public-policy
law
networked-world
essay
27 days ago

[1710.02034] Weight = nonlinearity for all small weight Boolean functions

27 days ago

Given a Boolean function f, the (Hamming) weight wt(f) and the nonlinearity N(f) are well known to be important in designing functions that are useful in cryptography. The nonlinearity is expensive to compute, in general, so any shortcuts for doing that for particular functions f are significant. It is clear from the definitions that wt(f) ≤ N(f). We give a useful and practical sufficient condition for the weight and nonlinearity to be equal, namely wt(f) ≤ 2^(n-2) if f has n variables. It is shown that this inequality cannot be improved, in general. This condition can also be used to actually compute the nonlinearity in some cases. As an application, we give a simple proof for the value of the nonlinearity of the well-known majority function. Previous proofs relied on many detailed results for the Krawtchouk polynomials.
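Both quantities are directly computable by brute force for small n (the paper's contribution is the shortcut theorem, not this computation). A sketch using the Walsh spectrum, with N(f) = 2^(n-1) − max_a |W_f(a)| / 2; the example functions are illustrative:

```python
def parity(x):
    return bin(x).count("1") % 2

def walsh(f, n):
    # Walsh spectrum W_f(a) = sum over x of (-1)^(f(x) XOR a.x),
    # with f given as a truth table of length 2^n (naive O(4^n) version)
    N = 1 << n
    return [sum((-1) ** (f[x] ^ parity(a & x)) for x in range(N))
            for a in range(N)]

def weight(f):
    # Hamming weight: number of inputs mapped to 1
    return sum(f)

def nonlinearity(f, n):
    # minimum distance to the affine functions, via the Walsh spectrum
    return (1 << (n - 1)) - max(abs(w) for w in walsh(f, n)) // 2

# a weight-2 function on 3 variables: wt(f) = 2 <= 2^(3-2), so N(f) = wt(f)
f = [1, 1, 0, 0, 0, 0, 0, 0]
# the majority function on 3 variables, whose nonlinearity is the known value 2
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]
```

For `f` above, weight and nonlinearity both come out to 2, matching the sufficient condition in the abstract.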

information-theory
Boolean-functions
representation
distance
cryptography
feature-construction
rather-interesting
to-write-about
consider:genetic-programming
27 days ago

[1705.08524] Designs for estimating the treatment effect in networks with interference

27 days ago

In this paper we introduce new, easily implementable designs for drawing causal inference from randomized experiments on networks with interference. Inspired by the idea of matching in observational studies, we introduce the notion of considering a treatment assignment as a "quasi-coloring" on a graph. Our idea of a perfect quasi-coloring strives to match every treated unit on a given network with a distinct control unit that has an identical number of treated and control neighbors. For a wide range of interference functions encountered in applications, we show both by theory and simulations that the classical Neymanian estimator for the direct effect has desirable properties for our designs. This further extends to settings where homophily is present in addition to interference.
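The perfect quasi-coloring condition can be checked mechanically. A minimal sketch of a validity test for a proposed treated-to-control matching; the adjacency representation and the four-node toy graph are illustrative assumptions, not from the paper:

```python
def neighbor_counts(adj, treat, u):
    # (number of treated neighbors, number of control neighbors) of unit u
    t = sum(treat[v] for v in adj[u])
    return t, len(adj[u]) - t

def is_perfect_quasi_coloring(adj, treat, matching):
    # matching: dict mapping each treated unit to a distinct control unit
    if len(set(matching.values())) != len(matching):
        return False  # control units must be distinct
    for t_unit, c_unit in matching.items():
        if not treat[t_unit] or treat[c_unit]:
            return False  # keys must be treated, values must be controls
        if neighbor_counts(adj, treat, t_unit) != neighbor_counts(adj, treat, c_unit):
            return False  # matched pair must see identical neighborhoods
    return True

# toy 4-cycle: treated units 0,1 adjacent, control units 2,3 adjacent,
# so every unit has exactly one treated and one control neighbor
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
treat = {0: 1, 1: 1, 2: 0, 3: 0}
```

On this graph the matching `{0: 2, 1: 3}` is a perfect quasi-coloring, while reusing a control unit (e.g. `{0: 2, 1: 2}`) is not.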

experimental-design
graph-theory
statistics
planning
inference
rather-interesting
to-understand
27 days ago

[1702.04690] Simple rules for complex decisions

27 days ago

From doctors diagnosing patients to judges setting bail, experts often base their decisions on experience and intuition rather than on statistical models. While understandable, relying on intuition over models has often been found to result in inferior outcomes. Here we present a new method, select-regress-and-round, for constructing simple rules that perform well for complex decisions. These rules take the form of a weighted checklist, can be applied mentally, and nonetheless rival the performance of modern machine learning algorithms. Our method for creating these rules is itself simple, and can be carried out by practitioners with basic statistics knowledge. We demonstrate this technique with a detailed case study of judicial decisions to release or detain defendants while they await trial. In this application, as in many policy settings, the effects of proposed decision rules cannot be directly observed from historical data: if a rule recommends releasing a defendant that the judge in reality detained, we do not observe what would have happened under the proposed action. We address this key counterfactual estimation problem by drawing on tools from causal inference. We find that simple rules significantly outperform judges and are on par with decisions derived from random forests trained on all available features. Generalizing to 22 varied decision-making domains, we find this basic result replicates. We conclude with an analytical framework that helps explain why these simple decision rules perform as well as they do.
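The select-regress-and-round recipe is concrete enough to sketch. This is an assumption-laden stand-in, not the authors' code: correlation screening replaces their selection step (the paper's choice, e.g. lasso, is not specified in the abstract), and `k` and `M` (number of features, maximum integer weight) are illustrative parameters:

```python
import numpy as np

def select_regress_round(X, y, k=3, M=3):
    # select: keep the k features most associated with y
    # (simple correlation screening, standing in for the paper's selection step)
    corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])
    keep = np.argsort(corr)[-k:]
    # regress: ordinary least squares on the standardized selected features
    Z = X[:, keep]
    Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
    beta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    # round: rescale so the largest coefficient is M, then round to integers;
    # the result is a mental checklist of small integer weights
    weights = np.round(M * beta / np.abs(beta).max())
    return keep, weights
```

On synthetic data with `y = 2*x0 + x1`, the rule recovers the two informative features and assigns the larger integer weight to `x0`.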

decision-making
planning
heuristics
algorithms
probability-theory
representation
to-write-about
problematic-at-best
rather-interesting
philosophy-of-science
27 days ago

[1707.05994] Computing Tutte Paths

27 days ago

Tutte paths are one of the most successful tools for attacking Hamiltonicity problems in planar graphs. Unfortunately, results based on them are non-constructive, as their proofs inherently use an induction on overlapping subgraphs, and these overlaps prevent bounding the running time by a polynomial. For special cases, however, computational results on Tutte paths are known: for 4-connected planar graphs, Tutte paths are in fact Hamiltonian paths, and Chiba and Nishizeki showed how to compute such paths in linear time. For 3-connected planar graphs, Tutte paths have a more complicated structure, and it has only recently been shown that they can be computed in polynomial time. However, Tutte paths are defined for general 2-connected planar graphs, and this is what most applications need; unfortunately, no computational results were known for this case. We give the first efficient algorithm that computes a Tutte path for the general case of 2-connected planar graphs. One of the strongest existence results about such Tutte paths is due to Sanders, which allows one to prescribe the end vertices and an intermediate edge of the desired path. Encompassing and strengthening all previous computational results on Tutte paths, we show how to compute this special Tutte path efficiently. Our method refines both the results of Thomassen and Sanders, and avoids overlapping subgraphs by using a novel iterative decomposition along 2-separators. Finally, we show that our algorithm runs in quadratic time.

graph-theory
algorithms
representation
rather-interesting
to-understand
nudge-targets
consider:representation
consider:feature-discovery
27 days ago

[1706.02263] Graph Convolutional Matrix Completion

27 days ago

We consider matrix completion for recommender systems from the point of view of link prediction on graphs. Interaction data such as movie ratings can be represented by a bipartite user-item graph with labeled edges denoting observed ratings. Building on recent progress in deep learning on graph-structured data, we propose a graph auto-encoder framework based on differentiable message passing on the bipartite interaction graph. Our model shows competitive performance on standard collaborative filtering benchmarks. In settings where complementary feature information or structured data such as a social network is available, our framework outperforms recent state-of-the-art methods.

matrix-completion
inference
machine-learning
representation
matrices
algorithms
nudge-targets
to-write-about
27 days ago

[1709.10061] Active Information Acquisition for Linear Optimization

27 days ago

We consider partially-specified optimization problems where the goal is to actively, but efficiently, acquire missing information about the problem in order to solve it. An algorithm designer wishes to solve a linear program (LP), max c^T x s.t. Ax ≤ b, x ≥ 0, but does not initially know some of the parameters. The algorithm can iteratively choose an unknown parameter and gather information in the form of a noisy sample centered at the parameter's (unknown) value. The goal is to find an approximately feasible and optimal solution to the underlying LP with high probability while drawing a small number of samples. We focus on two cases. (1) When the parameters c of the objective are initially unknown, we take an information-theoretic approach and give roughly matching upper and lower sample complexity bounds, with an (inefficient) successive-elimination algorithm. (2) When the parameters b of the constraints are initially unknown, we propose an efficient algorithm combining techniques from the ellipsoid method for LP and confidence-bound approaches from bandit algorithms. The algorithm adaptively gathers information about constraints only as needed in order to make progress. We give sample complexity bounds for the algorithm and demonstrate its improvement over a naive approach via simulation.
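Both cases rest on the same primitive: averaging noisy samples of one unknown parameter until a confidence radius is small enough. A generic Hoeffding-style sketch of that primitive (not the paper's full algorithm); the [0, 1] boundedness of samples and the choice of ε and δ are assumptions:

```python
import math

def estimate_to_precision(sample, eps, delta):
    """Average noisy samples of one unknown scalar until the Hoeffding-style
    confidence radius sqrt(log(2/delta) / (2n)) drops below eps.

    sample: zero-argument callable returning one noisy draw in [0, 1].
    Returns (empirical mean, number of samples drawn).
    """
    total, n = 0.0, 0
    while True:
        total += sample()
        n += 1
        radius = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        if radius <= eps:
            return total / n, n
```

With a noiseless `sample` returning 0.5, ε = 0.1, δ = 0.05, the loop stops at n = 185, the smallest n with radius ≤ 0.1; adaptively calling this only for parameters that matter is what lets the paper's algorithm save samples.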

operations-research
game-theory
experimental-design
information-theory
rather-interesting
nudge-targets
consider:looking-to-see
consider:planning-versions
27 days ago

[1510.06890] Generalized Transformation Design: metrics, speeds, and diffusion

27 days ago

We show that a unified and maximally generalized approach to spatial transformation design is possible, one that encompasses all second order waves, rays, and diffusion processes in anisotropic media. Until the final step, it is unnecessary to specify the physical process for which a specific transformation design is to be implemented. The principal approximation is the neglect of wave impedance, an attribute that plays no role in ray propagation, and is therefore irrelevant for pure ray devices; another constraint is that for waves the spatial variation in material parameters needs to be sufficiently small compared with the wavelength. The key link between our general formulation and a specific implementation is how the spatial metric relates to the speed of disturbance in a given medium, whether it is electromagnetic, acoustic, or diffusive. Notably, we show that our generalised ray theory, in allowing for anisotropic indexes (speeds), generates the same predictions as does a wave theory, and the results are closely related to those for diffusion processes.

engineering-design
rather-interesting
indistinguishable-from-magic
performance-measure
representation
nudge-targets
consider:looking-to-see
consider:symbolic-regression
to-write-about
27 days ago

[1712.07910] The advantages of interdisciplinarity in modern science

27 days ago

As the increasing complexity of large-scale research requires the combined efforts of scientists with expertise in different fields, the advantages and costs of interdisciplinary scholarship have taken center stage in current debates on scientific production. Here we conduct a comparative assessment of the scientific success of specialized and interdisciplinary researchers in modern science. Drawing on comprehensive data sets on scientific production, we propose a two-pronged approach to interdisciplinarity. For each scientist, we distinguish between background interdisciplinarity, rooted in knowledge accumulated over time, and social interdisciplinarity, stemming from exposure to collaborators' knowledge. We find that, while abandoning specialization in favor of moderate degrees of background interdisciplinarity deteriorates performance, very interdisciplinary scientists outperform specialized ones, at all career stages. Moreover, successful scientists tend to intensify the heterogeneity of collaborators and to match the diversity of their network with the diversity of their background. Collaboration sustains performance by facilitating knowledge diffusion, acquisition and creation. Successful scientists tend to absorb a larger fraction of their collaborators' knowledge, and at a faster pace, than less successful ones. Collaboration also provides successful scientists with opportunities for the cross-fertilization of ideas and the synergistic creation of new knowledge. These results can inspire scientists to shape successful careers, research institutions to develop effective recruitment policies, and funding agencies to award grants of enhanced impact.

big-science
diversity
system-of-professions
interdisciplinarity
collaboration
sociology
academic-culture
a-way-of-being-thought-shallow-in-two-fields-at-once
27 days ago

[1709.08878] Generating Sentences by Editing Prototypes

27 days ago

We propose a new generative model of sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.

machine-learning
natural-language-processing
rather-interesting
representation
to-write-about
nudge-targets
consider:representation
consider:performance-measures
generative-models
27 days ago

Stumbling and Mumbling: Why economists should look at horses

27 days ago

He is not atypical. This is an example of something common in economics – “as if” modelling. Clark describes the rise of factories as if workers chose them as a commitment device, just as (say) Murphy and Becker describe smackheads as if they were rational maximizers. But this is insufficient. You can describe a woman’s black eye as if she had walked into a door. But if in fact her husband had beaten her, you are missing the truth and overlooking genuine oppression and injustice.

economics
capitalism
narrative
the-stories-that-we-tell
to-write-about
27 days ago

How Access Codes Keep College-Textbook Costs High - The Atlantic

27 days ago

After settling into his dorm this past fall, John McGrath, a freshman at Rutgers University, took the campus shuttle to the school bookstore. He waited in line for 40 minutes clutching a list of four classes—including Microeconomics, Introduction to Calculus, and Expository Writing—and walked out later with an armful of books, some bundled with digital codes that he would use to access assignments on the publishers’ websites. He also exited the store with a bill for about $450.

academic-culture
publishing
disintermediation-targets
pedagogy
somebody-has-to-pay-the-rent-bullshit
open-access
27 days ago

Muuri - Responsive, sortable, filterable and draggable grid layouts

27 days ago

Responsive, sortable, filterable and draggable grid layouts

javascript
UI
user-interface
library
rather-interesting
mobile-design
web-applications
to-understand
27 days ago

Moshe Hoffman, Harvard Program for Evolutionary Dynamics

27 days ago

Moshe Hoffman is a Research Scientist & Lecturer at Harvard's Program for Evolutionary Dynamics.

Moshe applies game theory, models of learning and evolution, and experimental methods, to try to decipher the (sometimes subtle) incentives that shape our social behavior, preferences and ideologies (see his tweets for many examples, see his research statement for a justification and examples of this approach, & see this list of 5 big picture questions that drive him).

Moshe obtained his Ph.D. in Economics from the University of Chicago, Booth School of Business and his B.S. in Economics from the University of Chicago.

He co-designed and teaches "Game Theory and Social Behavior" which lays out a lot of the evidence and models behind this approach.

evolutionary-economics
game-theory
rather-interesting
consider:GPTP
27 days ago

[1703.10651] Reliable Decision Support using Counterfactual Models

27 days ago

Making a good decision involves considering the likely outcomes under each possible action. For example, would drug A or drug B lead to a better outcome for this patient? Ideally, we answer these questions using an experiment, but this is not always possible (e.g., it may be unethical). As an alternative, we can use non-experimental data to learn models that make counterfactual predictions of what we would observe had we run an experiment. To learn such models for decision-making problems, we propose the use of counterfactual objectives in lieu of classical supervised learning objectives. We implement this idea in a challenging and frequently occurring context, and propose the counterfactual GP (CGP), a counterfactual model of continuous-time trajectories (time series) under sequences of actions taken in continuous-time. We develop our model within the potential outcomes framework of Neyman and Rubin. The counterfactual GP is trained using a joint maximum likelihood objective that adjusts for dependencies between observed actions and outcomes in the training data. We report two sets of experimental results. First, we show that the CGP's predictions are reliable; they are stable to changes in certain characteristics of the training data that are not relevant to the decision-making problem. Predictive models trained using classical supervised learning objectives, however, are not stable to such perturbations. In the second experiment, we use data from a real intensive care unit (ICU) and qualitatively demonstrate how the CGP's ability to answer "What if?" questions offers medical decision-makers a powerful new tool for planning treatment.

machine-learning
models-and-modes
rather-interesting
to-write-about
consider:symbolic-regression
27 days ago
