Heesch Numbers, Part 2: Polyforms – Isohedral
In the first post in this series, I introduced the concept of a shape’s Heesch number. In brief, if a shape doesn’t tile the plane, its Heesch number is a measure of the maximum number of times you can surround the shape with layers of copies of itself. (Shapes that do tile are defined to have a Heesch number of infinity.) Shapes with positive, finite Heesch numbers are entertaining mathematical curiosities. Far more mysterious—and infuriating—is the fact that we know of examples of Heesch numbers only up to five, and nothing higher. Learning more about shapes with high Heesch numbers could offer insights into deep unsolved problems in tiling theory.
tiling  combinatorics  mathematical-recreations  rather-interesting  number-theory  looking-to-see  nudge-targets  consider:feature-discovery  to-write-about
6 days ago
[1704.00630] Towards a property graph generator for benchmarking
The use of synthetic graph generators is a common practice among graph-oriented benchmark designers, as it allows them to obtain graphs with the required scale and characteristics. However, finding a graph generator that accurately fits the needs of a given benchmark is very difficult, so practitioners end up creating ad-hoc ones. Such a task is usually time-consuming, and often leads to reinventing the wheel. In this paper, we introduce the conceptual design of DataSynth, a framework for property graph generation with customizable schemas and characteristics. The goal of DataSynth is to assist benchmark designers in generating graphs efficiently and at scale, saving them from implementing their own generators. Additionally, DataSynth introduces novel features barely explored so far, such as modeling the correlation between properties and the structure of the graph. This is achieved by a novel property-to-node matching algorithm for which we present promising preliminary results.
graph-theory  generative-models  benchmarking  database  data-synthesis  rather-interesting  algorithms  inverse-problems  nudge-targets  consider:evolutionary-algorithms  constraint-satisfaction
9 days ago
[1703.05105] A Data Driven Approach for Compound Figure Separation Using Convolutional Neural Networks
A key problem in automatic analysis and understanding of scientific papers is to extract semantic information from non-textual paper components like figures, diagrams, tables, etc. This research always requires a very first preprocessing step: decomposing compound multi-part figures into individual subfigures. Previous work in compound figure separation has been based on manually designed features and separation rules, which often fail for less common figure types and layouts. Moreover, no implementation for compound figure decomposition is publicly available.
This paper proposes a data-driven approach to separate compound figures using modern deep Convolutional Neural Networks (CNNs) to train the separator in an end-to-end manner. CNNs eliminate the need for manually designing features and separation rules, but require large amounts of annotated training data. We overcome this challenge using transfer learning as well as automatically synthesizing training exemplars. We evaluate our technique on the ImageCLEF Medical dataset, achieving 85.9% accuracy and outperforming previous manually engineered techniques. We make the resulting approach available as an easy-to-use Python library, aiming to promote further research in scientific figure mining.
OCR  neural-networks  image-processing  page-structure  learning-from-data  rather-interesting  algorithms  machine-learning  feature-extraction  nudge-targets  consider:looking-to-see
9 days ago
[1705.00759] Controllability of Conjunctive Boolean Networks with Application to Gene Regulation
A Boolean network is a finite state discrete time dynamical system. At each step, each variable takes a value from a binary set. The value update rule for each variable is a local function which depends only on a selected subset of variables. Boolean networks have been used in modeling gene regulatory networks. We focus in this paper on a special class of Boolean networks, namely the conjunctive Boolean networks (CBNs), whose value update rule is composed solely of logical AND operations. It is known that any trajectory of a Boolean network will enter a periodic orbit. Periodic orbits of a CBN have been completely understood. In this paper, we investigate the orbit-controllability and state-controllability of a CBN: We ask the question of how one can steer a CBN to enter any periodic orbit or to reach any final state, from any initial state. We establish necessary and sufficient conditions for a CBN to be orbit-controllable and state-controllable. Furthermore, explicit control laws are presented along with the analysis.
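As a quick sketch of what the AND update rule looks like in practice (a toy four-node network of my own invention, not one from the paper), here is a synchronous CBN step and its inevitable arrival at a periodic orbit:

```python
# Toy conjunctive Boolean network: each variable updates to the AND of
# its in-neighbors. The wiring below is a hypothetical example.
neighbors = {0: [1, 2], 1: [0], 2: [3], 3: [2, 0]}

def step(state):
    # Synchronous update: x_i(t+1) = AND of x_j(t) over in-neighbors j of i.
    return tuple(all(state[j] for j in neighbors[i]) for i in range(len(state)))

# Any trajectory of a Boolean network eventually enters a periodic orbit;
# iterate until a state repeats and read off the period.
state = (1, 0, 1, 1)
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
period = t - seen[state]
```

For this particular wiring and start state the trajectory decays to the all-zeros fixed point (period 1); the all-ones state is also fixed, as it must be under pure AND updates.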
boolean-networks  Kauffmania  engineering-design  emergent-design  rather-interesting  to-write-about  nudge-targets  consider:feature-discovery  dynamical-systems  complexology
9 days ago
From a logical point of view … — Crooked Timber
I have now read that “google manifesto”. I read it more out of a desire to forestall people saying “but have you ACTUALLY READ IT?” than out of any expectation that it would contain new or unfamiliar information, and indeed it was your fairly standard evo-psych “just asking questions”, genus differences-in-tails-of-distributions. It’s a mulberry bush that was already pretty well circumnavigated when Larry Summers was still President of Harvard. But what really struck me was that I have changed in my old age; I used to be depressed at the generally very poor level of statistical education, now I’m depressed at the extent to which people with an excellent education in statistics still don’t really understand anything about the subject. I’m beginning to think that mathematical training in many cases is actually damaging; simple and robust metrics, usually drawn from the early days of industrial quality control, are what people need to understand. Let’s talk about distributions of programming ability.
10 days ago
[1704.01565] Charging changes contact composition in binary sphere packings
Equal volume mixtures of small and large polytetrafluorethylene (PTFE) spheres are shaken in an atmosphere of controlled humidity, which also allows us to control their tribo-charging. We find that the contact numbers are charge-dependent: as the charge density of the beads increases, the number of same-type contacts decreases and the number of opposite-type contacts increases. This change is not caused by a global segregation of the sample. Hence, tribo-charging can be a way to tune the local composition of a granular material.
packing  condensed-matter  looking-to-see  experiment  rather-interesting  granular-materials  to-write-about  it's-more-complicated-than-you-think
10 days ago
[1708.03216] Coarsening and Aging of Lattice Polymers: Influence of Bond Fluctuations
We present results for the nonequilibrium dynamics of collapse for a model flexible homopolymer on simple cubic lattices with fixed and fluctuating bonds between the monomers. Results from our Monte Carlo simulations show that, phenomenologically, the sequence of events observed during the collapse is independent of the bond criterion. While the growth of the clusters (of monomers) at different temperatures exhibits a nonuniversal power-law behavior when the bonds are fixed, the introduction of fluctuations in the bonds by considering the existence of diagonal bonds produces a temperature-independent growth, which can be described by a universal nonequilibrium finite-size scaling function with a non-universal metric factor. We also examine the related aging phenomenon, probed by a suitable two-time density-density autocorrelation function showing a simple power-law scaling with respect to the growing cluster size. Unlike the cluster-growth exponent αc, the nonequilibrium autocorrelation exponent λC governing the aging during the collapse is independent of the bond type and strictly follows the bounds proposed by two of us in Phys. Rev. E 93, 032506 (2016) at all temperatures.
lattice-polymers  physics  simulation  rather-interesting  dynamical-systems  to-write-about
10 days ago
[1607.00363] Using smartphone pressure sensors to measure vertical velocities of elevators, stairways, and drones
We measure the vertical velocities of elevators, pedestrians climbing stairs, and drones (flying unmanned aerial vehicles), by means of smartphone pressure sensors. The barometric pressure obtained with the smartphone is related to the altitude of the device via the hydrostatic approximation. From the altitude values, vertical velocities are derived. The approximation considered is valid in the first hundred meters of the inner layers of the atmosphere. In addition to pressure, acceleration values were also recorded using the built-in accelerometer. Numerical integration was performed, obtaining both vertical velocity and altitude. We show that data obtained using the pressure sensor is significantly less noisy than that obtained using the accelerometer. Error accumulation is also evident in the numerical integration of the acceleration values. In the proposed experiments, the pressure sensor also outperforms GPS, because this sensor does not receive satellite signals indoors and, in general, the operating frequency is considerably lower than that of the pressure sensor. In the cases in which it is possible, comparison with reference values taken from the architectural plans of buildings validates the results obtained using the pressure sensor. This proposal is ideally performed as an external or outreach activity with students to gain insight about fundamental questions in mechanics, fluids, and thermodynamics.
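The hydrostatic step they use is simple enough to sketch (a minimal version of the calculation, assuming constant near-surface air density; the numbers are illustrative, not from the paper):

```python
RHO_AIR = 1.2   # kg/m^3, near-surface air density (assumed constant)
G = 9.81        # m/s^2

def altitude_change(p0, p1):
    """Hydrostatic approximation: dh = -dP / (rho * g).
    Valid over roughly the first hundred meters of the atmosphere."""
    return -(p1 - p0) / (RHO_AIR * G)

def vertical_velocity(pressures, dt):
    """Finite-difference vertical velocity from a pressure time series (Pa)
    sampled every dt seconds."""
    return [altitude_change(pressures[i], pressures[i + 1]) / dt
            for i in range(len(pressures) - 1)]

# A ~12 Pa drop over one second corresponds to roughly +1 m/s of ascent.
v = vertical_velocity([101325.0, 101313.0], 1.0)
```

The same finite-difference trick applied twice to accelerometer data is where the error accumulation the authors mention comes from: pressure gives altitude directly, while acceleration must be integrated twice.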
physics  looking-to-see  experiment  rather-interesting  want  to-write-about  to-do
10 days ago
[1706.04791] Obstacle-shape effect in a two-dimensional granular silo flow field
We conducted simple experiments and numerical simulations of two-dimensional granular discharge flow driven by gravity under the influence of an obstacle. According to previous work (Zuriguel et al., Phys. Rev. Lett. 107: 278001, 2011), the clogging of granular discharge flow can be suppressed by placing a circular obstacle at a proper position. In order to investigate the details of the obstacle effect in granular flow, we focused on particle dynamics in this study. From the experimental and numerical data, we found that the obstacle remarkably affects the horizontal-velocity distribution and packing fraction in the vicinity of the exit. In addition to the circular obstacle, we utilized triangular, inverted-triangular, and horizontal-bar obstacles to discuss the obstacle-shape effect in granular discharge flow. Based on the investigation of dynamical quantities such as velocity distributions, granular temperature, and volume fraction, we found that the triangular obstacle or horizontal bar could be very effective in preventing clogging. From the obtained results, we consider that the detouring of particles around the obstacle and the resultant low packing fraction at the exit region effectively prevent clogging in a certain class of granular discharge flow.
granular-materials  looking-to-see  experiment  physics  rather-interesting  to-write-about
10 days ago
[1705.00692] Diffusion limited aggregation in the Boolean lattice
In the Diffusion Limited Aggregation (DLA) process on ℤ2, or more generally ℤd, particles aggregate to an initially occupied origin by arrivals on a random walk. The scaling limit of the result, empirically, is a fractal with dimension strictly less than d. Very little has been shown rigorously about the process, however.
We study an analogous process on the Boolean lattice {0,1}n, in which particles take random decreasing walks from (1,…,1), and stick at the last vertex before they encounter an occupied site for the first time; the vertex (0,…,0) is initially occupied. In this model, we can rigorously prove that lower levels of the lattice become full, and that the process ends by producing an isolated path of unbounded length reaching (1,…,1).
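The Boolean-lattice variant is easy to simulate; here is a hedged sketch of my own reading of the model (a particle clears random 1-bits until the next step would land on an occupied vertex, then sticks):

```python
import random

def dla_boolean(n, particles, seed=0):
    """DLA on the Boolean lattice {0,1}^n: each particle takes a random
    decreasing walk from (1,...,1) and sticks at the last vertex it
    visits before encountering an occupied one."""
    rng = random.Random(seed)
    top = (1,) * n
    occupied = {(0,) * n}  # the vertex (0,...,0) starts occupied
    for _ in range(particles):
        if top in occupied:
            break  # the process ends once the top itself is occupied
        v = top
        while True:
            ones = [i for i, b in enumerate(v) if b]
            i = rng.choice(ones)                 # pick a random 1-bit
            w = v[:i] + (0,) + v[i + 1:]         # step down the lattice
            if w in occupied:
                occupied.add(v)                  # stick just above the cluster
                break
            v = w
    return occupied
```

Note the walk can never run out of 1-bits: the all-zeros vertex is occupied from the start, so a particle always sticks before reaching it.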
10 days ago
[1704.08997] A Case Study on the Parametric Occurrence of Multiple Steady States
We consider the problem of determining multiple steady states for positive real values in models of biological networks. Investigating the potential for these in models of the mitogen-activated protein kinases (MAPK) network has consumed considerable effort using special insights into the structure of corresponding models. Here we apply combinations of symbolic computation methods for mixed equality/inequality systems, specifically virtual substitution, lazy real triangularization and cylindrical algebraic decomposition. We determine multistationarity of an 11-dimensional MAPK network when numeric values are known for all but potentially one parameter. More precisely, our considered model has 11 equations in 11 variables and 19 parameters, 3 of which are of interest for symbolic treatment, and furthermore positivity conditions on all variables and parameters.
nonlinear-dynamics  dynamical-systems  inverse-problems  theoretical-biology  reaction-networks  to-write-about  rather-interesting  parameter-scanning
10 days ago
[1605.09304] Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right - similar to why we study the human brain - and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization (AM), which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN). The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).
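Stripped of the generator prior, activation maximization is just gradient ascent on a neuron's response. Here is a toy stand-in with a hypothetical quadratic "neuron" (nothing below comes from the paper's DGN setup; a real AM run would ascend through a deep generator network instead):

```python
# Hypothetical neuron whose preferred input is `target`; AM should
# recover `target` by ascending the activation from a blank input.
target = [0.2, -0.5, 0.9]

def activation(x):
    # Peak activation occurs exactly at x == target.
    return -sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def grad(x):
    # Analytic gradient of the activation with respect to the input.
    return [-2.0 * (xi - ti) for xi, ti in zip(x, target)]

x = [0.0, 0.0, 0.0]
for _ in range(200):
    g = grad(x)
    x = [xi + 0.1 * gi for xi, gi in zip(x, g)]  # gradient ascent step
```

The paper's contribution is precisely that for real networks this naive ascent produces noise images, and a learned generator prior is what makes the synthesized preferred inputs look almost real.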
hey-I-know-this-guy  neural-networks  generative-models  machine-learning  GPTP  nudge-targets  to-write-about
11 days ago
[1705.00744] A Strategy for an Uncompromising Incremental Learner
Multi-class supervised learning systems require the knowledge of the entire range of labels they predict. Often when learnt incrementally, they suffer from catastrophic forgetting. To avoid this, generous leeways have to be made to the philosophy of incremental learning that either force a part of the machine to not learn, or retrain the machine again with a selection of the historic data. While these hacks work to various degrees, they do not adhere to the spirit of incremental learning. In this article, we redefine incremental learning with stringent conditions that do not allow for any undesirable relaxations and assumptions. We design a strategy involving generative models and the distillation of dark knowledge as a means of hallucinating data along with appropriate targets from past distributions. We call this technique phantom sampling. Using an implementation based on deep neural networks, we demonstrate that phantom sampling helps avoid catastrophic forgetting during incremental learning. We apply these strategies to competitive multi-class incremental learning of deep neural networks. Using various benchmark datasets and through our strategy, we demonstrate that strict incremental learning could be achieved. We further put our strategy to test on challenging cases, including cross-domain increments and incrementing on a novel label space. We also propose a trivial extension to unbounded-continual learning and identify potential for future development.
rather-interesting  neural-networks  learning  data-synthesis  hallucination  to-write-about  coevolution
11 days ago
[1705.04098] A Generative Model of People in Clothing
We present the first image-based generative model of people in clothing for the full body. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generating process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.
generative-art  generative-models  image-processing  machine-learning  rather-interesting  to-bot  to-write-about
11 days ago
[1704.00568] A parametric level-set method for partially discrete tomography
This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We represent the geometry of the anomaly using a level-set function, which we represent using radial basis functions. We pose the reconstruction problem as a bi-level optimization problem in terms of the background and coefficients for the level-set function. To constrain the background reconstruction we impose smoothness through Tikhonov regularization. The bi-level optimization problem is solved in an alternating fashion; in each iteration we first reconstruct the background and consequently update the level-set function. We test our method on numerical phantoms and show that we can successfully reconstruct the geometry of the anomaly, even from limited data. On these phantoms, our method outperforms Total Variation reconstruction, DART and P-DART.
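The parametric level-set idea itself is compact; here is a minimal sketch with made-up RBF centers, widths, and threshold (illustrative values of my own, not the paper's parameterization or its bi-level solver):

```python
import numpy as np

# The anomaly is the region where a sum of radial basis functions
# exceeds a threshold; everything else is the continuous background.
centers = np.array([[0.3, 0.5], [0.7, 0.5]])  # hypothetical RBF centers
coeffs = np.array([1.0, 1.0])                 # level-set coefficients
width = 0.15                                  # shared RBF width

def level_set(points):
    """phi(x) = sum_i c_i * exp(-||x - x_i||^2 / (2 w^2)) - 0.5"""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return (coeffs * np.exp(-d2 / (2 * width ** 2))).sum(-1) - 0.5

def anomaly_mask(points):
    # Inside the zero level set: the constant (known) grey-value region.
    return level_set(points) > 0

inside = anomaly_mask(np.array([[0.3, 0.5]]))[0]   # at an RBF center
outside = anomaly_mask(np.array([[0.0, 0.0]]))[0]  # far from both centers
```

In the paper the RBF coefficients become the unknowns of the anomaly half of the bi-level optimization, alternating with Tikhonov-regularized background updates.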
tomography  inverse-problems  benchmarking  rather-interesting  to-write-about  to-simulate  nudge-targets
11 days ago
[1704.08586] Analytic Approach to Activity-dependent Adaptive Boolean Networks
We propose new activity-dependent adaptive Boolean networks inspired by the cis-regulatory mechanism in gene regulatory networks. We analytically show that our model can be solved for the stationary in-degree distribution for a wide class of update rules by employing the annealed approximation of Boolean network dynamics, and that evolved Boolean networks have a preassigned average sensitivity that can be set independently of update rules. In particular, when it is set to 1, our theory predicts that the proposed network rewiring algorithm drives Boolean networks towards criticality. We verify that these analytic results agree well with numerical simulations for four representative update rules. We also discuss the relationship between sensitivity of update rules and stationary in-degree distributions and compare it with that in real-world gene regulatory networks.
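Average sensitivity is easy to compute exactly for small fan-in; a brute-force sketch (my own, not the paper's annealed calculation):

```python
from itertools import product

def average_sensitivity(f, k):
    """Average, over all 2^k inputs, of the number of single-bit flips
    that change the output of the k-input Boolean function f."""
    total = 0
    for x in product((0, 1), repeat=k):
        for i in range(k):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if f(x) != f(y):
                total += 1
    return total / 2 ** k

# XOR flips under every bit flip, so its average sensitivity is k;
# AND is far less sensitive (most inputs are insensitive everywhere).
s_xor = average_sensitivity(lambda x: sum(x) % 2, 3)
s_and = average_sensitivity(lambda x: int(all(x)), 3)
```

An average sensitivity of 1 is the usual annealed-approximation criterion for criticality, which is the regime the paper's rewiring algorithm targets.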
boolean-networks  Kauffmania  engineering-design  emergent-design  rather-interesting  to-write-about
11 days ago
The Critical Engineering Manifesto
0. The Critical Engineer considers Engineering to be the most transformative language of our time, shaping the way we move, communicate and think. It is the work of the Critical Engineer to study and exploit this language, exposing its influence.
dammit  engineering  now-what-will-I-call-it?
11 days ago
Extractor attractor – Almost looks like work
Recently the extractor fan in my bathroom has started malfunctioning, occasionally grinding and stalling. The infuriating thing is that the grinding noise isn’t perfectly periodic – it is approximately so, but there are occasionally long gaps and the short gaps vary slightly. This lack of predictability makes the noise incredibly annoying, and hard to tune out. Before getting it fixed, I decided to investigate it a bit further.

The terminally curious may listen to the sound here:

https://www.dropbox.com/s/4xh1gmrjry10eky/FanSound.ts?dl=0

This was recorded from my phone, you can also hear me puttering around in the background.

After dumping the audio data, I looked at the waveform and realised it was quite difficult to extract the temporal locations of the grinding noises from the volume alone. As a good physicist I therefore had another look in the frequency domain, making a spectrogram.
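A spectrogram of this kind can be sketched with a plain windowed FFT (numpy only; the sample rate, window, and hop sizes below are arbitrary choices of mine, and a 440 Hz test tone stands in for the fan recording):

```python
import numpy as np

def spectrogram(signal, fs, win=256, hop=128):
    """Magnitude STFT: Hann-windowed frames -> |rFFT| per frame.
    Returns (spectrogram, frequencies); rows are frequency bins
    from 0 to fs/2, columns are time frames."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T, np.fft.rfftfreq(win, 1 / fs)

# One second of a 440 Hz tone: its energy should land in the bin
# nearest 440 Hz (bin spacing is fs/win = 31.25 Hz here).
fs = 8000
t = np.arange(fs) / fs
spec, freqs = spectrogram(np.sin(2 * np.pi * 440 * t), fs)
peak_bin = spec.mean(axis=1).argmax()
```

For the fan problem the interesting structure is in the time axis: the grinding events show up as broadband vertical stripes, and their irregular spacing can then be measured frame by frame.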
mathematical-recreations  looking-to-see  data-analysis  visualization  physics  nonlinear-dynamics  amusing
11 days ago
Swype right – Almost looks like work
In this post I’ll discuss optimising the layout of an English QWERTY keyboard in an effort to minimise the average distance a digit must travel to type a word. Let’s have a look.
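The objective is easy to state in code; here is a sketch with approximate key coordinates and a single-digit travel metric (my guesses at the row stagger, not necessarily the post's exact model):

```python
# Approximate QWERTY key coordinates: (column + row offset, row number),
# with a standard-ish stagger for the home and bottom rows.
ROWS = [("qwertyuiop", 0.0), ("asdfghjkl", 0.25), ("zxcvbnm", 0.75)]
POS = {ch: (col + off, row)
       for row, (keys, off) in enumerate(ROWS)
       for col, ch in enumerate(keys)}

def travel(word):
    """Total Euclidean distance one digit travels typing `word`,
    in units of key pitch."""
    word = word.lower()
    return sum(((POS[a][0] - POS[b][0]) ** 2 +
                (POS[a][1] - POS[b][1]) ** 2) ** 0.5
               for a, b in zip(word, word[1:]))
```

Averaging `travel` over a word-frequency list gives the quantity to minimize; the optimization then permutes which letter sits at which coordinate.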
mathematical-recreations  optimization  natural-language-processing  user-interface  amusing
11 days ago
How Silicon Valley’s Workplace Culture Produced James Damore’s Google Memo | The New Yorker
“What You Can’t Say” is by no means a seminal text, but it is the sort of text that has, historically, spoken to a tech audience. “Google’s Ideological Echo Chamber,” with its veneer of cool rationalism, echoes Graham’s essay in certain ways. But, where Graham’s argument is made thoughtfully and in good faith—he is a proponent of intellectual inquiry, even if the outcome is controversial—Damore’s is a sort of performance. His memo shows a deep misunderstanding of what constitutes power in Silicon Valley, and where that power lies. True, Google and its peers have put money and other company resources toward diversity efforts, and they very likely will continue to do so. But today, in mid-2017, men—white men—are still very much in the majority. It is still largely white men who make decisions, and largely white men who prosper. By positioning diversity programs as discriminatory, Damore paints exactly the opposite picture. He frames employees like himself as a silenced minority, and his contrarian opinions as a kind of Galilean heresy.
It is conceivable, of course, that Damore distributed his memo to thousands of his colleagues because he genuinely thought that it was the best way to strike up a conversation. “Open and honest discussion with those who disagree can highlight our blind spots and help us grow,” he writes. Perhaps he expected that the ensuing dialogue would be akin to a debate over a chunk of code. But, given the memo’s various denigrating assertions about his co-workers, it is difficult to imagine that it was offered in good faith. Damore wasn’t fired for his political views; he was fired for how (and where) he applied them. The memo also hints at a larger anxiety—a fear, possibly, of the future. But technological advancement and social change move at different velocities; someone like Damore might sooner be automated out of a job than replaced by a woman.
politics  bro-culture  startup-culture-must-die
11 days ago
[1705.08971] Optimal Cooperative Inference
Cooperative transmission of data fosters rapid accumulation of knowledge by efficiently combining experience across learners. Although well studied in human learning, there has been less attention to cooperative transmission of data in machine learning, and we consequently lack strong formal frameworks through which we may reason about the benefits and limitations of cooperative inference. We present such a framework. We introduce a novel index for measuring the effectiveness of probabilistic information transmission, and cooperative information transmission specifically. We relate our cooperative index to previous measures of teaching in deterministic settings. We prove conditions under which optimal cooperative inference can be achieved, including a representation theorem which constrains the form of inductive biases for learners optimized for cooperative inference. We conclude by demonstrating how these principles may inform the design of machine learning algorithms and discuss implications for human learning, machine learning, and human-machine learning systems.
hey-I-know-this-guy  machine-learning  relevance-theory  philosophy  rather-interesting  to-write-about  pedagogy  communication-and-learning
11 days ago
How the Imagined “Rationality” of Engineering Is Hurting Diversity — and Engineering
Just how common are the views on gender espoused in the memo that former Google engineer James Damore was recently fired for distributing on an internal company message board? The flap has women and men in tech — and elsewhere — wondering what their colleagues really think about diversity. Research we’ve conducted shows that while most people don’t share Damore’s views, male engineers are more likely to.
13 days ago
HPS: The Myth of the Boiling Point
The old thermometer whose photo I have put on the cover of the book speaks volumes (click on the picture for a larger version). This instrument, dating from the 1750s, is preserved at the Science Museum in London; the glass stems have broken off, so all we have is the frame, which shows four different scales on it. The third one is the familiar Fahrenheit scale. (The second one, due to Delisle, is "upside down", with 0° at the boiling point and increasing numbers as it gets colder; read more about such scales on pp.160-162 in Inventing Temperature.)

There are two boiling points marked on this thermometer. At the familiar 212°F it says "water boyles vehemently". Down at about 204°F it says "begins to boyle". What is going on here?

You may think that the artisan who made this thermometer must have been pretty incompetent on scientific matters. But it turns out that this thermometer was the work of George Adams, official scientific instrument-maker to King George III. And the idea of two boiling points actually came straight from Isaac Newton, whose temperature scale published in 1701 was indeed the first of Adams's four scales.
history-of-science  philosophy-of-science  the-mangle-in-practice  to-write-about  nanohistory  pragmatism
13 days ago
Radical Book Club: the Decentralized Left | Status 451
This failure of the mainstream has opened doors for more radical individuals to show value. The College Republicans and the Leadership Institute could have organized conservative students to peacefully disrupt Leftist speakers and Leftist activities in response to Leftist disruption, but they didn’t, so that opened a door for rougher guys like Based Stickman (on the off chance you don’t know, he’s a fellow named Kyle Chapman, who got internet famous when he was filmed breaking a wooden dowel over the head of a charging antifa at the First Battle of Berkeley; he has since worked to organize Righties as defensive streetfighters). Any number of mainstream Righty organizations could have organized Righty lawyers, but none did, so now Based Stickman is being aided in organizing the Based Lawyers’ Guild by Augustus Invictus, a former Libertarian congressional candidate who is — and I can’t stress this enough — absolutely garking insane.
organization  politics  activism
14 days ago
Men Have Always Used 'Science' to Explain Why They're Better Than Women
And while Damore wasn’t so extreme as to claim women should be extirpated from the tech world, some of his pseudoscientific notions about why men are inherently better suited to certain jobs ring strongly of eugenics, a school of thinking premised on the idea that certain groups are biologically superior to others. Damore argues that “highly heritable” personality traits (including higher “agreeableness” and a preference for “artistic” jobs among women) are responsible for gender gaps in tech, ignoring cultural explanations. By this logic, attempting to level the playing field for women is thus misguided—we should be selecting candidates (read: men) with the most desirable traits for high-stress, technically-demanding jobs.
assholes  techbro-culture  history-of-science
15 days ago
[1707.06300] Untangling the hairball: fitness based asymptotic reduction of biological networks
Complex mathematical models of interaction networks are routinely used for prediction in systems biology. However, it is difficult to reconcile network complexities with a formal understanding of their behavior. Here, we propose a simple procedure (called φ¯) to reduce biological models to functional submodules, using statistical mechanics of complex systems combined with a fitness-based approach inspired by in silico evolution. φ¯ works by pushing parameters or combinations of parameters to some asymptotic limit, while keeping (or slightly improving) the model performance, and requires parameter symmetry breaking for more complex models. We illustrate φ¯ on biochemical adaptation and on different models of immune recognition by T cells. An intractable model of immune recognition with close to a hundred individual transition rates is reduced to a simple two-parameter model. φ¯ extracts three different mechanisms for early immune recognition, and automatically discovers similar functional modules in different models of the same process, allowing for model classification and comparison. Our procedure can be applied to biological networks based on rate equations using a fitness function that quantifies phenotypic performance.
systems-biology  approximation  simplification  network-theory  representation  rather-interesting  algorithms  theoretical-biology  philosophy-of-science  to-write-about
16 days ago
[1707.06446] Sequential Lifted Bayesian Filtering in Multiset Rewriting Systems
Bayesian Filtering for plan and activity recognition is challenging for scenarios that contain many observation equivalent entities (i.e. entities that produce the same observations). This is due to the combinatorial explosion in the number of hypotheses that need to be tracked. However, this class of problems exhibits a certain symmetry that can be exploited for state space representation and inference. We analyze current state of the art methods and find that none of them completely fits the requirements arising in this problem class. We sketch a novel inference algorithm that provides a solution by incorporating concepts from Lifted Inference algorithms, Probabilistic Multiset Rewriting Systems, and Computational State Space Models. Two experiments confirm that this novel algorithm has the potential to perform efficient probabilistic inference on this problem class.
representation  rather-interesting  rewriting-systems  formulations-and-reformulation  to-understand
16 days ago
[1705.04665] A Formal Characterization of the Local Search Topology of the Gap Heuristic
The pancake puzzle is a classic optimization problem that has become a standard benchmark for heuristic search algorithms. In this paper, we provide full proofs regarding the local search topology of the gap heuristic for the pancake puzzle. First, we show that in any non-goal state in which there is no move that will decrease the number of gaps, there is a move that will keep the number of gaps constant. We then classify any state in which the number of gaps cannot be decreased in a single action into two groups: those requiring 2 actions to decrease the number of gaps, and those which require 3 actions to decrease the number of gaps.
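The gap heuristic itself is nearly a one-liner; a sketch using the standard definition (the plate counted as pancake n+1):

```python
def gaps(perm):
    """Gap heuristic for the pancake puzzle: count adjacent stack
    positions whose pancakes differ in size by more than one."""
    n = len(perm)
    stack = list(perm) + [n + 1]  # append the plate as pancake n+1
    return sum(1 for a, b in zip(stack, stack[1:]) if abs(a - b) > 1)

def flip(perm, k):
    """Reverse the top k pancakes (one search action)."""
    return perm[:k][::-1] + perm[k:]

# The sorted stack has 0 gaps; a single flip changes the gap count by
# at most 1, which is what makes gaps() admissible.
h = gaps([3, 1, 2, 4])  # two gaps: between 3 and 1, and between 2 and 4
```

The paper's topology results are about exactly the states where no single `flip` decreases `gaps`, and how many actions away the nearest gap-decreasing move is.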
optimization  benchmarking  heuristics  planning  nudge-targets  consider:looking-to-see  to-write-about
16 days ago
[1707.06631] Two Results on Slime Mold Computations
In this paper, we present two results on slime mold computations. The first one treats a biologically-grounded model, originally proposed by biologists analyzing the behavior of the slime mold Physarum polycephalum. This primitive organism was empirically shown by Nakagaki et al. to solve shortest path problems in wet-lab experiments (Nature'00). We show that the proposed simple mathematical model actually generalizes to a much wider class of problems, namely undirected linear programs with a non-negative cost vector.
For our second result, we consider the discretization of a biologically-inspired model. This model is a directed variant of the biologically-grounded one and was never claimed to describe the behavior of a biological system. Straszak and Vishnoi showed that it can ϵ-approximately solve flow problems (SODA'16) and even general linear programs with positive cost vector (ITCS'16) within a finite number of steps. We give a refined convergence analysis that improves the dependence on ϵ from polynomial to logarithmic and simultaneously allows choosing a step size that is independent of ϵ. Furthermore, we show that the dynamics can be initialized with a more general set of (infeasible) starting points.
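As a rough illustration of the continuous Physarum dynamics (not the authors' discretization), here is the classic update dD/dt = |Q| − D on the simplest possible network: two parallel edges of different lengths between source and sink, with flow split by Kirchhoff's law. All parameter values here are arbitrary:

```python
def physarum_two_edges(L1=1.0, L2=2.0, steps=2000, dt=0.05):
    """Euler-integrate the Physarum conductivity dynamics on two parallel
    edges of lengths L1 < L2 carrying a total unit flow.  The conductivity
    of the longer edge should decay toward zero (the shortest path wins)."""
    D1 = D2 = 1.0  # initial conductivities
    for _ in range(steps):
        # pressure drop for unit source-to-sink flow (Kirchhoff's law)
        p = 1.0 / (D1 / L1 + D2 / L2)
        Q1, Q2 = (D1 / L1) * p, (D2 / L2) * p
        # conductivities grow with flow, decay without it
        D1 += dt * (abs(Q1) - D1)
        D2 += dt * (abs(Q2) - D2)
    return D1, D2
```

Running it shows the mass of "protoplasm" concentrating on the shorter edge, the behavior Nakagaki et al. observed in the wet lab.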
collective-intelligence  emergent-design  artificial-life  operations-research  performance-measure  to-write-about  to-simulate
16 days ago
[1705.01568] Adaptive Fitness Landscape for Replicator Systems: To Maximize or not to Maximize
Sewall Wright's adaptive landscape metaphor penetrates a significant part of evolutionary thinking. Supplemented with Fisher's fundamental theorem of natural selection and Kimura's maximum principle, it provides a unifying and intuitive representation of the evolutionary process under the influence of natural selection as hill climbing on the surface of mean population fitness. On the other hand, it is also well known that for many more or less realistic mathematical models this picture is a severe misrepresentation of what actually occurs. Therefore, we are faced with two questions. First, it is important to identify the cases in which the adaptive landscape metaphor actually holds exactly in the models, that is, to identify the conditions under which the system's dynamics coincide with the process of searching for a (local) fitness maximum. Second, even if the mean fitness is not maximized in the process of evolution, it is still important to understand the structure of the mean fitness manifold and see the implications of this structure for the system's dynamics. Using the classical replicator equation as a basic model, in this note we attempt to answer these two questions and illustrate our results with simple, well-studied systems.
fitness-landscapes  replicators  rather-interesting  nonlinear-dynamics  theoretical-biology  philosophy-of-science  to-write-about
16 days ago
Riddled: Still working through the backlog of irritating mockademic-journal spam
Rajesh Varma -- the egregious fuckknuckle who came up with the respect-inspiring title "PeerTechz" when he leapt aboard the parasitic-publishing band-wagon juggernaut -- is evidently making so little money from the scam that he cannot afford last names for his "Managing Editor" sockpuppets. Leaving them to languish in initial-letter anonymity.
16 days ago
[1704.08676] A quantitative assessment of the effect of different algorithmic schemes to the task of learning the structure of Bayesian Networks
One of the most challenging tasks when adopting Bayesian Networks (BNs) is learning their structure from data. This task is complicated by the huge search space of possible solutions; it is a well-known NP-hard problem, and hence approximations are required. However, to the best of our knowledge, a quantitative analysis of the performance and characteristics of the different heuristics for solving this problem has never been done before.
For this reason, in this work we provide a detailed study of the different state-of-the-art methods for structural learning on simulated data, considering BNs with both discrete and continuous variables and with different rates of noise in the data. In particular, we investigate the characteristics of different widespread scores proposed for the inference and the statistical pitfalls within them.
learning-from-data  machine-learning  statistics  algorithms  rather-interesting  inference  nudge-targets  consider:looking-to-see  consider:representation
16 days ago
[1706.04671] Feature Enhancement in Visually Impaired Images
One of the major open problems in computer vision is detection of features in visually impaired images. In this paper, we describe a potential solution using Phase Stretch Transform, a new computational approach for image analysis, edge detection and resolution enhancement that is inspired by the physics of the photonic time stretch technique. We mathematically derive the intrinsic nonlinear transfer function and demonstrate how it leads to (1) superior performance at low contrast levels and (2) a reconfigurable operator for hyper-dimensional classification. We prove that the Phase Stretch Transform equalizes the input image brightness across the range of intensities resulting in a high dynamic range in visually impaired images. We also show further improvement in the dynamic range by combining our method with the conventional techniques. Finally, our results show a method for computation of mathematical derivatives via group delay dispersion operations.
image-processing  signal-processing  algorithms  rather-interesting  performance-measure  to-write-about
16 days ago
Better Skills – An und für sich
It’s as though the Democrats are Chigurh from No Country for Old Men: you’re most likely going to die, but you do have the option of a coin toss. The Republicans don’t offer the coin toss. Which one is better? The Democrats, obviously! But if you were someone in a dying community that had been starved for jobs for a generation, the kind of place where everyone leaves if they can, would you bother getting up in the morning to pull the lever for that option?
political-economy  politics  alas
16 days ago
What is Kong? | Kong - Open-Source API Management and Microservice Management
Kong is a scalable, open source API Layer (also known as an API Gateway, or API Middleware). Kong runs in front of any RESTful API and is extended through Plugins, which provide extra functionality and services beyond the core platform.

Kong was originally built at Mashape to secure, manage and extend over 15,000 APIs & Microservices for its API Marketplace, which generates billions of requests per month for over 200,000 developers. Today Kong is used in mission critical deployments at small and large organizations.
software-development-is-not-programming  library  API  to-understand
17 days ago
Breakdown Of Modularity In Complex Networks | bioRxiv
The presence of modular organisation is a common property of a wide range of complex systems, from cellular or brain networks to technological graphs. Modularity allows some degree of segregation between different parts of the network and has been suggested to be a prerequisite for the evolvability of biological systems. In technology, modularity defines a clear division of tasks and is an explicit design target. However, many natural and artificial systems experience a breakdown in their modular pattern of connections, which has been associated with failures in hub nodes or the activation of global stress responses. In spite of its importance, no general theory of the breakdown of modularity and its implications has yet been advanced. Here we propose a new, simple model of a network landscape in which it is possible to exhaustively characterise the breakdown of modularity in a well-defined way. We find that evolution cannot reach maximally modular networks in the presence of functional and cost constraints, implying that the breakdown of modularity is an adaptive feature.
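The abstract doesn't give the landscape model, but the standard Newman-Girvan modularity score Q that studies like this build on can be sketched directly (a naive O(n²) implementation for small graphs):

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman-Girvan modularity for an undirected graph given as an edge
    list: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m = len(edges)
    comm = {n: c for c, nodes in enumerate(communities) for n in nodes}
    adjacent = set()
    for u, v in edges:
        adjacent.add((u, v))
        adjacent.add((v, u))
    q = 0.0
    for i in deg:
        for j in deg:
            if comm[i] == comm[j]:
                a = 1.0 if (i, j) in adjacent else 0.0
                q += a - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)
```

For two triangles joined by a single bridge edge, the "two triangles" partition scores Q = 5/14 ≈ 0.357, while any partition that cuts across the triangles scores lower.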
fitness-landscapes  network-theory  modularity  rather-interesting  boolean-networks  to-write-about  complexology  simple-models  nudge-targets  evolvability
17 days ago
[1707.09627] Learning to Infer Graphics Programs from Hand-Drawn Images
We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LATEX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. This set of drawing primitives is like an execution trace for a graphics program. From this trace we use program synthesis techniques to recover a graphics program with constructs such as variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network, cluster drawings by use of similar high-level geometric structures, and extrapolate drawings. Taken together these results are a step towards agents that induce useful, human-readable programs from perceptual input.
generative-models  learning-by-watching  rather-interesting  machine-learning  algorithms  benchmarking  consider:looking-to-see  nudge-targets  performance-measure
18 days ago
Evidence that ancient farms had very different origins than previously thought | Ars Technica
École française d'Extrême-Orient archaeologist Damian Evans, a co-author on the Nature paper, said that it wasn't until a recent conference brought international researchers together that they realized they'd discovered a global pattern. Very similar evidence for ancient farming could be seen in equatorial Africa, South Asia, and Southeast Asia. Much later, people began building "garden cities" in these same regions, where they lived in low-density neighborhoods surrounded by cultivated land.

Evans, Roberts, and their colleagues aren't just raising questions about where cities originated. More importantly, Roberts told Ars via email, they are challenging the idea of a "Neolithic revolution" in which the shift to city life happened in just a few hundred years. In the tropics, there was no bright line between a nomadic existence and agricultural life. When humans first arrived in South Asia, Southeast Asia, and Melanesia, they spent millennia adapting to the tropics, eventually "shaping environments to meet their own needs," he said. "So rather than huge leaps, what we see is a continuation of this local knowledge and adaptation in these regions through time."
18 days ago
Vega-Lite: A High-Level Visualization Grammar
Vega-Lite is a high-level visualization grammar. It provides a concise JSON syntax for supporting rapid generation of visualizations to support analysis. Vega-Lite specifications can be compiled to Vega specifications.

Vega-Lite specifications describe visualizations as mappings from data to properties of graphical marks (e.g., points or bars). It automatically produces visualization components including axes, legends, and scales. It then determines properties of these components based on a set of carefully designed rules. This approach allows Vega-Lite specifications to be succinct and expressive, but also provide user control. As Vega-Lite is designed for analysis, it supports data transformations such as aggregation, binning, filtering, sorting, and visual transformations including stacking and faceting.
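As an illustration of the data-mark-encoding structure (written as a Python dict rather than raw JSON; the channel names, `mark`, and `data.values` follow the published grammar, though the exact schema URL is an assumption):

```python
# A minimal Vega-Lite bar-chart specification: inline data, a mark type,
# and an encoding mapping fields to the x and y channels.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v2.json",
    "data": {"values": [
        {"category": "a", "count": 4},
        {"category": "b", "count": 7},
        {"category": "c", "count": 3},
    ]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}
```

Axes, legends, and scales are all omitted here because Vega-Lite infers them from the encoding, which is the point of the grammar.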

Read our introduction article to Vega-Lite 1 on Medium, look at our talk about the new features in Vega-Lite 2, check out the documentation and take a look at our example gallery.
visualization  javascript  DSL  charts  rather-interesting  to-learn
18 days ago
This is the journal of the Society for Judgment and Decision Making (SJDM) and the European Association for Decision Making (EADM). It is open access, published on the World Wide Web, at least every two months. We have no author fees so far.
18 days ago
The relationship between crowd majority and accuracy for binary decisions
We consider the wisdom of the crowd situation in which individuals make binary decisions, and the majority answer is used as the group decision. Using data sets from nine different domains, we examine the relationship between the size of the majority and the accuracy of the crowd decisions. We find empirically that these calibration curves take many different forms for different domains, and the distribution of majority sizes over decisions in a domain also varies widely. We develop a growth model for inferring and interpreting the calibration curve in a domain, and apply it to the same nine data sets using Bayesian methods. The modeling approach is able to infer important qualitative properties of a domain, such as whether it involves decisions that have ground truths or are inherently uncertain. It is also able to make inferences about important quantitative properties of a domain, such as how quickly the crowd accuracy increases as the size of the majority increases. We discuss potential applications of the measurement model, and the need to develop a psychological account of the variety of calibration curves that evidently exist.
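The Condorcet-style baseline behind such calibration curves is easy to simulate (this is a generic sketch, not the paper's growth model): independent voters, each correct with probability p, with the majority taken as the group decision.

```python
import random

def majority_accuracy(p, n, trials=20000, seed=1):
    """Estimate the probability that a majority of n independent voters,
    each correct with probability p, makes the correct binary decision."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p for _ in range(n))
        wins += correct_votes > n / 2
    return wins / trials
```

With p = 0.6, a lone voter is right about 60% of the time while a crowd of 11 is right roughly 75% of the time; the paper's point is that real domains deviate from this idealized curve in many ways.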
via:?  wisdom-of-crowds  psychology  collective-intelligence  statistics  to-understand
18 days ago
Why Optimistic Merging Works Better - Hintjens.com
My top tips were, for what it's worth:

People before code: build the right community and it will build the right code.
Progress before consensus: look for processes that work without upfront consensus (except on rules).
Problems before solutions: use a problem-driven process.
Contracts before internals: use contracts to test behavior, not inspection of internals.
Rules before hope: write down your development process or use C4.1.
Merit before power: treat everyone fairly and equitably.
Market before product: aim for a market of competing and interoperating projects, not a single product.
software-development-is-not-programming  software-development  social-norms  social-engineering  to-try
18 days ago
Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses. - PubMed - NCBI
Scientists should be able to provide support for the absence of a meaningful effect. Currently, researchers often incorrectly conclude an effect is absent based on a nonsignificant result. A widely recommended approach within a frequentist framework is to test for equivalence. In equivalence tests, such as the two one-sided tests (TOST) procedure discussed in this article, an upper and lower equivalence bound is specified based on the smallest effect size of interest. The TOST procedure can be used to statistically reject the presence of effects large enough to be considered worthwhile. This practical primer with accompanying spreadsheet and R package enables psychologists to easily perform equivalence tests (and power analyses) by setting equivalence bounds based on standardized effect sizes and provides recommendations to prespecify equivalence bounds. Extending your statistical tool kit with equivalence tests is an easy way to improve your statistical and theoretical inferences.
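A minimal sketch of the TOST idea, using a large-sample normal approximation rather than the t-based procedure the article's R package implements:

```python
import math

def tost_z(mean, sd, n, low, high):
    """Two one-sided tests (TOST) for equivalence of a sample mean to zero:
    test H0: mean <= low and H0: mean >= high, each one-sided.  Returns the
    larger of the two one-sided p-values; equivalence is declared when that
    value falls below alpha.  Normal approximation, so only for large n."""
    se = sd / math.sqrt(n)
    z_low = (mean - low) / se    # against the lower equivalence bound
    z_high = (mean - high) / se  # against the upper equivalence bound
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    p_low = 1 - phi(z_low)       # P(Z > z_low)
    p_high = phi(z_high)         # P(Z < z_high)
    return max(p_low, p_high)
```

Note the logic: a small p-value here rejects the presence of any effect larger than the bounds, which is exactly the claim a plain nonsignificant test cannot support.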
18 days ago
Paths aren't strings — The blog of Rob Miller, Ruby developer
Pathname is part of the standard library in Ruby; it’s not an external dependency like a Gem, so you can safely rely on it being present in all your scripts. Once we’ve required the library, we can create a Pathname in Ruby by passing a string to Pathname.new.
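For readers outside Ruby, the same "paths aren't strings" point holds in Python's standard-library pathlib: path objects understand structure (parents, extensions, joining) that plain strings do not.

```python
from pathlib import Path

# A path object knows its own anatomy; a string would need regexes.
p = Path("reports/2017/august/summary.txt")
assert p.suffix == ".txt"
assert p.name == "summary.txt"
assert p.parent == Path("reports/2017/august")
assert p.with_suffix(".csv").name == "summary.csv"

# The / operator joins path components without worrying about separators.
joined = Path("reports") / "2017" / "august"
assert joined == p.parent
```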
ruby  programming-language  library  to-learn
18 days ago
[1708.00214] Natural Language Processing with Small Feed-Forward Networks
We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
neural-networks  natural-language-processing  machine-learning  amusing  not-so-deep  to-write-about  metaheuristics
18 days ago
Building account systems – Mike’s blog
Troy Hunt recently published a blog post titled “Authentication guidance for the modern era”. It has a big pile of solid advice on what password rules your website should use, with references to formal government recommendations — always useful for convincing colleagues or a boss.
One of the projects I worked on during my time at Google was their unified account system (specifically, anti-hijacking). Login systems are a part of most websites, so reading Troy’s article inspired me to put together some advice for building them.
software-development-is-not-programming  best-practices  authentication  to-read  to-learn  security  reference
18 days ago
Natural Language Processing in Artificial Intelligence | Sigmoidal
Back in the days when a Neural Network was that scary, hard-to-learn thing which was more a mathematical curiosity than a powerful Machine Learning or Artificial Intelligence tool - there were surprisingly many relatively successful applications of classical data mining algorithms in the Natural Language Processing (NLP) domain. It seemed that problems like spam filtering or Part of Speech Tagging could be solved using rather easy and understandable models.

But not every problem can be solved this way. Simple models fail to properly capture linguistic subtleties like irony (although humans often fail at that one too), idioms, or context. Algorithms based on overall summarization (e.g. bag-of-words) turned out to be not powerful enough to capture the sequential nature of text data, whereas n-grams struggled to model general context and suffered severely from the curse of dimensionality. Even HMM-based models had trouble overcoming these issues due to their Markovian nature (memorylessness). Of course, these methods were also used when tackling more complex NLP tasks, but not to great success.
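The order-blindness of bag-of-words is easy to demonstrate: two sentences with opposite meanings collapse to the identical representation.

```python
from collections import Counter

def bag_of_words(sentence):
    """A bag-of-words representation: word counts, all order discarded."""
    return Counter(sentence.lower().split())

a = bag_of_words("dog bites man")
b = bag_of_words("man bites dog")
# a == b, even though the sentences mean very different things
```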
natural-language-processing  neural-networks  representation  machine-learning  rather-interesting  to-write-about
19 days ago
Why Coase’s Penguin didn’t fly * — Crooked Timber
This is a very simple model, but it arguably represents many social relationships. One business can extract much better terms from another if it is the only customer (or supplier) for a service. Peasants reportedly did much better in relations with lords after the Black Death since there were fewer of them, and lords had less opportunity to play them off each other. Very often, breakdown values depend on exit options. The more exit options you have, the less likely you are to be badly hurt if coordination fails. And the more exit options you have, the better able you are to bargain, so that you end up at the outcome that you prefer, rather than the outcome that the other party prefers.

What this means, if you take it seriously, is that Coaseian coordination is a special case of bargaining. Broadly speaking, Coaseian processes will lead to efficient outcomes only under very specific circumstances – when the actors have symmetrical breakdown values, as in the first game, so that neither of them is able to prevail over the other. More simply put, the Coase transaction cost account of how efficient institutions emerge will only work when all actors are more or less equally powerful. Under these conditions, it is perfectly alright to assume, as Coase (and Benkler by extension) do, that efficiency considerations rather than power relations will drive change. In contrast, where there are significant differences of power, actors will converge on the institutions that reflect the preferences of powerful actors, even if those institutions are not the most efficient possible.
economics  markets(again)  game-theory  via:?  theory-and-practice-sitting-in-a-tree  social-dynamics
19 days ago
Stumbling and Mumbling: The ideology of "the market"
For example, (many) capitalists have bargaining power and (many) workers do not. This means that, generally speaking, capitalists exploit labour.

And the “market” has given us a relative decline in low-skilled pay since the 1980s. This isn’t wholly due to technical developments but to changes in power, such as the decline of trades-unions and welfare state and adoptions of surveillance technologies that have reduced the efficiency wage element of their pay.

Similarly, “market forces” have given us stagnating real wages over the last ten years. But again talk of the “market” disguises what are in fact dysfunctional emergent features of capitalism – the stagnant labour productivity that has arisen from, among other things, low innovation and capital spending.

What’s more, “demand” is in part an ideological construct. Bosses are well-paid in part because of an ideological belief in the transformative power of leadership – a belief that isn’t wholly backed by facts. And carers and cleaners are poorly paid because of an ideology which devalues their work.

Talk of “the market” is often question-begging; it begs the question of how, exactly, prices and wages are determined in the market. The answer usually involves some element of power.

Now, Hall, Damazer and Purnell are not stupid men and they probably fancy themselves – not perhaps wholly without justification - as among the more liberal and humane members of the ruling cadre. And yet they are guilty of an unreflecting inability to see that market relationships are also power relationships. In this of course, they are not unusual. I’ve long complained that centrists and “liberals” have a blindspot about power; we saw just this in this week’s Taylor report for example.

This blindspot, of course, serves the interests of the rich well. The BBC is not impartial.
history  economics  markets  neoliberalism
19 days ago
[1707.00044] Learning Fair Classifiers: A Regularization-Inspired Approach
We present a regularization-inspired approach for reducing bias in learned classifiers. In particular, we focus on binary classification tasks over individuals from two populations, where, as our criterion for fairness, we wish to achieve similar false positive rates in both populations, and similar false negative rates in both populations. As a proof of concept, we implement our approach and empirically evaluate its ability to achieve both fairness and accuracy, using the COMPAS scores data for prediction of recidivism.
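The quantities the regularizer targets (group-wise false positive and false negative rates) can be computed directly; this sketch computes only the rates, not the authors' regularized training procedure:

```python
def group_rates(y_true, y_pred, group):
    """Per-group (FPR, FNR) for binary labels and predictions.  The paper's
    fairness criterion asks that the gaps between the two groups' rates be
    small; a regularizer would penalize those gaps during training."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        neg = sum(1 for i in idx if y_true[i] == 0)
        pos = sum(1 for i in idx if y_true[i] == 1)
        rates[g] = (fp / neg if neg else 0.0, fn / pos if pos else 0.0)
    return rates
```

A classifier can be accurate overall while the two groups' (FPR, FNR) pairs diverge sharply, which is exactly the failure mode the COMPAS debate revolves around.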
performance-measure  machine-learning  classification  rather-interesting  define-your-terms  via:cshalizi  to-write-about  benchmarking  constraint-satisfaction
19 days ago
[1703.02261] An annotated bibliography on 1-planarity
The notion of 1-planarity is among the most natural and most studied generalizations of graph planarity. A graph is 1-planar if it has an embedding where each edge is crossed by at most one other edge. The study of 1-planar graphs dates back more than fifty years and, recently, it has attracted increasing attention in the areas of graph theory, graph algorithms, graph drawing, and computational geometry. This annotated bibliography aims to provide a guiding reference to researchers who want to have an overview of the large body of literature about 1-planar graphs. It reviews the current literature covering various research streams about 1-planarity, such as characterization and recognition, combinatorial properties, and geometric representations. As an additional contribution, we offer a list of open problems on 1-planar graphs.
graph-theory  graph-layout  rather-interesting  computational-complexity  open-problems  nudge-targets  consider:looking-to-see
21 days ago
[1705.00055] Charting the Complexity Landscape of Waypoint Routing
Modern computer networks support interesting new routing models in which traffic flows from a source s to a destination t can be flexibly steered through a sequence of waypoints, such as (hardware) middleboxes or (virtualized) network functions, to create innovative network services like service chains or segment routing. While the benefits and technological challenges of providing such routing models have been articulated and studied intensively in recent years, much less is known about the underlying algorithmic traffic routing problems. This paper shows that the waypoint routing problem features a deep combinatorial structure, and we establish interesting connections to several classic graph theoretical problems. We find that the difficulty of the waypoint routing problem depends on the specific setting, and chart a comprehensive landscape of the computational complexity. In particular, we derive several NP-hardness results, but we also demonstrate that exact polynomial-time algorithms exist for a wide range of practically relevant scenarios.
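In the easiest settings, waypoint routing reduces to chaining shortest paths; a naive sketch (deliberately ignoring the capacity and disjointness constraints that make the general problem NP-hard):

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path in a weighted digraph {node: [(neighbor, weight), ...]}.
    Assumes dst is reachable from src."""
    dist, prev, seen = {src: 0}, {}, set()
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

def route_via_waypoints(adj, s, t, waypoints):
    """Chain shortest paths s -> w1 -> ... -> wk -> t in the given order."""
    stops = [s] + list(waypoints) + [t]
    total, full = 0, [s]
    for a, b in zip(stops, stops[1:]):
        path, d = dijkstra(adj, a, b)
        full += path[1:]
        total += d
    return full, total
```

The interesting cases in the paper are precisely the ones this sketch gets wrong: when waypoint order is free, or when segments must not reuse edges, the problem connects to classic hard graph problems.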
network-theory  routing  networks  traffic  rather-interesting  to-write-about  to-simulate  computational-complexity
22 days ago
Scholarly Context Adrift: Three out of Four URI References Lead to Changed Content
Increasingly, scholarly articles contain URI references to “web at large” resources including project web sites, scholarly wikis, ontologies, online debates, presentations, blogs, and videos. Authors reference such resources to provide essential context for the research they report on. A reader who visits a web at large resource by following a URI reference in an article, some time after its publication, is led to believe that the resource’s content is representative of what the author originally referenced. However, due to the dynamic nature of the web, that may very well not be the case. We reuse a dataset from a previous study in which several authors of this paper were involved, and investigate to what extent the textual content of web at large resources referenced in a vast collection of Science, Technology, and Medicine (STM) articles published between 1997 and 2012 has remained stable since the publication of the referencing article. We do so in a two-step approach that relies on various well-established similarity measures to compare textual content. In a first step, we use 19 web archives to find snapshots of referenced web at large resources that have textual content that is representative of the state of the resource around the time of publication of the referencing paper. We find that representative snapshots exist for about 30% of all URI references. In a second step, we compare the textual content of representative snapshots with that of their live web counterparts. We find that for over 75% of references the content has drifted away from what it was when referenced. These results raise significant concerns regarding the long term integrity of the web-based scholarly record and call for the deployment of techniques to combat these problems.
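One of the simplest of the "well-established similarity measures" such a study can rely on is Jaccard similarity over word sets:

```python
def jaccard(text_a, text_b):
    """Jaccard similarity of two documents' word sets: |A ∩ B| / |A ∪ B|.
    1.0 means identical vocabularies; values near 0 suggest content drift."""
    a = set(text_a.lower().split())
    b = set(text_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0
```

Comparing an archived snapshot against the live page with a measure like this (plus thresholds tuned on known-stable pages) is the basic shape of the two-step approach described above.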
28 days ago
Laudator Temporis Acti: Criticism
The civilized mind is naturally critical: bred by the interaction of various studies, criticism is the peculiar mark of high civilization. But criticism is itself a composite thing: restlessness of intellect is a part of it, but so is a wariness against delusion: curiosity and suspicion are both necessary elements.
28 days ago
[1608.06993] Densely Connected Convolutional Networks
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and models are available at this https URL.
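The connectivity count and the concatenation pattern can be sketched in toy form (plain lists standing in for feature maps; this is the wiring idea only, not a usable network):

```python
def dense_connections(L):
    """A dense block with L layers has L*(L+1)/2 direct connections,
    versus L for a plain chain, since layer i receives i inputs."""
    return L * (L + 1) // 2

def dense_block(x, layers):
    """Toy forward pass: each layer sees the concatenation of the original
    input and every previous layer's output.  `layers` are arbitrary
    list -> list functions standing in for conv layers."""
    features = [x]
    for layer in layers:
        concatenated = [v for f in features for v in f]
        features.append(layer(concatenated))
    return features
```

With three "layers" that each just sum their inputs, an input of [1, 2] produces outputs [3], [6], [12]: each layer really does consume everything before it.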
4 weeks ago
Humboldt’s New World Landscape | The Hudson Review
Church must have felt, in reading these lines, as if Humboldt were addressing him directly. In fact, as my italics attempt to suggest, he was sketching what would become Church’s quintessential subject and theme: the torrid zones of the Amazon and Orinoco, the snowcapped volcanoes of the Andes, and all those gigantic panoramas of inexhaustible fecundity and diversity that were to Humboldt at the very center of the Creation. He went on, even more explicitly, “Are we not justified in hoping that land­scape painting will flourish with a new and hitherto unknown brilliancy when artists of merit shall . . . [venture], far in the interior of continents, in the humid mountain valleys of the tropical world, to seize, with the genuine freshness of a pure and youthful spirit, on the true image of the varied forms of nature?”
aesthetics  social-networks  history  criticism  art  influence-and-the-anxiety-thereof
4 weeks ago
3quarksdaily: The Origins of Hunter S. Thompson’s Loathing and Fear
And I was thinking, God damn you nazi bastards I really hope you win it, because letting your kind of human garbage flood the system is about the only way to really clean it out. Another four years of Ike would have brought on a national collapse, but one year of Goldwater would have produced a revolution.
politics  history  fascism  here-we-are
4 weeks ago
Directory of Open Access Journals
Directory of Open Access Journals (DOAJ)
DOAJ is a community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals.
4 weeks ago
[1703.02826] A Linear Extrinsic Calibration of Kaleidoscopic Imaging System from Single 3D Point
This paper proposes a new extrinsic calibration of a kaleidoscopic imaging system by estimating the normals and distances of the mirrors. The problem to be solved in this paper is the simultaneous estimation of all mirror parameters, consistent throughout multiple reflections. Unlike conventional methods utilizing a pair of direct and mirrored images of a reference 3D object to estimate the parameters on a per-mirror basis, our method reduces the simultaneous estimation problem to solving a linear set of equations. The key contribution of this paper is to introduce a linear estimation of multiple mirror parameters from kaleidoscopic 2D projections of a single 3D point of unknown geometry. Evaluations with synthesized and real images demonstrate the performance of the proposed algorithm in comparison with conventional methods.
generative-art  linear-algebra  image-processing  rather-interesting  algorithms  performance-measure  to-write-about
4 weeks ago
OpenType-SVG color fonts
OpenType-SVG is a font format in which an OpenType font has all or just some of its glyphs represented as SVG (scalable vector graphics) artwork. This allows the display of multiple colors and gradients in a single glyph. Because of these features, we also refer to OpenType-SVG fonts as “color fonts”.

OpenType-SVG fonts allow text to be shown with these graphic qualities, while still allowing it to be edited, indexed, or searched. They may also contain OpenType features that allow glyph substitution or alternate glyph styles.

Color fonts like Trajan Color Concept and EmojiOne Color will appear just like typical fonts in your programs’ font menus — but they may not display their full potential, since many programs don’t yet have full support for the color components. If your software program doesn’t support the SVG artwork within the fonts, glyphs will fall back to a solid black style. Color can still be applied to this fallback style, as it will work like a typical OpenType font.
typography  oh-dear  SVG  opentype  to-learn  to-make
4 weeks ago
fonts, typefaces and all things typographical — I love Typography (ILT)
Color fonts or chromatic type are not new. The first production types appeared in the 1840s, reaching a peak of precision and complexity a few decades later as efficiencies in printing enabled greater creative freedom. In 1874 William H. Page of Greeneville, Connecticut, published his 100-page Specimens of Chromatic Type & Borders that still has the power to mesmerize designers today.
typography  design  lovely  technical-desires
4 weeks ago
Privilege-Centered Design: Design Observer
Yes, and that imagined reader holds sway in your head whether you consciously construct them or not. Opt out of crafting an image of your reader, and your brain will step in as a proxy. You’ll end up writing for yourself. A romantic notion that runs counter to the primary circuitry of the written word. Language evolved from a hardwired need to connect with others. That doesn't mean internalizing your audience is easy. Pulling them in requires focused effort. They are not you.

We also talk about customer empathy in product design. Crowing like we invented the sun, products MUST be built in direct response to a customer’s needs! We stand quietly in their space, hoping our gravity doesn't alter their orbit. Straining to notice what it feels like to notice them. Running our discoveries back to cold conference rooms. But experiences are overflowing with data—more than our heads could possibly hold. The details splash out until our own story is all that remains. Just like every other survivable trait, memory’s lead gene is efficiency. We forget details, but we also invent them. Whatever it takes to shore up our narrative. The customer we construct in our imagination bears the planetary weight of shaping what will be. We are responsible for the butterflies in our brain and the hurricanes they create in the world.
user-experience  design  features  empathy  privilege  the-mangle-is-also-people
4 weeks ago
Empathy in Book Publishing: Design Observer

My classroom experiences didn't uncover any 80 million dollar ideas, although if I ever have the opportunity to design a teacher’s grading book, I will lobby for lay-flat binding—or, better yet, an app. Looking at the world through my customer’s eyes did, however, change the way I view my job. Back in the office discussions about “the customer” were no longer abstract. I now felt a responsibility to advocate for the teachers and students I had met. The act of immersing in my customer’s experience suddenly felt as fundamental to my charge as the act of kerning.

You don't need the letters U and X in your job title to adopt a customer-needs perspective. All designers, no matter their level, should count fostering customer empathy in themselves and others as a baseline job requirement—doubly so if you work in book publishing where human-centered design is seldom discussed. Invest energy into spotting your readers’ peas. Internalize their perspective. Champion their needs. I can’t promise it will make you rich, but it will imbue your work with a greater sense of service and purpose.
books  user-experience  publishing  learning-by-doing  empathy  the-mangle-is-other-people
4 weeks ago
[1707.06374] Document Listing on Repetitive Collections with Guaranteed Performance
We consider document listing on string collections, that is, finding in which strings a given pattern appears. In particular, we focus on repetitive collections: a collection of size N over alphabet [1, σ] is composed of D copies of a string of size n, and s single-character or block edits are applied on ranges of copies. We introduce the first document listing index with size Õ(n + s), precisely O((n log σ + s log² N) log D) bits, and with useful worst-case time guarantees: given a pattern of length m, the index reports the ndoc strings where it appears in time O(m² + m log^{1+ε} N · ndoc), for any constant ε > 0. Our technique is to augment a range data structure that is commonly used in grammar-based indexes, so that instead of retrieving all the pattern occurrences, it computes useful summaries on them. We show that the idea has independent interest: we introduce the first grammar-based index that, on a text T[1, N] with a grammar of size r, uses O(r log N) bits and counts the number of occurrences of a pattern P[1, m] in time O(m² + m log^{2+ε} r), for any constant ε > 0.
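To fix the problem statement the abstract starts from, here is the naive baseline the compressed index improves upon: a linear scan over the whole collection that reports which documents contain the pattern. The collection and pattern below are illustrative, not from the paper.

```python
def document_listing(collection, pattern):
    """Naive document listing: report the indices of the strings that
    contain `pattern`. This O(N * m) scan is the baseline the paper's
    Õ(n + s)-bit index answers without touching the whole collection."""
    return [i for i, s in enumerate(collection) if pattern in s]

# Toy repetitive collection: near-copies of one base string
docs = ["abracadabra", "banana", "cadabra"]
print(document_listing(docs, "cad"))  # [0, 2]
```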
indexing  databases  computational-complexity  algorithms  pattern-finding  rather-interesting  to-write-about  benchmarking
4 weeks ago
Publication, Power, and Patronage: On Inequality and Academic Publishing – Critical Inquiry
Historically, university reformers from the eighteenth to the twenty-first century have touted publication as a corrective to concentrations of power and patronage networks. An increased emphasis on more purportedly transparent or objective measures provided by publication have long been cast as an antidote to cronyism and connections. As we will show, however, current data suggest that publication patterns largely reproduce significant power imbalances within the system of academic publishing. Systems of academic patronage as well as those of cultural and social capital seem not only to have survived but flourished in the modern bureaucratic university, even if in different form.[5] When, as our data show, Harvard University and Yale University exercise such a disproportionate influence on both hiring and publishing patterns, academic publishing seems less a democratic marketplace of ideas and more a tightly controlled network of patronage and cultural capital. Just as output-focused advancement is older than we might expect, patronage-based advancement is more persistent than we might like to acknowledge.
4 weeks ago
[1705.10201] Machine Learned Learning Machines
There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. Though these are methods that typically operate separately, we combine evolutionary adaptation and machine learning into one approach. Our focus is on machines that can learn during their lifetime, but instead of equipping them with a machine learning algorithm we aim to let them evolve their ability to learn by themselves. We use evolvable networks of probabilistic and deterministic logic gates, known as Markov Brains, as our computational model organism. The ability of Markov Brains to learn is augmented by a novel adaptive component that can change its computational behavior based on feedback. We show that Markov Brains can indeed evolve to incorporate these feedback gates to improve their adaptability to variable environments. By combining these two methods, we now also implemented a computational model that can be used to study the evolution of learning.
hey-I-know-this-person  machine-learning  local  evolutionary-algorithms  metaheuristics  to-write-about
4 weeks ago
“Neoliberalism” isn’t an empty epithet. It’s a real, powerful set of ideas. - Vox
You may not believe in neoliberalism, but neoliberalism believes in you
Why does this matter if you couldn’t care less about either the IMF or subjectivity? The 2016 election brought forward real disagreements in the Democratic Party, disagreements that aren’t reducible to empirical arguments, or arguments about what an achievable political agenda might be. These disagreements will become more important as we move forward, and they can only be answered with an understanding of what the Democratic Party stands for.

One highly salient conflict was the fight over free college during the Democratic primary. It wasn’t about the price tag; it was about the role the government should play in helping to educate the citizenry. Clinton originally argued that a universal program would help people who didn’t need help — why pay for Donald Trump’s kids? This reflects the focus on means-tested programs that dominated Democratic policymaking over the past several decades. (Some of the original people who wanted to reinvent the Democratic Party, such as Charles Peters in his 1983 article “A Neoliberal’s Manifesto,” called for means-testing Social Security so it served only the very poor.)
neoliberalism  capitalism  politics  postnormality  economics  public-policy  discourse
4 weeks ago
n-gate.com. we can't both be right.
An internet lectures passersby about webshit. The lectures are sprinkled with advertisements for an HTTP server that runs as root. We are expected to take security advice from this person seriously.

We do not.
web-applications  to-do  best-practices  devops  le-sigh
4 weeks ago
Graph Convolutional Networks | Thomas Kipf | PhD Student @ Univ. of Amsterdam
Many important real-world datasets come in the form of graphs or networks: social networks, knowledge graphs, protein-interaction networks, the World Wide Web, etc. (just to name a few). Yet, until recently, very little attention has been devoted to the generalization of neural network models to such structured datasets.

In the last couple of years, a number of papers re-visited this problem of generalizing neural networks to work on arbitrarily structured graphs (Bruna et al., ICLR 2014; Henaff et al., 2015; Duvenaud et al., NIPS 2015; Li et al., ICLR 2016; Defferrard et al., NIPS 2016; Kipf & Welling, ICLR 2017), some of them now achieving very promising results in domains that have previously been dominated by, e.g., kernel-based methods, graph-based regularization techniques and others.

In this post, I will give a brief overview of recent developments in this field and point out strengths and drawbacks of various approaches.
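The layer-wise propagation rule at the heart of these models (in the Kipf & Welling formulation) can be sketched in a few lines of NumPy: symmetrically normalize the adjacency matrix with self-loops, then mix neighbor features through a learned weight matrix. The toy graph, features, and weights below are illustrative, not taken from the post.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer, Kipf & Welling (ICLR 2017) style:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), where A is the adjacency
    matrix, H the node features, and W the learned weights."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # degrees of A_hat
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

# Toy example: a 3-node path graph, 2 input and 2 output features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3, 2)          # one-hot-ish input features
W = np.ones((2, 2))       # stand-in for trained weights
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Stacking a few such layers (with different W per layer) gives each node a receptive field of its multi-hop neighborhood, which is the core idea the post surveys.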
representation  graphs  neural-networks  machine-learning  nudge-targets  to-write-about
4 weeks ago
How printed books entered a new chapter of fortune
In February, Waterstones returned to profit for the first time in seven years, citing a return to “traditional bookselling” as the key to its resurgence. WH Smith has also found joy in books, regularly highlighting the success of spoof humour titles, such as Enid Blyton parody Five on Brexit Island, as a key sales driver.

“The print book revival continues as consumers, young and old, appear to have established a new appreciation for this traditional format,” said Rebecca McGrath, Mintel’s senior media analyst.
publishing  books  markets-in-everything-that-is-boring  disintermediation-responses  artisanal-world  postnormality  to-write-about
4 weeks ago
‘Predatory’ Open Access Journals as Parody: Exposing the Limitations of ‘Legitimate’ Academic Publishing | Bell | tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society
Abstract: The concept of the 'predatory' publisher has today become a standard way of characterising a new breed of open access journals that seem to be more concerned with making a profit than disseminating academic knowledge. This essay presents an alternative view of such publishers, arguing that if we treat them as parody instead of predator, a far more nuanced reading emerges. Viewed in this light, such journals destabilise the prevailing discourse on what constitutes a 'legitimate' journal, and, indeed, the nature of scholarly knowledge production itself. Instead of condemning them outright, their growth should therefore encourage us to ask difficult but necessary questions about the commercial context of knowledge production, prevailing conceptions of quality and value, and the ways in which they privilege scholarship from the 'centre' and exclude that from the 'periphery'.
4 weeks ago
Improving the Realism of Synthetic Images - Apple
Most successful examples of neural nets today are trained with supervision. However, to achieve high accuracy, the training sets need to be large, diverse, and accurately annotated, which is costly. An alternative to labelling huge amounts of data is to use synthetic images from a simulator. This is cheap as there is no labeling cost, but the synthetic images may not be realistic enough, resulting in poor generalization on real test images. To help close this performance gap, we’ve developed a method for refining synthetic images to make them look more realistic. We show that training models on these refined images leads to significant improvements in accuracy on various machine learning tasks.
apple  machine-learning  image-processing  generative-art  rather-interesting  to-write-about
4 weeks ago
Glitchet: All Issues
PAST ISSUES
Review any issues you missed, or just see what sort of stuff we send out.
art  digital-art  criticism  to-subscribe  rather-interesting  essays
4 weeks ago
Using Metal 2 for Compute - WWDC 2017 - Videos - Apple Developer
Metal Performance Shaders (MPS) provides a highly tuned library of functions that extend the power of the GPU for more than just graphics. With Metal 2, MPS comes to the Mac along with an expanded set of capabilities. Learn how to tap into the latest image processing operations, perform linear algebra operations, and accelerate machine learning algorithms via new primitives and a graph API to build and execute neural networks on the GPU.
software-development  neural-networks  library  have-watched  to-write-about  nudge-targets  consider:integrating
6 weeks ago
Beyond subjective and objective in statistics [PDF]
Decisions in statistical data analysis are often justified, criticized, or avoided using concepts of objectivity and subjectivity. We argue that the words “objective” and “subjective” in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity replaced by transparency, consensus, impartiality, and correspondence to observable reality, and subjectivity replaced by awareness of multiple perspectives and context dependence. Together with stability, these make up a collection of virtues that we think is helpful in discussions of statistical foundations and practice. The advantage of these reformulations is that the replacement terms do not oppose each other and that they give more specific guidance about what statistical science strives to achieve. Instead of debating over whether a given statistical method is subjective or objective (or normatively debating the relative merits of subjectivity and objectivity in statistical practice), we can recognize desirable attributes such as transparency and acknowledgment of multiple perspectives as complementary goals. We demonstrate the implications of our proposal with recent applied examples from pharmacology, election polling, and socioeconomic stratification. The aim of this paper is to push users and developers of statistical methods toward more effective use of diverse sources of information and more open acknowledgement of assumptions and goals.
statistics  philosophy-of-science  data-analysis  looking-to-see  hypothesis-testing  learning  to-read
6 weeks ago
The Rise of the Thought Leader | New Republic
In his book The Ideas Industry, the political scientist and foreign policy blogger Daniel W. Drezner broadens the focus to include the conditions in which ideas are formed, funded, and expressed. Describing the public sphere in the language of markets, he argues that three major factors have altered the fortunes of today’s intellectuals: the evaporation of public trust in institutions, the polarization of American society, and growing economic inequality. He correctly identifies the last of these as the most important: the extraordinary rise of the American superrich, a class interested in supporting a particular genre of “ideas.”

The rich have, Drezner writes, empowered a new kind of thinker—the “thought leader”—at the expense of the much-fretted-over “public intellectual.” Whereas public intellectuals like Noam Chomsky or Martha Nussbaum are skeptical and analytical, thought leaders like Thomas Friedman and Sheryl Sandberg “develop their own singular lens to explain the world, and then proselytize that worldview to anyone within earshot.” While public intellectuals traffic in complexity and criticism, thought leaders burst with the evangelist’s desire to “change the world.” Many readers, Drezner observes, prefer the “big ideas” of the latter to the complexity of the former. In a marketplace of ideas awash in plutocrat cash, it has become “increasingly profitable for thought leaders to hawk their wares to both billionaires and a broader public,” to become “superstars with their own brands, sharing a space previously reserved for moguls, celebrities, and athletes.”
think-leading  cultural-norms  cultural-assumptions  culture-war  public-policy  discourse-and-dialectic-sittin-in-a-tree
6 weeks ago
[1706.08224] Do GANs actually learn the distribution? An empirical study
Do GANs (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of (Goodfellow et al 2014) suggested they do, if they were given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al (to appear at ICML 2017) raised doubts whether the same holds when the discriminator has finite size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support; in other words, the training objective is unable to prevent mode collapse. The current note reports experiments suggesting that such problems are not merely theoretical. It presents empirical evidence that well-known GANs approaches do learn distributions of fairly low support, and thus presumably are not learning the target distribution. The main technical contribution is a new proposed test, based upon the famous birthday paradox, for estimating the support size of the generated distribution.
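The birthday-paradox test the note proposes can be sketched for a discrete generator: count how often a batch of samples contains a duplicate, then invert the collision probability P ≈ 1 − exp(−s(s−1)/2N) to estimate the support size N. This toy version with exact duplicates is illustrative only; the paper's test for images uses near-duplicate detection, which is elided here.

```python
import math
import random

def estimate_support(sample_fn, batch_size=50, trials=500):
    """Estimate the support size of a generator via the birthday paradox:
    measure the empirical probability p that a batch of `batch_size`
    samples contains a duplicate, then solve
    p = 1 - exp(-s(s-1)/(2N)) for N."""
    collisions = sum(
        1 for _ in range(trials)
        if len({sample_fn() for _ in range(batch_size)}) < batch_size
    )
    p = collisions / trials
    if p in (0.0, 1.0):
        return None  # batch size too small or too large for this support
    s = batch_size
    return -s * (s - 1) / (2.0 * math.log(1.0 - p))

# Toy "generator" with a known support of 1000 distinct outputs
rng = random.Random(0)
est = estimate_support(lambda: rng.randrange(1000))
print(est)  # approximately 1000 (estimate varies with batch/trials)
```

A mode-collapsed generator would yield frequent collisions even at small batch sizes, and hence a small estimated N relative to the training set.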
deep-learning  neural-networks  looking-to-see  probability-theory  algorithms  experiment  rather-interesting  to-write-about
6 weeks ago