Sturgeon’s Biases – Quietstars – Medium
Some smart social psychologist, anthropologist or sociologist probably wrote about this seventy years ago. So I’m just going to ramble about it here for a bit, and hope that somebody smarter than I am can point me to that paper. So I know what to call it.
communities-of-practice  cultural-assumptions  cultural-norms  essay  social-norms  rather-interesting 
3 days ago
[1804.01622] Image Generation from Scene Graphs
To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicit reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.
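As a rough illustration of the graph-convolution step the abstract mentions, here is a minimal sketch (not the paper's architecture; the dimensions, weight shape, and aggregation rule are assumptions):

```python
# Sketch of one graph-convolution pass over a scene graph. Each edge is a
# (subject, predicate, object) triple; a message is computed per edge and
# averaged into the embeddings of the two objects it connects.
import numpy as np

def graph_conv(obj_vecs, pred_vecs, edges, W):
    """obj_vecs: (n, d); pred_vecs: (m, d); W: (d, 3d); edges: (s, p, o) triples."""
    out = np.zeros_like(obj_vecs)
    counts = np.zeros(len(obj_vecs))
    for s, p, o in edges:
        msg = np.tanh(W @ np.concatenate([obj_vecs[s], pred_vecs[p], obj_vecs[o]]))
        out[s] += msg; out[o] += msg
        counts[s] += 1; counts[o] += 1
    return out / np.maximum(counts, 1)[:, None]

rng = np.random.default_rng(0)
d = 8
objs, preds = rng.normal(size=(3, d)), rng.normal(size=(2, d))
edges = [(0, 0, 1), (1, 1, 2)]   # e.g. "man riding horse", "horse on grass"
objs = graph_conv(objs, preds, edges, rng.normal(size=(d, 3 * d)))
```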
deep-learning  image-processing  generative-art  generative-models  turning-the-handle-the-other-way  to-write-about  to-do 
4 days ago
Exactly how bad is the 13 times table? | The Aperiodical
Along the way, OEIS editor Charles R Greathouse IV added this intriguing conjecture:

Conjecture: a(n) ≤ N for all n. Perhaps N can be taken as 81.
number-theory  mathematical-recreations  open-questions  to-write-about  consider:feature-discovery 
4 days ago
Letting neural networks be weird
Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.

This often works really well - machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.
machine-learning  noteworthy  to-invite-to-GPTP 
4 days ago
How 'Deaf President Now' Changed America - Pacific Standard
The lessons from Deaf President Now should be clear. Its success, fueled by direct action, shows that rights have to be claimed rather than given. Deaf President Now stands as a watershed moment in the history not just of Deaf and disability rights, but also of American civil rights more broadly. As I spoke to people who had been instrumental in the protests, and the current president of Gallaudet, I heard one additional message: a fear that too few Americans even remember this story.
civil-rights  protest  public-policy  collective-action  activism 
4 days ago
Laudator Temporis Acti: The Poligs of the Oern Vent
'The Poligs of the Oern Vent in dugard to the Brounincinl Coutrick is the colic of the unscrifulouse Gawler.' So ran the printed slip technically known as a 'rough proof'. The Aryan had surpassed himself; but, as he read, light filled the mind of the Reader. He had written — 'The policy of the Government in regard to the Provincial Contract is the policy of the unscrupulous lawyer', and, behold, with a mere turn of his wrist, the Aryan had glorified, and enriched with the wealth of an exuberant Orientalism that simple sentence, till it stood forth a gem, or rather a collection of gems! 'The Poligs of the Oern Vent' — George Meredith might have woven those words into the Shaving of Shagpat, and so made that dazzling piece of broidery yet more gorgeous. 'Brounincinl Coutrick' would suit admirably the manager of a travelling-circus. Conceive the effect, on white and red posters of: — 'To-night! To-night!! To-night!!! The Brounincinl Coutrick!' The words would draw thousands — millions. 'Unscrifulouse Gawler' again would furnish an absolutely unique and startling title for a semi-humourous, semi-grotesque, wholly-horrible story, of the American school, let us say. Think for a moment what fashion of ghoulo-demoniacal, triple-Quilpian, Jekyll-and-Hydeous character, the 'unscrifulouse Gawler' would be. Out of the incult wantonings of a Punjabi Mahommedan with a box of type, had been born the suggestions of three Brilliant Notions, did any man care to use them, exactly as ideas for patterns are conveyed to the designer by the chance-ruled twists of the Kaleidescope.
nanohistory  generative-art  inspiration  unexpected-similarities  Lewis-Carroll  seek-ye-the-original 
10 days ago
Myth & Moor: The Wild Time of the Sickbed
Yet the loss is not really of time itself, but of one particular form of it: the "productive" time prized in our commercial culture, which privileges results and finished products over process. "Time is money," as the old saying goes, and a sick person's time is not worth a bad penny. Yet paradoxically, when we're in poor health we are often rich in time, but in the wrong kind of time: the "unproductive" time of the sickbed. After a lifetime lived in the liminal space between disability and good health, I have come to believe "unproductive" time has its place and its value as well.
art  creativity  essay 
10 days ago
mathrecreation: Tigers and Treasure
The graph below shows all 196 possible puzzles. The puzzles that lead to contradictions are coloured as white squares, the puzzles that do not lead to contradictions are coloured in black. We can see a white strip across the bottom: these are all the puzzles where door 2 is labelled with statement 1. Can you find the statement that always leads to contradictions for door 1? There are 131 puzzles that are "good" in the sense that they do not lead to contradictions.
mathematical-recreations  logic  looking-to-see  to-write-about  consider:robustness  nudge-targets 
10 days ago
[1803.01422] DAGs with NO TEARS: Smooth Optimization for Structure Learning
Estimating the structure of directed acyclic graphs (DAGs, also known as Bayesian networks) is a challenging problem since the search space of DAGs is combinatorial and scales superexponentially with the number of nodes. Existing approaches rely on various local heuristics for enforcing the acyclicity constraint and are not well-suited to general-purpose optimization packages. In this paper, we introduce a fundamentally different strategy: We formulate the structure learning problem as a smooth, constrained optimization problem over real matrices that avoids this combinatorial constraint entirely. This is achieved by a novel characterization of acyclicity that is not only smooth but also exact. The resulting nonconvex, constrained program involves smooth functions whose gradients are easy to compute and only involve elementary matrix operations. Using existing black-box optimization routines, our method performs a global search for an optimal DAG; it can be implemented in about 50 lines of Python and outperforms existing methods without imposing any structural constraints.
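The paper's smooth, exact acyclicity characterization is h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG. A minimal sketch of just that constraint function (not the full constrained optimizer):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    """h(W) = tr(exp(W ∘ W)) - d: smooth, and zero iff W is acyclic."""
    return np.trace(expm(W * W)) - W.shape[0]

W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])   # single edge 0 -> 1: acyclic
W_cyc = np.array([[0.0, 1.5], [0.7, 0.0]])   # edges both ways: a 2-cycle

print(acyclicity(W_dag))  # ~ 0.0
print(acyclicity(W_cyc))  # > 0, penalizing the cycle
```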
graph-theory  combinatorics  algorithms  rather-interesting  computational-complexity  to-understand  to-write-about  nudge-targets  consider:looking-to-see  representation 
12 days ago
The Woman Who Bested the Men at Math | History | Smithsonian
To be a woman in the Victorian age was to be weak: the connection was that definite. To be female was also to be fragile, dependent, prone to nerves and—not least—possessed of a mind that was several degrees inferior to a man’s. For much of the 19th century, women were not expected to shine either academically or athletically, and those who attempted to do so were cautioned that they were taking an appalling risk. Mainstream medicine was clear on this point: to dream of studying at the university level was to chance madness or sterility, if not both.

Transforming this received opinion took generations, a long series of scientific studies, and the determination and hard work of many thousands of women. For all that, though, it is still possible to point to one single achievement, and one single day, and say: this is when everything began to change. That day was June 7, 1890, when—for the first and only time—a woman ranked first in the mathematical examinations held at the University of Cambridge. It was the day that Philippa Fawcett placed “above the Senior Wrangler.”
nanohistory  sexism  biography  mathematics  academic-culture 
12 days ago
History Dependent Cellular Automata | Softology's Blog
I have been exploring a variety of cellular automata lately and here is another one.

This is from another idea I had. Andrew Adamatzky let me know there has been prior work using previous states, referred to as “Cellular Automata with Memory”. See these papers by Ramon Alonso-Sanz for other examples of 1D and 2D CAs using memory from previous generations.

This is a totalistic CA that uses the usual 8 immediate neighbor cells as well as the previous step's values of the current cell and its 8 neighbors. This gives a total of 17 neighbor cells that can influence the birth and survival of the cells.

I call them “History Dependent Cellular Automata” because they depend on the previous cycles’ neighbor cells as well as the usual 8 immediate neighbor cells.
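A minimal update step for such a rule might look like the following sketch (the birth/survival thresholds here are placeholders, not Softology's actual rules):

```python
# History-dependent totalistic CA: a cell's fate depends on its 8 current
# neighbors plus the previous generation's value of the cell and its 8
# neighbors (17 cells total), per the description above.
import numpy as np
from scipy.signal import convolve2d

K8 = np.ones((3, 3), dtype=int); K8[1, 1] = 0   # current: 8 immediate neighbors
K9 = np.ones((3, 3), dtype=int)                  # previous: cell + 8 neighbors

def step(curr, prev, birth={9, 10}, survive={7, 8, 9}):
    total = (convolve2d(curr, K8, mode="same", boundary="wrap")
             + convolve2d(prev, K9, mode="same", boundary="wrap"))
    born = (curr == 0) & np.isin(total, list(birth))
    stay = (curr == 1) & np.isin(total, list(survive))
    return (born | stay).astype(int)

rng = np.random.default_rng(0)
prev, curr = rng.integers(0, 2, (64, 64)), rng.integers(0, 2, (64, 64))
curr, prev = step(curr, prev), curr   # advance one generation
```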
cellular-automata  generative-art  to-write-about 
13 days ago
Context Free Art
Context Free is a program that generates images from written instructions called a grammar. The program follows the instructions in a few seconds to create images that can contain millions of shapes.
generative-art  rather-interesting  graphics  programming-language  to-understand 
13 days ago
[1804.00222] Learning Unsupervised Learning Rules
A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.
machine-learning  unsupervised-learning  rather-interesting  feature-extraction  clustering  algorithms  to-write-about 
13 days ago
[1803.05316] Seven Sketches in Compositionality: An Invitation to Applied Category Theory
This book is an invitation to discover advanced topics in category theory through concrete, real-world examples. It aims to give a tour: a gentle, quick introduction to guide later exploration. The tour takes place over seven sketches, each pairing an evocative application, such as databases, electric circuits, or dynamical systems, with the exploration of a categorical structure, such as adjoint functors, enriched categories, or toposes.
No prior knowledge of category theory is assumed.
category-theory  book  to-read  to-write-about  via:several  rather-interesting 
15 days ago
[1802.01396] To understand deep learning we need to understand kernel learning
Generalization performance of classifiers in deep learning has recently become a subject of intense study. Heavily over-parametrized deep models tend to fit training data exactly. Despite overfitting, they perform well on test data, a phenomenon not yet fully understood.
The first point of our paper is that strong performance of overfitted classifiers is not a unique feature of deep learning. Using real-world and synthetic datasets, we establish that kernel classifiers trained to have zero classification error (overfitting) or even zero regression error (interpolation) perform very well on test data.
We proceed to prove lower bounds on the norm of overfitted solutions for smooth kernels, showing that they increase nearly exponentially with the data size. Since most generalization bounds depend polynomially on the norm of the solution, this result implies that they diverge as data increases. Furthermore, the existing bounds do not apply to interpolated classifiers.
We also show experimentally that (non-smooth) Laplacian kernels easily fit random labels using a version of SGD, a finding that parallels results reported for ReLU neural networks. In contrast, fitting noisy data requires many more epochs for smooth Gaussian kernels. The observation that the performance of overfitted Laplacian and Gaussian classifiers on test data is quite similar suggests that generalization is tied to the properties of the kernel function rather than the optimization process.
We see that some key phenomena of deep learning are manifested similarly in kernel methods in the overfitted regime. We argue that progress on understanding deep learning will be difficult, until more analytically tractable "shallow" kernel methods are better understood. The experimental and theoretical results presented in this paper indicate a need for new theoretical ideas for understanding classical kernel methods.
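A quick way to reproduce the interpolation regime on a toy problem (illustrative data and kernel choice, not the paper's experiments):

```python
# Kernel ridge with near-zero regularization fits the training set (almost)
# exactly, yet can still generalize -- the phenomenon discussed above.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = KernelRidge(kernel="laplacian", alpha=1e-8)  # alpha ~ 0: interpolation
model.fit(X_tr, y_tr)
print("train error:", np.mean(np.sign(model.predict(X_tr)) != y_tr))
print("test  error:", np.mean(np.sign(model.predict(X_te)) != y_te))
```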
machine-learning  philosophy-of-engineering  looking-to-see  statistics  algorithms  explanation  to-write-about  explanatory-power-and-narrative-discovery 
15 days ago
The moving sofa problem — Dan Romik's home page
The mathematician Leo Moser posed in 1966 the following curious mathematical problem: what is the shape of largest area in the plane that can be moved around a right-angled corner in a two-dimensional hallway of width 1? This question became known as the moving sofa problem, and is still unsolved fifty years after it was first asked.
mathematical-recreations  computational-geometry  engineering-design  constraint-satisfaction  rather-interesting  nudge-targets  to-write-about 
15 days ago
Image analysis driven single-cell analytics for systems microbiology | BMC Systems Biology | Full Text
BaSCA can be used to analyze cell movies accurately and efficiently, both at high resolution (single-cell level) and at large scale (communities with many dense colonies), as needed to shed light on, e.g., how bacterial community effects and epigenetic information transfer play a role in phenomena important for human health, such as biofilm formation, the emergence of persisters, etc. Moreover, it enables studying the role of single-cell stochasticity without losing sight of community effects that may drive it.
machine-learning  image-processing  image-segmentation  cell-biology  indistinguishable-from-magic  experimental-biology  wow  algorithms 
17 days ago
ties and insight | sara hendren
Carson’s father died in 1935, followed, two years later, by her older sister, leaving Carson to care for her mother and her nieces, ages eleven and twelve; she later adopted her grandnephew, when he was orphaned at the age of four. These obligations sometimes frustrated Carson, but not half as much as they frustrate her biographers. For [these biographers], Carson’s familial obligations—in particular, the children—are nothing but burdens that “deprived her of privacy and drained her physical and emotional energy.” They mean this generously, as a way of accounting for why Carson didn’t write more, and why, except for her Sun articles, she never once submitted a manuscript on time. But caring for other people brings its own knowledge. Carson came to see the world as beautiful, wild, animal, and vulnerable, each part attached to every other part, not only through prodigious scientific research but also through a lifetime of caring for the very old and the very young, wiping a dying man’s brow, tucking motherless girls into bed, heating up dinners for a lonely little boy. The domestic pervades Carson’s understanding of nature. “Wildlife, it is pointed out, is dwindling because its home is being destroyed,” she wrote in 1938, “but the home of the wildlife is also our home.” If she’d had fewer ties, she would have had less insight.
insight  aging  caregiving  lovely  nanohistory  biography 
17 days ago
Biomimetics | Free Full-Text | A Parallel Modular Biomimetic Cilia Sorting Platform
The aquatic unicellular organism Paramecium caudatum uses cilia to swim around its environment and to graze on food particles and bacteria. Paramecia use waves of ciliary beating for locomotion, intake of food particles and sensing. There is some evidence that Paramecia pre-sort food particles by discarding larger particles while ingesting the particles that match their mouth cavity. Most prior attempts to mimic cilia-based manipulation merely mimicked the overall action rather than the beating of cilia. The majority of massively parallel actuators are controlled by a central computer; however, a distributed control would be far more true-to-life. We propose and test a distributed parallel cilia platform where each actuating unit is autonomous, yet exchanges information with its closest neighboring units. The units are arranged in a hexagonal array. Each unit is a tileable circuit board, with a microprocessor, color-based object sensor and servo-actuated biomimetic cilia actuator. Localized synchronous communication between cilia allowed for the emergence of coordinated action, moving different colored objects together. The coordinated beating action was capable of moving objects up to 4 cm/s at its highest beating frequency; objects were moved at a speed proportional to the beat frequency. Using the local communication, we were able to detect the shape of objects and to rotate an object using edge detection; however, lateral manipulation using shape information was unsuccessful.
biological-engineering  microbiology  nanotechnology  rather-interesting  to-write-about  sorting  emergent-design 
17 days ago
Three representations of the Ising model | Scientific Reports
Statistical models that analyse (pairwise) relations between variables encompass assumptions about the underlying mechanism that generated the associations in the observed data. In the present paper we demonstrate that three Ising model representations exist that, although each proposes a distinct theoretical explanation for the observed associations, are mathematically equivalent. This equivalence allows the researcher to interpret the results of one model in three different ways. We illustrate the ramifications of this by discussing concepts that are conceived as problematic in their traditional explanation, yet when interpreted in the context of another explanation make immediate sense.
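For reference, the distribution that all three representations share is the standard Ising form (notation assumed here, not quoted from the paper):

```latex
p(\mathbf{x}) = \frac{1}{Z}\exp\Big(\sum_i \tau_i x_i + \sum_{\langle i,j\rangle} \sigma_{ij}\, x_i x_j\Big),
\qquad x_i \in \{-1, +1\},
```

where the τ_i act as external fields, the σ_ij as pairwise couplings, and Z normalizes over all 2^n configurations. The three representations reinterpret the same τ_i and σ_ij causally in different ways.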
representation  modeling  philosophy-of-science  rather-interesting  to-write-about  demonstrations-of-the-mangle 
17 days ago
A Review of Perl 6 – Evan Miller
In case you missed it, and I think everyone did, Perl 6 was released to the world a year and a half ago — on Christmas of 2015, incidentally — after an effort nominally spanning almost sixteen years. (I think, but am not certain, my uncle’s dissertation was also completed at some point.) The market for new programming languages, as I write this in 2017, is competitive but not impenetrable — but like a freshly minted Ph.D., no one seems to be sure of Perl 6’s market prospects, and like a just-turned-in dissertation, no one seems to know whether the fruit of many years’ labor is actually worth a damn.
perl  programming-language  engineering-criticism  review  rather-interesting 
17 days ago
Robot Skill Learning: From Reinforcement Learning to Evolution Strategies : Paladyn, Journal of Behavioral Robotics
Policy improvement methods seek to optimize the parameters of a policy with respect to a utility function. Owing to current trends involving searching in parameter space (rather than action space) and using reward-weighted averaging (rather than gradient estimation), reinforcement learning algorithms for policy improvement, e.g. PoWER and PI2, are now able to learn sophisticated high-dimensional robot skills. A side-effect of these trends has been that, over the last 15 years, reinforcement learning (RL) algorithms have become more and more similar to evolution strategies such as (μW, λ)-ES and CMA-ES. Evolution strategies treat policy improvement as a black-box optimization problem, and thus do not leverage the problem structure, whereas RL algorithms do. In this paper, we demonstrate how two straightforward simplifications to the state-of-the-art RL algorithm PI2 suffice to convert it into the black-box optimization algorithm (μW, λ)-ES. Furthermore, we show that (μW, λ)-ES empirically outperforms PI2 on the tasks in [36]. It is striking that PI2 and (μW, λ)-ES share a common core, and that the simpler algorithm converges faster and leads to similar or lower final costs. We argue that this difference is due to a third trend in robot skill learning: the predominant use of dynamic movement primitives (DMPs). We show how DMPs dramatically simplify the learning problem, and discuss the implications of this for past and future work on policy improvement for robot skill learning.
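The shared core of the two families, reward-weighted averaging in parameter space, fits in a few lines (a sketch with assumed constants; `utility` is a hypothetical black-box evaluation of a policy's parameter vector):

```python
import numpy as np

def es_step(theta, utility, sigma=0.1, n_samples=32, rng=np.random.default_rng(0)):
    """One parameter-space update: perturb, evaluate, reward-weighted average."""
    samples = theta + sigma * rng.standard_normal((n_samples, theta.size))
    u = np.array([utility(s) for s in samples])
    w = np.exp((u - u.max()) / (u.std() + 1e-8))   # higher utility, higher weight
    return (w / w.sum()) @ samples
```

PI2 and (μW, λ)-ES differ mainly in how the weights are computed and which samples are kept, which is why the paper's two simplifications suffice to turn one into the other.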
robotics  machine-learning  metaheuristics  algorithms  learning-by-doing  engineering-design  planning  to-write-about 
17 days ago
The meaning of model equivalence: Network models, latent variables, and the theoretical space in between | Psych Networks
Recently, an important set of equivalent representations of the Ising model was published by Joost Kruis and Gunter Maris in Scientific Reports. The paper constructs elegant representations of the Ising model probability distribution in terms of a network model (which consists of direct relations between observables), a latent variable model (which consists of relations between a latent variable and observables, in which the latent variable acts as a common cause), and a common effect model (which also consists of relations between a latent variable and observables, but here the latent variable acts as a common effect). The latter equivalence is a novel contribution to the literature and a quite surprising finding, because it means that a formative model can be statistically equivalent to a reflective model, which one may not immediately expect (do note that this equivalence need not maintain dimensionality, so a model with a single common effect may translate into a higher-dimensional latent variable model).

However, the equivalence between ordinary (reflective) latent variable models and network models has been with us for a long time, and I therefore was rather surprised at some people’s reaction to the paper and the blog post that accompanies it. Namely, it appears that some think (a) that the fact that network structures can mimic reflective latent variables and vice versa is a recent discovery, and (b) that this somehow spells trouble for the network approach itself (because, well, what’s the difference?). The first of these claims is sufficiently wrong to go through the trouble of refuting it, if only to set straight the historical record; the second is sufficiently interesting to investigate a little more deeply. Hence the following notes.
dynamical-systems  models  models-and-modes  representation  philosophy-of-science  (in-practice)  to-write-about  via:several 
17 days ago
"Brahmin Left vs Merchant Right" (PDF)
Abstract. Using post-electoral surveys from France, Britain and the US, this paper documents a striking long-run evolution in the structure of political cleavages. In the 1950s-1960s, the vote for left-wing (socialist-labour-democratic) parties was associated with lower education and lower income voters. It has gradually become associated with higher education voters, giving rise to a “multiple-elite” party system in the 2000s-2010s: high-education elites now vote for the “left”, while high-income/high-wealth elites still vote for the “right” (though less and less so). I argue that this can contribute to explain rising inequality and the lack of democratic response to it, as well as the rise of “populism”. I also discuss the origins of this evolution (rise of globalization/migration cleavage, and/or educational expansion per se) as well as future prospects: “multiple-elite” stabilization; complete realignment of the party system along a “globalists” (high-education, high-income) vs “nativists” (low-education, low-income) cleavage; return to class-based redistributive conflict (either from an internationalist or nativist perspective). Two main lessons emerge. First, with multi-dimensional inequality, multiple political equilibria and bifurcations can occur. Next, without a strong egalitarian-internationalist platform, it is difficult to unite low-education, low-income voters from all origins within the same party.
via:several  inequality  sociology  economics  political-economy  cultural-dynamics  cultural-norms 
17 days ago
Be Wary of Silicon Valley’s Guilty Conscience: on The Center for Humane Technology | LibrarianShipwreck
Part of what makes the worldview particularly jarring is that the distaste for common consumers is juxtaposed with a hopeful adoration for those who work in tech. If consumers want this and that, then the “talented employees” who “are the greatest asset of technology companies…genuinely want to build products that improve society, and no one wants participate [sic] in an extraction-based system that ruins society.”[7] It is worth noting that CHT is simultaneously ignoring the level of agency that most people have (reducing them to consumers), while overestimating the power that “talented employees” have. Granted, this folds back into CHT’s foundational myth. If tech companies make “amazing products” and if those products have “benefited the world enormously,” it’s obvious that “engineers and technologists” are noble people driven by altruistic purposes. But, if those companies make “shiny garbage” that has “warped the world in many bad ways” it breaks the halo worn by “engineers and technologists.” And it diminishes the view of the technologist as savior to point out that, throughout history, there is no shortage of “engineers and technologists” who thought their efforts would “improve society” when in fact just the opposite occurred. This is not to cast aspersions at all “engineers and technologists” – but do people want to work at Google because they’re idealistic or because it pays well (true, these are not mutually exclusive)? If those lining up to work at Google and Facebook truly buy the publicity spin put forward by these companies, it is a compelling reason to have less faith in these people – not more.
sociology  politics  philosophy-of-engineering  political-economy  to-write-about 
17 days ago
Why We Can’t Have Humane Technology | L.M. Sacasas
The modern liberal order abets technology’s formative power to the degree that it disavows any strong claims about ethics and human flourishing. It is in the space of that disavowal that technology as an implicit anthropology and an implicit politics takes root and expands, framing and conditioning any subsequent efforts to subject it to ethical critique. Our understanding of the human is already conditioned by our technological milieu. Fundamental to this tacit anthropology, or account of the human, is the infinite malleability of human nature. Malleable humanity is a precondition to the unfettered expansion of technology. (This is why transhumanism is the proper eschatology of our technological order. Ultimately, humanity must adapt and conform, even if it means the loss of humanity as we have known it. As explicit ideology, this may still seem like a fringe position; as implicit practice, however, it is widely adopted.)
technology  philosophy-of-engineering  engineering-criticism  social-dynamics  capitalism  to-write-about  the-collective-mangle 
17 days ago
[1803.09574] Long short-term memory and Learning-to-learn in networks of spiking neurons
Networks of spiking neurons (SNNs) are frequently studied as models for networks of neurons in the brain, but also as a paradigm for novel energy-efficient computing hardware. In principle they are especially suitable for computations in the temporal domain, such as speech processing, because their computations are carried out via events in time and space. But so far they have been lacking the capability to preserve information for longer time spans during a computation, until it is updated or needed - like a register of a digital computer. This function is provided to artificial neural networks through Long Short-Term Memory (LSTM) units. We show here that SNNs attain similar capabilities if one includes adapting neurons in the network. Adaptation denotes an increase of the firing threshold of a neuron after preceding firing. A substantial fraction of neurons in the neocortex of rodents and humans has been found to be adapting. It turns out that if adapting neurons are integrated in a suitable manner into the architecture of SNNs, the performance of these enhanced SNNs, which we call LSNNs, for computation in the temporal domain approaches that of artificial neural networks with LSTM units. In addition, the computing and learning capabilities of LSNNs can be substantially enhanced through learning-to-learn (L2L) methods from machine learning, that have so far been applied primarily to LSTM networks and apparently never to SNNs.
This preliminary report on arXiv will be replaced by a more detailed version in about a month.
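The adaptation mechanism described here, a firing threshold that jumps after each spike and decays back, can be sketched in a few lines (constants are placeholders, not the paper's values):

```python
# Adaptive firing threshold: each spike raises the threshold by beta; between
# spikes it decays back toward the baseline b0 with factor tau per step.
def adaptive_threshold(spike_train, b0=1.0, beta=0.5, tau=0.9):
    b, out = b0, []
    for spiked in spike_train:
        b = b0 + tau * (b - b0) + (beta if spiked else 0.0)
        out.append(b)
    return out

print(adaptive_threshold([1, 0, 0, 1, 0]))  # threshold jumps up, then relaxes
```

It is this slowly decaying per-neuron state that plays the role of an LSTM unit's memory in the LSNN.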
neural-networks  biological-engineering  dynamical-systems  rather-interesting  architecture  machine-learning  simulation  learning-by-watching  to-write-about 
17 days ago
SocArXiv Papers | Exposure to Opposing Views can Increase Political Polarization: Evidence from a Large-Scale Field Experiment on Social Media
There is mounting concern that social media sites contribute to political polarization by creating "echo chambers" that insulate people from opposing views about current events. We surveyed a large sample of Democrats and Republicans who visit Twitter at least three times each week about a range of social policy issues. One week later, we randomly assigned respondents to a treatment condition in which they were offered financial incentives to follow a Twitter bot for one month that exposed them to messages produced by elected officials, organizations, and other opinion leaders with opposing political ideologies. Respondents were re-surveyed at the end of the month to measure the effect of this treatment, and at regular intervals throughout the study period to monitor treatment compliance. We find that Republicans who followed a liberal Twitter bot became substantially more conservative post-treatment, and Democrats who followed a conservative Twitter bot became slightly more liberal post-treatment. These findings have important implications for the interdisciplinary literature on political polarization as well as the emerging field of computational social science.
sociology  the-madness-of-crowds  polarization  social-norms  social-media  opinion  le-sigh 
17 days ago
Digitising books as objects: The invisible made visible - Collection Care blog
Lights for digitisation are carefully positioned to avoid shadows and they help to reduce surface irregularities and anomalies. This is all to the benefit of the written text and/or of the decorations, but with much loss for the lovers of the book as an object!

Here I want to describe some very practical ways to achieve different results and show some ‘behind the scenes’ of items I have been working on, and how these very interesting results can be achieved with simple, straightforward techniques that do not require any high-tech equipment.
philosophy-of-engineering  books  digitization  digital-humanities  physicality  contingency  nice 
19 days ago
[1802.10031] The Mirage of Action-Dependent Baselines in Reinforcement Learning
Policy gradient methods are a widely used class of model-free reinforcement learning algorithms where a state-dependent baseline is used to reduce gradient estimator variance. Several recent papers extend the baseline to depend on both the state and action and suggest that this significantly reduces variance and improves sample efficiency without introducing bias into the gradient estimates. To better understand this development, we decompose the variance of the policy gradient estimator and numerically show that learned state-action-dependent baselines do not in fact reduce variance over a state-dependent baseline in commonly tested benchmark domains. We confirm this unexpected result by reviewing the open-source code accompanying these prior papers, and show that subtle implementation decisions cause deviations from the methods presented in the papers and explain the source of the previously observed empirical gains. Furthermore, the variance decomposition highlights areas for improvement, which we demonstrate by illustrating a simple change to the typical value function parameterization that can significantly improve performance.
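The identity that makes state-dependent baselines safe (standard score-function algebra, not quoted from the paper):

```latex
\mathbb{E}_{a\sim\pi_\theta(\cdot\mid s)}\big[\nabla_\theta \log \pi_\theta(a\mid s)\, b(s)\big]
= b(s)\,\nabla_\theta \sum_a \pi_\theta(a\mid s)
= b(s)\,\nabla_\theta 1 = 0 .
```

An action-dependent b(s, a) does not cancel by this identity alone, which is why the papers under review need additional structure to keep their estimators unbiased, and why their variance claims merit the scrutiny given here.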
reinforcement-learning  machine-learning  performance-measure  to-understand  algorithms 
19 days ago
[1802.08995] Using Information Invariants to Compare Swarm Algorithms and General Multi-Robot Algorithms: A Technical Report
Robotic swarms are decentralized multi-robot systems whose members use local information from proximal neighbors to execute simple reactive control laws that result in emergent collective behaviors. In contrast, members of a general multi-robot system may have access to global information, all-to-all communication or sophisticated deliberative collaboration. Some algorithms in the literature are applicable to robotic swarms. Others require the extra complexity of general multi-robot systems. Given an application domain, a system designer or supervisory operator must choose an appropriate system or algorithm respectively that will enable them to achieve their goals while satisfying mission constraints (e.g. bandwidth, energy, time limits). In this paper, we compare representative swarm and general multi-robot algorithms in two application domains - navigation and dynamic area coverage - with respect to several metrics (e.g. completion time, distance travelled). Our objective is to characterize each class of algorithms to inform offline system design decisions by engineers or online algorithm selection decisions by supervisory operators. Our contributions are (a) an empirical performance comparison of representative swarm and general multi-robot algorithms in two application domains, (b) a comparative analysis of the algorithms based on the theory of information invariants, which provides a theoretical characterization supported by our empirical results.
swarms  robotics  planning  collective-intelligence  information-theory  feature-construction  rather-interesting  to-write-about 
19 days ago
Voyages in sentence space
Imagine a sentence. “I went looking for adventure.”

Imagine another one. “I never returned.”

Now imagine a sentence gradient between them—not a story, but a smooth interpolation of meaning. This is a weird thing to ask for! I’d never even bothered to imagine an interpolation between sentences before encountering the idea in a recent academic paper. But as soon as I did, I found it captivating, both for the thing itself—a sentence… gradient?—and for the larger artifact it suggested: a dense cloud of sentences, all related; a space you might navigate and explore.
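The mechanics of such a gradient are simple once sentences live in a vector space. A minimal sketch, where `encode` and `decode` stand in for a hypothetical sentence autoencoder (they are not from the original post):

```python
import numpy as np

def sentence_gradient(a, b, steps, encode, decode):
    """Decode evenly spaced points on the line between two sentence embeddings."""
    va, vb = encode(a), encode(b)
    return [decode((1 - t) * va + t * vb) for t in np.linspace(0.0, 1.0, steps)]
```

All the interesting behavior hides in `decode`: nearby vectors must decode to similar sentences for the interpolation to read as a smooth gradient of meaning.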
natural-language-processing  rather-interesting  via:several  algorithms  feature-extraction  interpolation  to-write-about  consider:embedding-space 
19 days ago
[1709.04109] Empower Sequence Labeling with Task-Aware Neural Language Model
Linguistic sequence labeling is a general modeling approach that encompasses a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features. However, in many cases, it is hard to obtain sufficient annotations to train these models. In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task. Besides word-level knowledge contained in pre-trained word embeddings, character-aware neural language models are incorporated to extract character-level knowledge. Transfer learning techniques are further adopted to mediate different components and guide the language model towards the key knowledge. Compared to previous methods, this task-specific knowledge allows us to adopt a more concise model and conduct more efficient training. Different from most transfer learning methods, the proposed framework does not rely on any additional supervision. It extracts knowledge from self-contained order information of training sequences. Extensive experiments on benchmark datasets demonstrate the effectiveness of leveraging character-level knowledge and the efficiency of co-training. For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching F1 score of 91.71±0.10 without using any extra annotation.
natural-language-processing  deep-learning  neural-networks  nudge-targets  consider:feature-discovery  consider:representation  to-write-about 
19 days ago
[1712.08566] Extended Product and Integrated Interleaved Codes
A new class of codes, Extended Product (EPC) Codes, consisting of a product code with a number of extra parities added, is presented and applications for erasure decoding are discussed. An upper bound on the minimum distance of EPC codes is given, as well as constructions meeting the bound for some relevant cases. A special case of EPC codes, Extended Integrated Interleaved (EII) codes, which naturally unify Integrated Interleaved (II) codes and product codes, is defined and studied in detail. It is shown that EII codes often improve the minimum distance of II codes with the same rate, and they enhance the decoding algorithm by allowing decoding on columns as well as on rows. It is also shown that EII codes allow for encoding II codes with a uniform distribution of the parity symbols.
information-theory  codes  integer-programming  rather-interesting  robustness  performance-measure  algorithms  representation  to-write-about  nudge-targets  consider:performance-measures 
19 days ago
Estimating barriers to gene flow from distorted isolation by distance patterns | bioRxiv
In continuous populations with local migration, nearby pairs of individuals have on average more similar genotypes than geographically well separated pairs. A barrier to gene flow distorts this classical pattern of isolation by distance. Genetic similarity is decreased for sample pairs on different sides of the barrier and increased for pairs on the same side near the barrier. Here, we introduce an inference scheme that utilizes this signal to detect and estimate the strength of a linear barrier to gene flow in two dimensions. We use a diffusion approximation to model the effects of a barrier on the geographical spread of ancestry backwards in time. This approach allows us to calculate the chance of recent coalescence and probability of identity by descent. We introduce an inference scheme that fits these theoretical results to the geographical covariance structure of biallelic genetic markers. It can estimate the strength of the barrier as well as several demographic parameters. We investigate the power of our inference scheme to detect barriers by applying it to a wide range of simulated data. We also showcase an example application to an Antirrhinum majus (snapdragon) flower color hybrid zone, where we do not detect any signal of a strong genome-wide barrier to gene flow.
population-biology  theoretical-biology  rather-interesting  simulation  to-write-about  consider:feature-discovery 
19 days ago
[1801.02356] Efficiently Disassemble-and-Pack for Mechanism
In this paper, we present a disassemble-and-pack approach that seeks a box containing all of a mechanism's parts with high space utilization. Its key feature is that a mechanism contains not only geometric shapes but also internal motion structures, which can be exploited to adjust the geometric configuration of the mechanical parts. Our system consists of two steps: disassemble the mechanical object into a set of groups, and pack them within a box efficiently. The first step creates a hierarchy of possible groupings of parts, generated by disconnecting selected joints and adjusting the motion structures of parts within groups; the aim of this step is to minimize the total volume of each group. The second step explores the hierarchy via breadth-first search to obtain a group set; every group in the set is inserted into the specified box, from maximum volume to minimum, according to our packing strategy, until an approximate result with satisfactory efficiency is accepted.
operations-research  optimization  rather-interesting  consider:generalizations  consider:for-code  nudge-targets  to-write-about 
19 days ago
[1801.00663] A Sharp Estimate for Probability Distributions
We consider absolutely continuous probability distributions f(x)dx on ℝ≥0. A result of Feldheim and Feldheim shows, among other things, that if the distribution is not compactly supported, then there exist z > 0 such that most events in {X+Y ≥ 2z} are comprised of a 'small' term satisfying min(X,Y) ≤ z and a 'large' term satisfying max(X,Y) ≥ z (as opposed to two 'large' terms that are both larger than z):
limsup_{z→∞} ℙ(min(X,Y) ≤ z | X+Y ≥ 2z) = 1.
The result fails if the distribution is compactly supported. We prove
sup_{z>0} ℙ(min(X,Y) ≤ z | X+Y ≥ 2z) ≥ 1 / (24 + 8·log₂(med(X)·‖f‖_{L∞})),
where med(X) denotes the median. Interestingly, the logarithm is necessary and the result is sharp up to constants; we also discuss some open problems.
probability-theory  statistics  rather-interesting  nudge-targets  consider:looking-to-see 
19 days ago
[1406.1886v1] The Z1: Architecture and Algorithms of Konrad Zuse's First Computer
This paper provides the first comprehensive description of the Z1, the mechanical computer built by the German inventor Konrad Zuse in Berlin from 1936 to 1938. The paper describes the main structural elements of the machine, the high-level architecture, and the dataflow between components. The computer could perform the four basic arithmetic operations using floating-point numbers. Instructions were read from punched tape. A program consisted of a sequence of arithmetical operations, intermixed with memory store and load instructions, interrupted possibly by input and output operations. Numbers were stored in a mechanical memory. The machine did not include conditional branching in the instruction set. While the architecture of the Z1 is similar to the relay computer Zuse finished in 1941 (the Z3) there are some significant differences. The Z1 implements operations as sequences of microinstructions, as in the Z3, but does not use rotary switches as micro-steppers. The Z1 uses a digital incrementer and a set of conditions which are translated into microinstructions for the exponent and mantissa units, as well as for the memory blocks. Microinstructions select one out of 12 layers in a machine with a 3D mechanical structure of binary mechanical elements. The exception circuits for mantissa zero, necessary for normalized floating-point, were lacking; they were first implemented in the Z3. The information for this article was extracted from careful study of the blueprints drawn by Zuse for the reconstruction of the Z1 for the German Technology Museum in Berlin, from some letters, and from sketches in notebooks. Although the machine has been in exhibition since 1989 (non-operational), no detailed high-level description of the machine's architecture had been available. This paper fills that gap.
computer  history  reverse-engineering  engineering-design  nanohistory  to-write-about 
20 days ago
The Infopocalypse – Eudaimonia and Co
But radicalization is radicalization. And now it has happened in the West, ironically, just as it did in the East. And that is the great question we are not facing. Some significant portion of society has now been thoroughly radicalized. They believe in outlandish and foolish things. They have become ignorant and blind and petty and easily provoked. That isn’t an insult — they can undo all that. It is just an observation of empirical reality.
essay  yup 
24 days ago
No Moods, Ads or Cutesy Fucking Icons » Arc Weld
There’s this gene, Arc, active in our neurons. It’s essential for cognition and longterm memory in mammals; knockout mice who lack it can’t remember from one day to the next where they left the cheese. It looks and acts an awful lot like something called a gag— a “group-specific antigen”, something which codes for the core structural proteins of retroviruses. Like a gag, Arc codes for a protein that assembles into capsids (basically, shuttles containing messenger RNA). These accumulate in the dendrites, cross the synaptic junction in little vesicles: a payload from one neuron to another.
via:mymarkup  theoretical-biology  experiment  to-write-about  neuroscience  rather-interesting 
24 days ago
Bird Migration Patterns - Western Hemisphere Dataset | Science On a Sphere
This dataset shows the migration of 118 species of terrestrial bird populations in the Western Hemisphere. Each dot represents the estimated location of the center of each species’ population for each day of the year. These estimations come from millions of observations from the eBird citizen-science database. eBird is a real-time, online checklist program, launched in 2002 by the Cornell Lab of Ornithology and National Audubon Society, that allows birdwatchers to enter their observations.
via:twitter  ethology  migration  visualization  biology  rather-interesting 
26 days ago
Home | Open Problem Garden
Welcome to the Open Problem Garden, a collection of unsolved problems in mathematics. Here you may:

Read descriptions of open problems.
Post comments on them.
Create and edit open problems pages (please contact us and we will set you up an account. Unfortunately, the automatic process is too prone to spammers at this moment.)
open-problems  mathematical-recreations  mathematics  computer-science  to-write-about  nudge-targets 
26 days ago
ekoontz/dag-unify: Unification of Directed Acyclic Graphs in Clojure
A Clojure library for combining directed acyclic graphs (DAGs) via unification. Unification is similar to merge (http://clojure.github.io/clojure/clojure.core-api.html#clojure.core/merge), except that if arguments have the same keys, the arguments' values for those keys will be recursively combined via unification to yield the value for the key in the combined map.
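A minimal Python rendering of that merge-versus-unify distinction (illustrative only; the Clojure library additionally handles variables, reentrant structures, and failure values):

```python
def unify(a, b):
    """Like dict merge, but shared keys are combined recursively, and
    conflicting atomic values fail instead of being silently overwritten."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for k, v in b.items():
            out[k] = unify(a[k], v) if k in a else v
        return out
    if a == b:
        return a
    raise ValueError(f"cannot unify {a!r} with {b!r}")

print(unify({"agr": {"num": "sg"}}, {"agr": {"per": 3}}))
# {'agr': {'num': 'sg', 'per': 3}} -- merge would have replaced 'agr' wholesale
```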
Clojure  library  graph-theory  algorithms 
26 days ago
Engelberg/ubergraph: An all-purpose Clojure graph data structure that implements Loom protocols and more.
Ubergraph is a versatile, general-purpose graph data structure for Clojure. It is designed to complement and extend Loom, a popular Clojure collection of graph protocols and algorithms.
graph-theory  algorithms  Clojure  library  to-understand 
26 days ago
[1803.00823] The "No Justice in the Universe" phenomenon: why honesty of effort may not be rewarded in tournaments
In 2000 Allen Schwenk, using a well-known mathematical model of matchplay tournaments in which the probability of one player beating another in a single match is fixed for each pair of players, showed that the classical single-elimination, seeded format can be "unfair" in the sense that situations can arise where an indisputably better (and thus higher seeded) player may have a smaller probability of winning the tournament than a worse one. This in turn implies that, if the players are able to influence their seeding in some preliminary competition, situations can arise where it is in a player's interest to behave "dishonestly", by deliberately trying to lose a match. This motivated us to ask whether it is possible for a tournament to be both honest, meaning that it is impossible for a situation to arise where a rational player throws a match, and "symmetric" - meaning basically that the rules treat everyone the same - yet unfair, in the sense that an objectively better player has a smaller probability of winning than a worse one. After rigorously defining our terms, our main result is that such tournaments exist and we construct explicit examples for any number n >= 3 of players. For n=3, we show (Theorem 3.6) that the collection of win-probability vectors for such tournaments form a 5-vertex convex polygon in R^3, minus some boundary points. We conjecture a similar result for any n >= 4 and prove some partial results towards it.
performance-measure  probability-theory  rather-interesting  what-gets-measured-gets-fudged  to-write-about 
26 days ago
The Artificial Intelligentsia | Aaron Timms
After a year in the company I discovered, to my horror, that I had begun to think about politics and policy almost exclusively in this bizarre, engineer-created jargon. “A spike in the sector rollup suggests heightened political risk over the coming two weeks, though judging by the relatively muted activity in the microeconomic subsector signal, it’s unlikely economic policy will be a driver”—that kind of thing. Just as the laborers working on Penn Station succumbed to a physical distortion at the end of each day—the bends—I began to feel the effects of a mental distortion: the delusion that says, yes, this is an acceptable way to analyze and think about politics. Inexorably, I was turning into a sandhog. Contact with technology was stripping away my creativity, my capacity for independent reason. I was becoming less, not more, capable. I was becoming less human. With time I understood that the point of the company was not to train and optimize the algorithm on human modes of thinking. It was not to humanize the machine, as the classical vision of strong AI would have it. Rather, it was to mechanize the human—to make the human learn to be more like the machine, a rigid taskmaster stripped of initiative and organic thought.
28 days ago
The Left is now too weak for democracy to survive | Aeon Essays
Crouch lays some blame for this at the feet of the usual suspects. As markets globalise, businesses grow more powerful (they can relocate their activities, or threaten to relocate) and governments are weakened. Yet the real lessons of his book are about more particular forms of disconnection.

Neo-liberalism, which was supposed to replace grubby politics with efficient, market-based competition, has led not to the triumph of the free market but to the birth of new and horrid chimeras. The traditional firm, based on stable relations between employer, workers and customers, has spun itself out into a complicated and ever-shifting network of supply relationships and contractual forms. The owners remain the same but their relationship to their employees and customers is very different. For one thing, they cannot easily be held to account. As the American labour lawyer Thomas Geoghegan and others have shown, US firms have systematically divested themselves of inconvenient pension obligations to their employees, by farming them out to subsidiaries and spin-offs. Walmart has used hands-off subcontracting relationships to take advantage of unsafe working conditions in the developing world, while actively blocking efforts to improve industry safety standards until 112 garment workers died in a Bangladesh factory fire in November last year. Amazon uses subcontractors to employ warehouse employees in what can be unsafe and miserable working conditions, while minimising damage to its own brand.
28 days ago
Carrie Cuinn | Cake History Month 4: Palace Cakes from the city of Ur (Mesopotamia)
1 sila of butter, 1/3 sila of white cheese, 3 sila of first-quality dates, and 1/3 sila of raisins. – Recipe for “palace cakes” from records recovered during excavation of Ur
If I told you that a sila was a unit of measurement that equaled about 3 cups, would you know how to make a cake from this recipe? Chances are, if you mixed soft white cheese, butter, dates, and raisins, and then put that mixture into an oven to bake, the result would be a crispy goo of cheesy dried fruits, nothing resembling a cake at all.
history-is-a-feature-not-a-bug  recipes  archives  philosophy-in-practice  context  to-write-about  the-whole-objective-history-thing-in-a-nutshell 
4 weeks ago
Nnedi Okorafor’s Binti series – The Pinocchio Theory
The Binti series is Afrofuturist science fiction. The eponymous protagonist and narrator is a young woman from the Himba people of Namibia. This group is known today for the way that they strongly maintain their traditional practices and ways of life, while at the same time engaging fully with social and technological modernity, and the other peoples with whom they interact. This is still the case in the future world of the novellas: the Himba rarely leave their desert territory, and they keep to something like their traditional ways of life; but at the same time they are known for their cutting edge technology, which they sell to other peoples on Earth, and even to beings from other planets.
have-read  reviews  good-reviews  philosophy  literary-criticism  to-share 
4 weeks ago
A hybrid placement strategy for the three-dimensional strip packing problem - ScienceDirect
This paper presents a hybrid placement strategy for the three-dimensional strip packing problem, which involves packing a set of cuboids (‘boxes’) into a three-dimensional bin (parallelepiped) of fixed width and height but unconstrained length (the ‘container’). The goal is to pack all of the boxes into the container, minimising its resulting length. This problem has potential industry application in stock cutting (wood, polystyrene, etc. – minimising wastage) and also cargo loading, as well as other applications in areas such as multi-dimensional resource scheduling. In addition to the proposed strategy, a number of test results on available literature benchmark problems are presented and analysed. Empirical testing shows that the algorithm consistently outperforms other methods from the literature in terms of speed and solution quality, producing 28 best-known results from 35 test cases.
operations-research  hyperheuristics  metaheuristics  optimization  constraint-satisfaction  nudge-targets  consider:representation  consider:looking-to-see  to-write-about 
4 weeks ago
[1712.08373] Notes on complexity of packing coloring
A packing k-coloring, for some integer k, of a graph G = (V,E) is a mapping φ: V → {1,…,k} such that any two vertices u, v of the same color φ(u) = φ(v) are at distance at least φ(u)+1. This concept is motivated by frequency assignment problems. The packing chromatic number of G is the smallest k such that there exists a packing k-coloring of G.
Fiala and Golovach showed that determining the packing chromatic number for chordal graphs is NP-complete for diameter exactly 5. While the problem is easy to solve for diameter 2, we show NP-completeness for any diameter at least 3. Our reduction also shows that the packing chromatic number is hard to approximate within n^(1/2−ε) for any ε > 0.
In addition, we design an FPT algorithm for interval graphs of bounded diameter. This leads us to explore the problem of finding a partial coloring that maximizes the number of colored vertices.
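The definition turns directly into a checker (a sketch using networkx; not from the paper):

```python
# Verify a packing coloring: any two vertices sharing color c must be at
# graph distance at least c + 1.
import networkx as nx
from itertools import combinations

def is_packing_coloring(G, color):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return all(color[u] != color[v] or dist[u][v] >= color[u] + 1
               for u, v in combinations(G.nodes, 2))

P4 = nx.path_graph(4)
print(is_packing_coloring(P4, {0: 1, 1: 2, 2: 1, 3: 3}))  # True: 3 colors suffice
```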
graph-theory  algorithms  combinatorics  proof  approximation  nudge-targets  consider:looking-to-see  consider:feature-discovery 
4 weeks ago
[1801.06150] Jamming of Deformable Polygons
There are two main classes of physics-based models for two-dimensional cellular materials: packings of repulsive disks, and the vertex model. These models have several disadvantages. For example, disk interactions are typically a function of particle overlap, yet the model assumes that the disks remain circular during overlap. The shapes of the cells can vary in the vertex model, but the packing fraction is fixed at ϕ = 1. Here, we describe the deformable particle model (DPM), where each particle is a polygon composed of a large number of vertices. The total energy includes three terms: two quadratic terms that penalize deviations from the preferred particle area a_0 and perimeter p_0, and a repulsive interaction between DPM polygons that penalizes overlaps. We performed simulations to study the onset of jamming in packings of DPM polygons as a function of asphericity, A = p_0^2 / (4π a_0). We show that the packing fraction at jamming onset ϕ_J(A) grows with increasing A, reaching confluence at A* ≈ 1.16. A* corresponds to the value at which DPM polygons completely fill the cells obtained from a surface-Voronoi tessellation. Further, we show that DPM polygons develop invaginations for A > A*, with excess perimeter that grows linearly with A − A*. We confirm that packings of DPM polygons are solid-like over the full range of A by showing that the shear modulus is nonzero.
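For concreteness, the asphericity of any polygon can be computed from its shoelace area and perimeter; a small sketch of mine, not the authors' code:

```python
import math

def asphericity(vertices):
    """A = p^2 / (4*pi*a): 1 for a circle, larger for less circular shapes.
    Vertices are (x, y) pairs in order around the polygon."""
    n = len(vertices)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1              # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return perimeter ** 2 / (4.0 * math.pi * area)

print(asphericity([(0, 0), (1, 0), (1, 1), (0, 1)]))  # square: 4/pi ≈ 1.273
```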
packing  condensed-matter  simulation  rather-interesting  algorithms  representation  to-write-about  multiobjective-optimization  consider:simulation 
4 weeks ago
[1712.02473] Relevance of Packing to Colloidal Self-Assembly
Since the 1920s, packing arguments have been used to rationalize crystal structures in systems ranging from atomic mixtures to colloidal crystals. Packing arguments have recently been applied to complex nanoparticle structures, where they often, but not always, work. We examine when, if ever, packing is a causal mechanism in hard particle approximations of colloidal crystals. We investigate three crystal structures comprised of their ideal packing shapes. We show that, contrary to expectations, the ordering mechanism cannot be packing, even when the thermodynamically self-assembled structure is the same as that of the densest packing. We also show that the best particle shapes for hard particle colloidal crystals in the infinite pressure limit are imperfect versions of the ideal packing shape.
materials-science  nanotechnology  crystals  packing  self-organization  physics!  rather-interesting  consider:parametrization 
4 weeks ago
[1709.07097] Persistence Flamelets: multiscale Persistent Homology for kernel density exploration
In recent years there has been noticeable interest in the study of the "shape of data". Among the many ways a "shape" could be defined, topology is the most general one, as it describes an object in terms of its connectivity structure: connected components (topological features of dimension 0), cycles (features of dimension 1) and so on. There is a growing number of techniques, generally denoted as Topological Data Analysis, aimed at estimating topological invariants of a fixed object; when we allow this object to change, however, little has been done to investigate the evolution of its topology. In this work we define the Persistence Flamelets, a multiscale version of one of the most popular tools in TDA, the Persistence Landscape. We examine its theoretical properties and show how it can be used to gain insights on the KDE bandwidth parameter.
data-analysis  feature-extraction  representation  topology  rather-interesting  algorithms  visualization  to-understand  exploratory-data-analysis 
4 weeks ago
[1709.08359] On the expressive power of query languages for matrices
We investigate the expressive power of MATLANG, a formal language for matrix manipulation based on common matrix operations and linear algebra. The language can be extended with the operation inv of inverting a matrix. In MATLANG + inv we can compute the transitive closure of directed graphs, whereas we show that this is not possible without inversion. Indeed we show that the basic language can be simulated in the relational algebra with arithmetic operations, grouping, and summation. We also consider an operation eigen for diagonalizing a matrix, which is defined so that different eigenvectors returned for the same eigenvalue are orthogonal. We show that inv can be expressed in MATLANG + eigen. We put forward the open question whether there are boolean queries about matrices, or generic queries about graphs, expressible in MATLANG + eigen but not in MATLANG + inv. The evaluation problem for MATLANG + eigen is shown to be complete for the complexity class ∃R.
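The transitive-closure claim has a compact linear-algebra reading that can be checked numerically: for a nonnegative adjacency matrix A and small enough t > 0, (I − tA)^{-1} = I + tA + t²A² + … has a positive (i, j) entry exactly when j is reachable from i. A numpy sketch (mine, and floating-point, so only suited to small graphs):

```python
import numpy as np

def reachability(A):
    """Reachability matrix of a digraph via a single matrix inversion."""
    n = A.shape[0]
    t = 1.0 / (n + 1)                  # ensures the series sum converges
    M = np.linalg.inv(np.eye(n) - t * A)
    return M > 1e-12                   # positive entries mark reachable pairs

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(reachability(A))  # row 0 reaches 0, 1, 2; row 2 reaches only itself
```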
matrices  programming-language  representation  rather-interesting  nudge  consider:representation  to-do 
4 weeks ago
[1605.06919] The correlation theory of the chemical bond
The quantum mechanical description of the chemical bond is generally given in terms of delocalized bonding orbitals, or, alternatively, in terms of correlations of occupations of localised orbitals. However, in the latter case, multiorbital correlations were treated only in terms of two-orbital correlations, although the structure of multiorbital correlations is far richer; and, in the case of bonds established by more than two electrons, multiorbital correlations represent a more natural point of view. Here, for the first time, we introduce the true multiorbital correlation theory, consisting of a framework for handling the structure of multiorbital correlations, a toolbox of true multiorbital correlation measures, and the formulation of the multiorbital correlation clustering, together with an algorithm for obtaining that. These make it possible to characterise quantitatively, how well a bonding picture describes the chemical system. As proof of concept, we apply the theory for the investigation of the bond structures of several molecules. We show that the non-existence of well-defined multiorbital correlation clustering provides a reason for debated bonding picture.
theoretical-chemistry  rather-interesting  combinatorics  combinatorics-in-practice  probability-theory  enumeration  to-write-about 
4 weeks ago
[1711.03532] Co-Optimization Generation and Distribution Planning in Microgrids
This paper proposes a co-optimized generation and distribution planning model for microgrids, in which simultaneous investment in generation, i.e., distributed generation (DG) and distributed energy storage (DES), and distribution, i.e., upgrading the existing distribution network, is considered. The objective of the proposed model is to minimize the microgrid's total planning cost, which comprises the investment cost of installed generation assets and lines, the microgrid operation cost, and the cost of unserved energy. The microgrid planning solution determines the optimal generation size, location, and mix, as well as the required network upgrades. To respect line flow and voltage limits, a linearized power flow model is proposed and used, allowing further application of mixed-integer linear programming (MILP) in modeling the problem. The proposed model is applied to the IEEE 33-bus standard test system to demonstrate the acceptable performance and effectiveness of the proposed model.
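A toy version of the planning trade-off described above (not the paper's model): size DG at two buses and decide one binary line upgrade, minimizing investment plus unserved-energy cost. It assumes the PuLP MILP library; every cost, demand, and limit is made-up illustration data.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

demand = {"bus1": 40.0, "bus2": 60.0}            # MW; hypothetical
dg_cost, unserved_cost, line_cost = 100.0, 1000.0, 2000.0
base_cap, upgraded_cap = 20.0, 50.0              # flow limit into bus2

prob = LpProblem("microgrid_toy", LpMinimize)
dg = {b: LpVariable(f"dg_{b}", lowBound=0) for b in demand}
shed = {b: LpVariable(f"shed_{b}", lowBound=0) for b in demand}
upgrade = LpVariable("upgrade_line", cat="Binary")
flow = LpVariable("flow_1_to_2", lowBound=0)

# objective: DG investment + unserved energy + line upgrade
prob += (dg_cost * lpSum(dg.values())
         + unserved_cost * lpSum(shed.values())
         + line_cost * upgrade)

prob += dg["bus1"] + shed["bus1"] - flow == demand["bus1"]   # balance at bus1
prob += dg["bus2"] + shed["bus2"] + flow == demand["bus2"]   # balance at bus2
prob += flow <= base_cap + (upgraded_cap - base_cap) * upgrade
prob += dg["bus1"] <= 80.0                       # hypothetical siting limits
prob += dg["bus2"] <= 20.0

prob.solve()
# with these numbers, upgrading the line and building DG beats shedding load
print(upgrade.value(), value(prob.objective))
```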
optimization  network-theory  mathematical-programming  multiobjective-optimization  rather-interesting  operations-research  utilities  nudge-targets  consider:looking-to-see 
4 weeks ago
[1709.04594] Revisiting Spectral Graph Clustering with Generative Community Models
The methodology of community detection can be divided into two principles: imposing a network model on a given graph, or optimizing a designed objective function. The former provides guarantees on theoretical detectability but falls short when the graph is inconsistent with the underlying model. The latter is model-free but fails to provide quality assurance for the detected communities. In this paper, we propose a novel unified framework to combine the advantages of these two principles. The presented method, SGC-GEN, not only considers the detection error caused by the corresponding model mismatch to a given graph, but also yields a theoretical guarantee on community detectability by analyzing Spectral Graph Clustering (SGC) under GENerative community models (GCMs). SGC-GEN incorporates the predictability on correct community detection with a measure of community fitness to GCMs. It resembles the formulation of supervised learning problems by enabling various community detection loss functions and model mismatch metrics. We further establish a theoretical condition for correct community detection using the normalized graph Laplacian matrix under a GCM, which provides a novel data-driven loss function for SGC-GEN. In addition, we present an effective algorithm to implement SGC-GEN, and show that the computational complexity of SGC-GEN is comparable to the baseline methods. Our experiments on 18 real-world datasets demonstrate that SGC-GEN possesses superior and robust performance compared to 6 baseline methods under 7 representative clustering metrics.
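The SGC building block the paper analyzes is standard; a minimal sketch of spectral clustering with the normalized graph Laplacian (numpy plus scikit-learn's KMeans; this is not SGC-GEN itself):

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(A, k):
    """Cluster an undirected graph given its dense adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    # normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    U = vecs[:, :k]                           # k smallest eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12  # row-normalize
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# two triangles joined by one edge: expect labels to split 0-2 vs 3-5
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(spectral_clustering(A, 2))
```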
community-detection  network-theory  algorithms  looking-to-see  clustering  to-write-about  the-pragmatics-of-the-thing 
4 weeks ago
Church vs Curry Types - LispCast
My ambitious hope is that this perspective will quiet a lot of the fighting as people recognize that they are just perpetuating a rift in the field of mathematics that happened a long time ago. The perspectives are irreconcilable now, but that could change. A paper called Church and Curry: Combining Intrinsic and Extrinsic Typing builds a language with both kinds of types. And Gradual Typing and Blame Calculus are investigating the intersection of static and dynamic typing. Let’s stop fighting, make some cool tools and use them well.
type-theory  computer-science  models-and-modes  dichotomy-or-not?  to-write-about 
4 weeks ago
What Role for Antitrust in the Era of Rising Inequality? The Importance of Power in Supply Chains -
Recently, there’s been a resurgence of interest in antitrust law as a remedy to rising inequality, on the grounds that high corporate profits may be caused by anti-competitive behavior that is only permissible under the more relaxed antitrust standards of recent decades, compared with the previous era of lower tail inequality, concentration indices, and corporate profits. Daniel Crane wrote a paper objecting to this idea in expansive terms, and it’s fair to say the consensus among expert antitrust practitioners is that the tools of antitrust law are ill-suited to the problem of rising inequality. From this perspective, inequality is a social problem—if it is one—that exists outside the question of whether markets are functioning properly to consumers’ benefit, which is the sole concern of antitrust regulators, the judiciary, and of the body of law they enforce and interpret. But a look at some of the root causes of rising inequality belies this clear delineation and shows that concentration of market power in supply chains is in fact a core contributing factor that antitrust law was originally designed to remedy and reverse.
 
Consider the outsourcing of labor, a strategy to make firms more profitable for their owners and managers by pushing workers outside the walls of “lead firms.” David Weil wrote about this in his book The Fissured Workplace, and Alan Krueger and Lawrence Katz documented that the share of the workforce laboring in what they call “alternative work arrangements” (temps, contractors, freelancers, and so on) has increased from 10.1 to 15.8 percent over just the past 10 years. A recently revised study of interfirm inequality by Song, Price, Guvenen, Bloom, and von Wachter concludes that segregation of workers into low-wage and high-wage firms—and not productivity heterogeneity at the firm level—accounts for the bulk of rising inter-personal labor income inequality. Together, these studies imply that a major factor in rising inequality is indeed that power in supply chains has concentrated, because workers are increasingly being pushed into subordinated upstream firms.
monopsony  economics  inequality  public-policy  capitalism 
4 weeks ago
Opinion | Corporate America Is Suppressing Wages for Many Workers - The New York Times
For a long time, economists believed that labor-market monopsony rarely existed, at least outside old-fashioned company towns where a single factory employs most of the residents. But in recent decades, several compelling studies have revealed that monopsony is omnipresent. Professionals like doctors and nurses, workers in factories and meat processing plants, and sandwich makers and other low-skill workers earn far less — thousands of dollars less — than they would if employers did not dominate labor markets.

The studies show that common features of the labor market give enormous bargaining advantages to employers. Because most people sink roots in their communities, they are reluctant to quit their job and move to a job that is far away. Because workplaces differ in terms of their location and conditions, people have trouble comparing them, which means that one cannot easily “comparison shop” for jobs. And thanks to a wave of consolidation, industries are increasingly dominated by a small number of huge companies, which means that workers have fewer choices among employers in their area.

When employers exercise monopsonistic power, wages are suppressed, jobs are left unfilled, and economic growth suffers. Unions used to offset employer monopsony power, but unions now represent only 7 percent of private sector workers, down from a peak of 35 percent in the 1950s. Combating the practices that employers use to monopsonize the labor market can lead to higher wages, more jobs and faster economic growth.
worklife  economics  models-and-modes  public-policy  power-relations  to-write-about  capitalism 
4 weeks ago
[1801.01922] Vectorization of Line Drawings via PolyVector Fields
Image tracing is a foundational component of the workflow in graphic design, engineering, and computer animation, linking hand-drawn concept images to collections of smooth curves needed for geometry processing and editing. Even for clean line drawings, modern algorithms often fail to faithfully vectorize junctions, or points at which curves meet; this produces vector drawings with incorrect connectivity. This subtle issue undermines the practical application of vectorization tools and accounts for hesitance among artists and engineers to use automatic vectorization software. To address this issue, we propose a novel image vectorization method based on state-of-the-art mathematical algorithms for frame field processing. Our algorithm is tailored specifically to disambiguate junctions without sacrificing quality.
graphics  algorithms  inference  rather-interesting  feature-construction  nudge-targets  consider:representation  consider:looking-to-see  consider:performance-measures 
4 weeks ago
[1801.07008] INKA: An Ink-based Model of Graph Visualization
Common quality metrics for graph drawing have focused on readability criteria, such as a small number of edge crossings, a small drawing area, and small total edge length. Bold graph drawing considers more realistic drawings, in which vertices are disks of some radius and edges are rectangles of some width. However, the relationship linking these readability criteria with the rendering criteria of node-link diagrams has still not been well established.
This paper introduces a model, called INKA (Ink-Active), that mathematically encapsulates the relationship between all of these common drawing factors. We then investigate the INKA model on several common drawing algorithms and real-world graphs.
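The abstract doesn't specify the INKA model itself, so the following is only one plausible reading of "ink" as a drawing cost: rasterize a bold node-link drawing (disks plus thick segments) and measure the covered area. A hypothetical sketch of mine, not the paper's formulation.

```python
import numpy as np

def ink(points, edges, r=2.0, width=1.0, res=200):
    """Approximate inked area of a bold drawing on a res x res grid."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(0) - r, pts.max(0) + r
    xs = np.linspace(lo[0], hi[0], res)
    ys = np.linspace(lo[1], hi[1], res)
    X, Y = np.meshgrid(xs, ys)
    covered = np.zeros_like(X, dtype=bool)
    for (px, py) in pts:                       # vertex disks of radius r
        covered |= (X - px) ** 2 + (Y - py) ** 2 <= r ** 2
    for i, j in edges:                         # edges as thick segments
        a, b = pts[i], pts[j]
        ab = b - a
        t = np.clip(((X - a[0]) * ab[0] + (Y - a[1]) * ab[1]) / (ab @ ab), 0, 1)
        dx, dy = X - (a[0] + t * ab[0]), Y - (a[1] + t * ab[1])
        covered |= dx ** 2 + dy ** 2 <= (width / 2) ** 2
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return covered.sum() * cell                # covered cells ≈ inked area

print(ink([(0, 0), (10, 0), (5, 8)], [(0, 1), (1, 2), (0, 2)]))
```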
computational-geometry  graph-layout  LaTeX  algorithms  maybe-try-multiobjective  to-write-about 
4 weeks ago
[1709.06217] Deterministic meeting of sniffing agents in the plane
Two mobile agents, starting at arbitrary, possibly different times from arbitrary locations in the plane, have to meet. Agents are modeled as discs of diameter 1, and meeting occurs when these discs touch. Agents have different labels which are integers from the set {0, …, L−1}. Each agent knows L and knows its own label, but not the label of the other agent. Agents are equipped with compasses and have synchronized clocks. They make a series of moves. Each move specifies the direction and the duration of moving. This includes a null move which consists in staying inert for some time, or forever. In a non-null move agents travel at the same constant speed, normalized to 1. We assume that agents have sensors enabling them to estimate the distance from the other agent (defined as the distance between centers of discs), but not the direction towards it. We consider two models of estimation. In both models an agent reads its sensor at the moment of its appearance in the plane and then at the end of each move. This reading (together with the previous ones) determines the decision concerning the next move. In both models the reading of the sensor tells the agent if the other agent is already present. Moreover, in the monotone model, each agent can find out, for any two readings in moments t1 and t2, whether the distance from the other agent at time t1 was smaller, equal or larger than at time t2. In the weaker binary model, each agent can find out, at any reading, whether it is at distance less than ρ or at distance at least ρ from the other agent, for some real ρ > 1 unknown to them. Such a distance estimation mechanism can be implemented, e.g., using chemical sensors. Each agent emits some chemical substance (scent), and the sensor of the other agent detects it, i.e., sniffs. The intensity of the scent decreases with the distance.
agent-based  rather-interesting  random-processes  sensors  emergent-design  proof  to-write-about  consider:looking-to-see 
4 weeks ago
[1603.08270] Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that i) approach state-of-the-art classification accuracy across 8 standard datasets, encompassing vision and speech, ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1200 and 2600 frames per second and using between 25 and 275 mW (effectively > 6000 frames / sec / W) and iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. For the first time, the algorithmic power of deep learning can be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
deep-learning  neural-networks  review  optimization  algorithms  hardware 
4 weeks ago
The article removed from Forbes, “Why White Evangelicalism Is So Cruel” – Political Orphans
This was originally posted to Forbes on Sunday, Mar 11. Forbes took it down today. This is the explanation I received from the editor. Here is the original article in full:

Robert Jeffress, Pastor of First Baptist Church in Dallas and an avid supporter of Donald Trump, earned headlines this week for his defense of the president’s adultery with a porn star. Regarding the affair and subsequent financial payments, Jeffress explained, “Even if it’s true, it doesn’t matter.”

Such a casual attitude toward adultery and prostitution might seem odd from a guy who blamed 9/11 on America’s sinfulness. However, seen through the lens of white evangelicals’ real priorities, Jeffress’ disinterest in Trump’s sordid lifestyle makes sense. Religion is inseparable from culture, and culture is inseparable from history. Modern, white evangelicalism emerged from the interplay between race and religion in the slave states. What today we call “evangelical Christianity” is the product of centuries of conditioning, in which religious practices were adapted to nurture a slave economy. The calloused insensitivity of modern white evangelicals was shaped by the economic and cultural priorities that forged their theology over centuries.
politics  conservatism  religion  American-cultural-assumptions  history  essay  banned-essay 
5 weeks ago
A $1.6 billion Spotify lawsuit is based on a law made for player pianos - The Verge
So, what happened here? Did Spotify really fail to pay artists to the tune of a billion dollars all the while losing money? Is digital streaming just a black hole that sucks up money and spits it out into the cold vacuum of space?

SPOTIFY IS FUNDAMENTALLY BEING SUED FOR LITERAL PAPERWORK

The answer is complicated. The amount of money that songwriters are making through streaming services like Spotify is oddly low, but the Wixen lawsuit itself exists in a bizarre universe of convoluted legal provisions that have very little bearing on fairness, common sense, or even how the technology actually works. And as Spotify’s IPO filing notes in its section on risk factors, the company is dependent on third-party licenses, which makes its business model especially vulnerable to any hiccups in the bureaucracy of music licensing.
copyright  law  explanation  rather-interesting  doom 
5 weeks ago
Complex Systems
Founded by Stephen Wolfram in 1987.
The original journal devoted to the science, mathematics and engineering of systems with simple components but complex overall behavior.

Hundreds of published papers, encompassing three decades of leading-edge complex systems research, are available for free and immediate download.
journal  complexology  nudge-targets  open-access 
5 weeks ago
[1611.05896] Answering Image Riddles using Vision and Reasoning through Probabilistic Soft Logic
In this work, we explore a genre of puzzles ("image riddles") which involves a set of images and a question. Answering these puzzles requires both visual detection capabilities (including object and activity recognition) and knowledge-based or commonsense reasoning. We compile a dataset of over 3k riddles where each riddle consists of 4 images and a groundtruth answer. The annotations are validated using crowd-sourced evaluation. We also define an automatic evaluation metric to track future progress. Our task bears similarity to commonly known IQ tasks such as analogy solving and sequence filling, which are often used to test intelligence.
We develop a Probabilistic Reasoning-based approach that utilizes probabilistic commonsense knowledge to answer these riddles with reasonable accuracy. We demonstrate the results of our approach using both automatic and human evaluations. Our approach achieves some promising results for these riddles and provides a strong baseline for future attempts. We make the entire dataset and related materials publicly available to the community on the ImageRiddle website.
machine-learning  image-processing  natural-language-processing  deep-learning  puzzles  rather-interesting  to-write-about  nudge-targets  consider:looking-to-see  consider:integrating-NLP 
5 weeks ago
[1801.06769] Deep joint rain and haze removal from single images
Rain removal from a single image is a challenge which has been studied for a long time. In this paper, a novel convolutional neural network based on wavelets and the dark channel is proposed. On one hand, we observe that rain streaks correspond to the high-frequency component of an image, so the Haar wavelet transform is a good choice for separating the rain streaks from the background to some extent. More specifically, the LL subband of a rain image is more inclined to express the background information, while the LH, HL, and HH subbands tend to represent the rain streaks and the edges. On the other hand, the accumulation of rain streaks from long distances makes the rain image look like a haze veil. We extract the dark channel of the rain image as a feature map in the network. By increasing this mapping between the dark channels of the input and output images, we achieve haze removal in an indirect way. All of the parameters are optimized by back-propagation. Experiments on both synthetic and real-world datasets reveal that our method outperforms other state-of-the-art methods from a qualitative and quantitative perspective.
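Two of the ingredients the abstract names can be sketched independently of the trained network (which is not reproduced here): a one-level 2D Haar decomposition and the dark channel. Both implementations below are generic textbook versions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def haar2d(img):
    """One-level Haar decomposition of a 2D array with even side lengths."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # rows: low-pass
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # rows: high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)      # background-heavy subband
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)      # streak/edge subbands:
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def dark_channel(rgb, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    return minimum_filter(rgb.min(axis=2), size=patch)

img = np.random.rand(64, 64, 3)
LL, LH, HL, HH = haar2d(img.mean(axis=2))
print(LL.shape, dark_channel(img).shape)   # (32, 32) (64, 64)
```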
image-processing  machine-learning  deep-learning  algorithms  rather-interesting  consider:performance-measures 
5 weeks ago
[1606.05381] Deep Image Set Hashing
In applications involving matching of image sets, the information from multiple images must be effectively exploited to represent each set. State-of-the-art methods use probabilistic distributions or subspaces to model a set, with specific distance measures to compare two sets. These methods are slow to compute and not compact enough for use at large scale. Learning-based hashing is often used in large-scale image retrieval, as it provides a compact representation of each sample and the Hamming distance can be used to efficiently compare two samples. However, most hashing methods encode each image separately and discard the knowledge that multiple images in the same set represent the same object or person. We investigate the set hashing problem by combining both set representation and hashing in a single deep neural network. An image set is first passed to a CNN module to extract image features; these features are then aggregated using two types of set feature to capture both set-specific and database-wide distribution information. The computed set feature is then fed into a multilayer perceptron to learn a compact binary embedding. Triplet loss is used to train the network by forming set similarity relations using class labels. We extensively evaluate our approach on datasets used for image matching and show highly competitive performance compared to state-of-the-art methods.
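The training signal mentioned above is easy to state concretely; a numpy sketch of the triplet loss on a batch of embeddings (the illustration data is synthetic):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss: anchor should be closer to positive than to negative
    by at least the margin, per triplet, averaged over the batch."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 32))               # batch of 8 anchor embeddings
p = a + 0.1 * rng.normal(size=(8, 32))     # positives near their anchors
n = rng.normal(size=(8, 32))               # unrelated negatives
print(triplet_loss(a, p, n))               # near zero: triplets mostly satisfied
```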
algorithms  machine-learning  rather-interesting  feature-construction  nudge-targets  consider:looking-at-code  consider:performance-measures 
5 weeks ago
[1801.00548] A Machine Learning Approach to Adaptive Covariance Localization
Data assimilation plays a key role in large-scale atmospheric weather forecasting, where the state of the physical system is estimated from model outputs and observations, and is then used as the initial condition to produce accurate future forecasts. The Ensemble Kalman Filter (EnKF) provides a practical implementation of the statistical solution of the data assimilation problem and has gained wide popularity. This success can be attributed to its simple formulation and ease of implementation. EnKF is a Monte Carlo algorithm that solves the data assimilation problem by sampling the probability distributions involved in Bayes' theorem. Because of this, all flavors of EnKF are fundamentally prone to sampling errors when the ensemble size is small. In typical weather forecasting applications, the model state space has dimension 10^9 to 10^12, while the ensemble size typically ranges between 30 and 100 members. Sampling errors manifest themselves as long-range spurious correlations and have been shown to cause filter divergence. To alleviate this effect, covariance localization dampens spurious correlations between state variables located at a large distance in physical space, via an empirical distance-dependent function. The quality of the resulting analysis and forecast is greatly influenced by the choice of the localization function's parameters, e.g., the radius of influence; the localization radius is generally tuned empirically to yield desirable results. This work proposes two adaptive algorithms for covariance localization in the EnKF framework, both based on a machine learning approach. The first algorithm adapts the localization radius in time, while the second tunes the localization radius in both time and space. Numerical experiments carried out with the Lorenz-96 model and a quasi-geostrophic model reveal the potential of the proposed machine learning approaches.
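The non-adaptive baseline the paper improves on can be sketched in a few lines: damp long-range sample correlations by an elementwise (Schur) product of the ensemble covariance with a distance-dependent taper. The radius L below is exactly the parameter the paper proposes to learn; a Gaussian taper is used here for simplicity, where operational systems often use the Gaspari-Cohn function.

```python
import numpy as np

def localized_covariance(ensemble, coords, L):
    """ensemble: (members, state_dim); coords: (state_dim,) 1-D positions."""
    C = np.cov(ensemble, rowvar=False)                 # sample covariance
    dist = np.abs(coords[:, None] - coords[None, :])   # pairwise distances
    taper = np.exp(-0.5 * (dist / L) ** 2)             # Gaussian taper
    return C * taper                                   # Schur product

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 100))        # 30 members, 100 grid points
coords = np.arange(100.0)
C_loc = localized_covariance(X, coords, L=5.0)
# distant entries are damped relative to the raw sample covariance:
print(np.abs(C_loc[0, 50]) < np.abs(np.cov(X, rowvar=False)[0, 50]))  # True
```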
modeling  machine-learning  prediction  rather-interesting  looking-to-see  approximation  algorithms  to-write-about 
5 weeks ago
[1802.03426] UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP as described has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
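A minimal usage sketch, assuming the umap-learn package and scikit-learn's digits dataset are installed:

```python
import umap
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
# embed 64-dimensional digit images into 2-D for visualization
embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2).fit_transform(X)
print(embedding.shape)   # (1797, 2): one 2-D point per digit image
```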
clustering  visualization  machine-learning  algorithms  performance-measure  rather-interesting  to-write-about 
5 weeks ago
[1802.06415] Solving Large-Scale Minimum-Weight Triangulation Instances to Provable Optimality
We consider practical methods for the problem of finding a minimum-weight triangulation (MWT) of a planar point set, a classic problem of computational geometry with many applications. While Mulzer and Rote proved in 2006 that computing an MWT is NP-hard, Beirouti and Snoeyink showed in 1998 that computing provably optimal solutions for MWT instances of up to 80,000 uniformly distributed points is possible, making use of clever heuristics that are based on geometric insights. We show that these techniques can be refined and extended to instances of much bigger size and different type, based on an array of modifications and parallelizations in combination with more efficient geometric encodings and data structures. As a result, we are able to solve MWT instances with up to 30,000,000 uniformly distributed points in less than 4 minutes to provable optimality. Moreover, we can compute optimal solutions for a vast array of other benchmark instances that are not uniformly distributed, including normally distributed instances (up to 30,000,000 points), all point sets in the TSPLIB (up to 85,900 points), and VLSI instances with up to 744,710 points. This demonstrates that from a practical point of view, MWT instances can be handled quite well, despite their theoretical difficulty.
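While MWT for general point sets is NP-hard, the convex-polygon special case yields to a textbook O(n³) dynamic program, which gives some intuition for what the solvers above are optimizing. A sketch (mine, not the paper's method), minimizing total internal diagonal length:

```python
import math
from functools import lru_cache

def convex_mwt(pts):
    """Minimum-weight triangulation of a convex polygon (vertices in order)."""
    n = len(pts)

    def w(i, j):                       # diagonal length; polygon edges are free
        if j - i == 1 or (i == 0 and j == n - 1):
            return 0.0
        return math.dist(pts[i], pts[j])

    @lru_cache(maxsize=None)
    def best(i, j):                    # min triangulation weight of chain i..j
        if j - i < 2:
            return 0.0
        return min(best(i, k) + best(k, j) + w(i, k) + w(k, j)
                   for k in range(i + 1, j))

    return best(0, n - 1)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(convex_mwt(square))   # sqrt(2): a single diagonal splits the square
```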
computational-geometry  algorithms  plane-geometry  optimization  to-write-about  nudge-targets  consider:representation 
5 weeks ago
[1112.2896] On the structure of Ammann A2 tilings
We establish a structure theorem for the family of Ammann A2 tilings of the plane. Using that theorem we show that every Ammann A2 tiling is self-similar in the sense of [B. Solomyak, Nonperiodicity implies unique composition for self-similar translationally finite tilings, Discrete and Computational Geometry 20 (1998) 265-279]. By the same techniques we show that Ammann A2 tilings are not robust in the sense of [B. Durand, A. Romashchenko, A. Shen. Fixed-point tile sets and their applications, Journal of Computer and System Sciences, 78:3 (2012) 731--764].
tiling  aperiodic-tiles  combinatorics  rather-interesting  mathematical-recreations 
5 weeks ago
[1802.06712] Multithreading for the expression-dag-based number type Real_algebraic
Many algorithms, especially in the field of computational geometry, are based on the premise that arithmetic operations are performed exactly. Real machines are based on inexact floating-point arithmetic. Various number types have been developed to close this gap by providing exact computation or ensuring exact decisions. In this report we describe the implementation of an extension to the exact-decisions number type Real_algebraic that enables us to take advantage of multiple processing units.
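The failure mode motivating exact-decision number types can be shown in miniature: the orientation predicate of computational geometry, decided wrongly in floating point but correctly in exact arithmetic. Python's unbounded integers stand in here for a number type like Real_algebraic; this is my illustration, not the report's code.

```python
def orient(ax, ay, bx, by, cx, cy):
    """Sign of the cross product: +1 left turn, -1 right turn, 0 collinear."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

a, b = (0, 0), (10**16, 10**16)
c = (10**16, 10**16 + 1)                  # one unit above the line through a, b
print(orient(*map(float, a + b + c)))     # floats: 0, wrongly "collinear"
print(orient(*(a + b + c)))               # exact integers: 1, a left turn
```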
representation  programming-language  algebra  rather-interesting  mathematics  to-write-about  to-try 
5 weeks ago