A Year in Reading: Namwali Serpell - The Millions

yesterday

The question of how and what we (ought to) read is political for me in this sense: If we believe in democracy and equality, why are our aesthetic priorities shaped by an elite minority? Why do we dismiss our engagement with genre works as “love-hate,” “hate-watching,” and “guilty pleasure” when we spend so much time doing it? Why do we refer to these works as “low” or “lite” when they are read by millions more people than the classics? In short, why don’t the numbers matter? Maybe these texts aren’t read much in academia because they don’t require scholars to explain or analyze them: The story we tell ourselves is that they aren’t difficult or ambiguous; they’re self-evident, simplistic even. But maybe that’s just some petty nonsense to justify the need for literary critics?

literary-criticism
genre
rather-good
self-definition
essay
yesterday

[1812.04948] A Style-Based Generator Architecture for Generative Adversarial Networks

2 days ago

We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.

generative-art
generative-models
neural-networks
multiscale
very-impressive
to-write-about
consider:performance-measures
2 days ago

SnapshotTesting 1.0: Delightful Swift snapshot testing

4 days ago

The iOS community has been a large proponent of snapshot testing, mostly thanks to the wonderful iOSSnapshotTestCase library (formerly known as FBSnapshotTestCase). It introduced a new kind of test coverage for iOS applications by allowing us to assert against an image screenshot of a UI component. This is a whole new level of testing that can catch regressions in the pixel data of our UI, so that we can make sure that future changes and refactors do not introduce visual regressions into our views.

However, iOSSnapshotTestCase has not evolved much over the years, and it's still largely written in Objective-C, which means the API isn't as generic and composable as it could be in Swift. Also, it only allows snapshotting CALayers and UIViews into a PNG format, but there are many more types we might want to snapshot and many more formats we want to snapshot into.

That’s why today we are excited to officially announce SnapshotTesting 1.0: a modern, composable snapshot testing library built entirely in Swift!

testing
Swift
user-experience
to-do
iOS
software-development
4 days ago

[1808.00453] The Erdos-Szekeres problem and an induced Ramsey question

4 days ago

Motivated by the Erdos-Szekeres convex polytope conjecture in $\mathbb{R}^d$, we initiate the study of the following induced Ramsey problem for hypergraphs. Given integers $n > k \ge 5$, what is the minimum integer $g_k(n)$ such that any $k$-uniform hypergraph on $g_k(n)$ vertices with the property that any set of $k+1$ vertices induces 0, 2, or 4 edges contains an independent set of size $n$? Our main result shows that $g_k(n) > 2^{cn^{k-4}}$, where $c = c(k)$.

combinatorics
hypergraphs
constraint-satisfaction
consequences-of-the-rules
rather-interesting
to-write-about
4 days ago

[1810.00845] CHET: Compiler and Runtime for Homomorphic Evaluation of Tensor Programs

4 days ago

Fully Homomorphic Encryption (FHE) refers to a set of encryption schemes that allow computations to be applied directly on encrypted data without requiring a secret key. This enables novel application scenarios where a client can safely offload storage and computation to a third-party cloud provider without having to trust the software and the hardware vendors with the decryption keys. Recent advances in both FHE schemes and implementations have moved such applications from theoretical possibilities into the realm of practicalities.

This paper proposes a compact and well-reasoned interface called the Homomorphic Instruction Set Architecture (HISA) for developing FHE applications. Just as the hardware ISA interface enabled hardware advances to proceed independent of software advances in the compiler and language runtimes, HISA decouples compiler optimizations and runtimes for supporting FHE applications from advancements in the underlying FHE schemes.

This paper demonstrates the capabilities of HISA by building an end-to-end software stack for evaluating neural network models on encrypted data. Our stack includes an end-to-end compiler, runtime, and a set of optimizations. Our approach shows generated code, on a set of popular neural network architectures, is faster than hand-optimized implementations.

to-understand
machine-learning
algorithms
distributed-processing
seems-important
languages
neural-networks
4 days ago

[1706.04290] A general method for lower bounds on fluctuations of random variables

4 days ago

There are many ways of establishing upper bounds on fluctuations of random variables, but there is no systematic approach for lower bounds. As a result, lower bounds are unknown in many important problems. This paper introduces a general method for lower bounds on fluctuations. The method is used to obtain new results for the stochastic traveling salesman problem, the stochastic minimal matching problem, the random assignment problem, the Sherrington-Kirkpatrick model of spin glasses, first-passage percolation and random matrices. A long list of open problems is provided at the end.

open-questions
extreme-values
probability-theory
representation
rather-odd
nudge-targets
consider:looking-to-see
to-write-about
4 days ago

QAnon and Pinterest Is Just the Beginning | Hapgood

4 days ago

How Pinterest’s Aggressive Recommendation Engine Makes This Worse

About a year ago I wrote an article on how Pinterest’s recommendation engine makes this situation far worse. I showed how after just 14 minutes of browsing, a new user with some questions about vaccines could move from pins on “How to Make the Perfect Egg” to something out of the Infowarverse:

social-media
recommendation-engines
spam
propaganda
rather-interesting
cultural-predation
4 days ago

[1802.07029] On a fully fuzzy framework for minimax mixed integer linear programming

4 days ago

In this work, we present a modeling framework for minimax mixed 0-1 fuzzy linear problems. It is based on extending the usual rewriting of crisp minimax problems via auxiliary variables to model the maximum of a finite set of fuzzy linear functions. We establish that the considered problem can be equivalently formulated as a multiple objective mixed integer programming problem. The framework is applied to a fully fuzzy version of the capacitated center facility location problem.
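
The "usual rewriting" the abstract refers to is, in the crisp case, the standard epigraph reformulation with one auxiliary variable:

```latex
\min_{x \in X} \max_{1 \le i \le m} f_i(x)
\quad \Longleftrightarrow \quad
\min_{x \in X,\; t \in \mathbb{R}} t
\quad \text{s.t.} \quad f_i(x) \le t, \quad i = 1, \dots, m
```

When each $f_i$ is linear and some variables are restricted to 0-1 values, this is exactly a minimax mixed 0-1 linear program; the paper's contribution is extending this rewriting to the fuzzy setting, where the maximum of fuzzy linear functions has to be modeled.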

representation
fuzzy
operations-research
integer-programming
models-and-modes
to-write-about
could-be-clearer
4 days ago

Democracy as an information system — Crooked Timber

4 days ago

Democracy is an information system.

That’s the starting place of our new paper: “Common-Knowledge Attacks on Democracy.” In it, we look at democracy through the lens of information security, trying to understand the current waves of Internet disinformation attacks. Specifically, we wanted to explain why the same disinformation campaigns that act as a stabilizing influence in Russia are destabilizing in the United States.

The answer revolves around the different ways autocracies and democracies work as information systems. We start by differentiating between two types of knowledge that societies use in their political systems. The first is common political knowledge, which is the body of information that people in a society broadly agree on. People agree on who the rulers are and what their claim to legitimacy is. People agree broadly on how their government works, even if they don’t like it. In a democracy, people agree about how elections work: how districts are created and defined, how candidates are chosen, and that their votes count—even if only roughly and imperfectly.

social-norms
democracy
cultural-dynamics
propaganda
public-policy
political-economy
rather-interesting
epidemiology
feature-construction
discriminators
fascism
signaling
4 days ago

[1811.08759] Using AI to Design Stone Jewelry

4 days ago

Jewelry has been an integral part of human culture for ages. One of the most popular styles of jewelry is created by putting together precious and semi-precious stones in diverse patterns. While technology is finding its way in the production process of such jewelry, designing it remains a time-consuming and involved task. In this paper, we propose a unique approach using optimization methods coupled with machine learning techniques to generate novel stone jewelry designs at scale. Our evaluation shows that designs generated by our approach are highly likeable and visually appealing.

generative-art
design
aesthetics
rather-interesting
performance-measure
to-write-about
user-centric-design
4 days ago

Knots and Narnias

4 days ago

Say you’re walking north across a meadow surrounded by hills when you come across a solitary doorframe with no door inside it. Stranger still, through the doorway you see not the hills to the north of the field but a desert vista. Consumed by curiosity and heedless of danger, you cross the threshold into the desert. The sun beats down on your bare head; you see a vulture off in the distance. In sudden panic you spin around; fortunately the doorway is still there. You run through the doorway back into the field, grateful that the portal works both ways.

Now what?

knot-theory
mathematical-recreations
rather-interesting
inspiration
to-write-about
4 days ago

[1804.02851] Whale swarm algorithm with the mechanism of identifying and escaping from extreme point for multimodal function optimization

4 days ago

Most real-world optimization problems come with multiple global or local optima. Therefore, a growing number of niching metaheuristic algorithms, which are devoted to finding multiple optima in a single run, are developed to solve these multimodal optimization problems. However, two difficulties urgently need to be solved for most existing niching metaheuristic algorithms: how to set the optimal values of niching parameters for different optimization problems, and how to jump out of local optima efficiently. These two difficulties largely limit their practicality. Based on the Whale Swarm Algorithm (WSA) we proposed previously, this paper presents a new multimodal optimizer named WSA with Iterative Counter (WSA-IC) to address these two difficulties. On the one hand, WSA-IC improves the iteration rule of the original WSA for multimodal optimization, which removes the need to specify different values of the attenuation coefficient for different problems to form multiple subpopulations, without introducing any niching parameter. On the other hand, WSA-IC enables the identification of extreme points during iterations relying on two new parameters (i.e., a stability threshold $T_s$ and a fitness threshold $T_f$), to jump out of the located extreme point. Moreover, the convergence of WSA-IC is proved. Finally, the proposed WSA-IC is compared with several niching metaheuristic algorithms on the CEC2015 niching benchmark test functions and five additional classical multimodal functions with high dimensions. The experimental results demonstrate that WSA-IC statistically outperforms other niching metaheuristic algorithms on most test functions.

metaheuristics
biological-metaphor-of-the-month
to-write-about
actually
rather-interesting
4 days ago

[1706.07900] Tree-Residue Vertex-Breaking: a new tool for proving hardness

4 days ago

In this paper, we introduce a new problem called Tree-Residue Vertex-Breaking (TRVB): given a multigraph $G$ some of whose vertices are marked "breakable," is it possible to convert $G$ into a tree via a sequence of "vertex-breaking" operations (replacing a degree-$k$ breakable vertex by $k$ degree-$1$ vertices, disconnecting the $k$ incident edges)?

We characterize the computational complexity of TRVB with any combination of the following additional constraints: $G$ must be planar, $G$ must be a simple graph, the degree of every breakable vertex must belong to an allowed list $B$, and the degree of every unbreakable vertex must belong to an allowed list $U$. The two results which we expect to be most generally applicable are that (1) TRVB is polynomially solvable when breakable vertices are restricted to have degree at most $3$; and (2) for any $k \ge 4$, TRVB is NP-complete when the given multigraph is restricted to be planar and to consist entirely of degree-$k$ breakable vertices. To demonstrate the use of TRVB, we give a simple proof of the known result that Hamiltonicity in max-degree-$3$ square grid graphs is NP-hard.

We also demonstrate a connection between TRVB and the Hypergraph Spanning Tree problem. This connection allows us to show that the Hypergraph Spanning Tree problem in $k$-uniform $2$-regular hypergraphs is NP-complete for any $k \ge 4$, even when the incidence graph of the hypergraph is planar.

feature-construction
graph-theory
computational-complexity
rather-interesting
explanation
to-write-about
4 days ago

Explainable AI won’t deliver. Here’s why. – Hacker Noon

4 days ago

Imagine choosing between two spaceships. Spaceship 1 comes with exact equations explaining how it works, but has never been flown. How Spaceship 2 flies is a mystery, but it has undergone extensive testing, with years of successful flights like the one you’re going on.

Which spaceship would you choose?

This is a philosophical question, so I can’t answer it for you. I know I have a personal preference — maybe that’s the statistician in me — but I would choose careful testing as a better basis for trust.

artificial-intelligence
explanation
philosophy-of-engineering
maintainability
models-and-modes
that-taco-bell-girl-says-what?
4 days ago

On authoritarian neoliberalism and poetic epistemology | Richard Hall's Space

5 days ago

As one response to the secular crisis of capitalism, higher education is being proletarianised. Its academics and students, increasingly encumbered by precarious employment, debt, and new levels of performance management, are shorn of autonomy beyond the sale of their labour-power. One heuristic for analysing this response is authoritarian neoliberalism, imposed as a means of enacting disciplinary practices in the name of the market with an anti-democratic rationale. This has a distinctly technocratic focus, rooted in techniques of performativity, including audits and assessments of teaching, research and scholarship, grounded in productivity, the management of time and value-creation. However, there are a range of intersectional and geographical responses to such an imposition, through which it is possible to describe alternatives to these architectures of subsumption. In particular, a second heuristic emerges which challenges the restructuring of the University in the global North, erupting from struggles for decolonisation. Here, Audre Lorde’s invocation to an integrated, poetic existence that situates bodies in places, and respects feelings and emotions as the site of epistemological development and understanding, underpins the possibility for dismantling hegemonic knowledge production. The article examines whether humanist narratives of solidarity, in particular from marginalised voices, might help academics and students to analyse their alienated labour and to imagine that another world is possible.

worklife
institutional-design
neoliberalism
humanities
academic-culture
just-what-is-it-you-do?
5 days ago

How many landmarks are enough to characterize shape and size variation?

5 days ago

Accurate characterization of morphological variation is crucial for generating reliable results and conclusions concerning changes and differences in form. Despite the prevalence of landmark-based geometric morphometric (GM) data in the scientific literature, a formal treatment of whether sampled landmarks adequately capture shape variation has remained elusive. Here, I introduce LaSEC (Landmark Sampling Evaluation Curve), a computational tool to assess the fidelity of morphological characterization by landmarks. This task is achieved by calculating how subsampled data converge to the pattern of shape variation in the full dataset as landmark sampling is increased incrementally. While the number of landmarks needed for adequate shape variation is dependent on individual datasets, LaSEC helps the user (1) identify under- and oversampling of landmarks; (2) assess robustness of morphological characterization; and (3) determine the number of landmarks that can be removed without compromising shape information. In practice, this knowledge could reduce time and cost associated with data collection, maintain statistical power in certain analyses, and enable the incorporation of incomplete, but important, specimens to the dataset. Results based on simulated shape data also reveal general properties of landmark data, including statistical consistency where sampling additional landmarks has the tendency to asymptotically improve the accuracy of morphological characterization. As landmark-based GM data become more widely adopted, LaSEC provides a systematic approach to evaluate and refine the collection of shape data––a goal paramount for accumulation and analysis of accurate morphological information.

inference
data-analysis
looking-to-see
rather-interesting
training-data
data-balancing
to-write-about
5 days ago

Elvis Presley's Pound Cake Recipe | SAVEUR

5 days ago

When we wrote our book Elvis World (Knopf, 1987), we often dined with fans of The King. As they discussed what they loved about him, it became clear that one of the reasons people felt so close to Elvis was that he never lost his down-home taste. You can see it in the decor at Graceland, a poor Mississippi boy's idea of how a rich person's house should look; and it is apparent in what he ate. He could afford filet mignon but preferred well-done burgers. Instead of champagne, he drank Pepsi. For dessert, he favored Deep South diner classics. One of Elvis's favorite sweets was the pound cake made by his childhood friend Janelle McComb. She gave us her recipe in 1987, on the 10th anniversary of Elvis's death. Every year at Christmas, she'd bake two loaves and bring them to Graceland. Elvis could eat one all by himself. Fans know about McComb and place her in the firmament of those who practice TCE ("Taking Care of Elvis"); to serve her cake is to keep the legend alive. —Jane and Michael Stern, authors of Roadfood.com

recipes
cake
to-do
5 days ago

[1811.09620] TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer

5 days ago

In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness. In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation. We introduce TimbreTron, a method for musical timbre transfer which applies "image" domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer. We show that the Constant Q Transform (CQT) representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance. Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples.

style-transfer
neural-networks
feature-extraction
signal-processing
audio
to-write-about
consider:performance-measures
5 days ago

Celebrating 50 years of the Neutral Theory | Molecular Biology and Evolution | Oxford Academic

5 days ago

Celebrating 50 years of the Neutral Theory

journal
theoretical-biology
evolutionary-biology
5 days ago

[cs/0610153] Most Programs Stop Quickly or Never Halt

5 days ago

Since many real-world problems arising in the fields of compiler optimisation, automated software engineering, formal proof systems, and so forth are equivalent to the Halting Problem--the most notorious undecidable problem--there is a growing interest, not only academically, in understanding the problem better and in providing alternative solutions. Halting computations can be recognised by simply running them; the main difficulty is to detect non-halting programs. Our approach is to have the probability space extend over both space and time and to consider the probability that a random N-bit program has halted by a random time. We postulate an a priori computable probability distribution on all possible runtimes and we prove that given an integer k>0, we can effectively compute a time bound T such that the probability that an N-bit program will eventually halt given that it has not halted by T is smaller than 2^{-k}. We also show that the set of halting programs (which is computably enumerable, but not computable) can be written as a disjoint union of a computable set and a set of effectively vanishing probability. Finally, we show that "long" runtimes are effectively rare. More formally, the set of times at which an N-bit program can stop after the time 2^{N+constant} has effectively zero density.

halting-problem
computer-science
rather-interesting
looking-to-see
to-write-about
ReQ
computational-complexity
probability-theory
5 days ago

Reinforcement Learning with Prediction-Based Rewards

5 days ago

We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity, which for the first time exceeds average human performance on Montezuma’s Revenge. RND achieves state-of-the-art performance, periodically finds all 24 rooms and solves the first level without using demonstrations or having access to the underlying state of the game.

RND incentivizes visiting unfamiliar states by measuring how hard it is to predict the output of a fixed random neural network on visited states. In unfamiliar states it’s hard to guess the output, and hence the reward is high. It can be applied to any reinforcement learning algorithm, is simple to implement and efficient to scale. Below we release a reference implementation of RND that can reproduce the results from our paper.
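
Stripped to its core, the bonus is just the error of a trained predictor against a frozen random network. A toy numpy sketch (illustrative sizes, linear "networks," and learning rate, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 16                        # observation / feature sizes (illustrative)
W_target = rng.normal(size=(D, H))  # fixed random network: never trained
W_pred = np.zeros((D, H))           # predictor: trained on visited states

def intrinsic_reward(s):
    """Novelty bonus: the predictor's squared error on state s."""
    err = s @ W_target - s @ W_pred
    return float(np.mean(err ** 2))

def train_predictor(s, lr=0.05):
    """One gradient step pulling the predictor toward the target on s."""
    global W_pred
    err = s @ W_pred - s @ W_target
    W_pred -= lr * np.outer(s, err)

s = rng.normal(size=D)
s /= np.linalg.norm(s)              # keep the toy gradient step stable
before = intrinsic_reward(s)
for _ in range(200):                # the agent keeps visiting this state
    train_predictor(s)
after = intrinsic_reward(s)         # familiar state: the bonus has collapsed
```

Unvisited states keep a large bonus precisely because the predictor has never been fit to the random network's output there.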

machine-learning
reinforcement-learning
curiosity
the-mangle-in-practice
to-write-about
also-note-presentation
5 days ago

Overcoming folk-physics: the case of projectile motion for Aristotle, John Philoponus, Ibn-Sina & Galileo | Theory, Evolution, and Games Group

5 days ago

It is not until Ibn-Sīnā’s — or Avicenna’s, as he is known in the Latin tradition — Book of Healing published around 1027 that we start to get close to what we’d now call Newton’s first law on motion (for a more comprehensive history of this, see Sayili, A. (1987). Ibn Sīnā and Buridan on the Motion of the Projectile. Annals of the New York Academy of Sciences, 500(1): 477-482). Ibn Sina builds on Yaḥyā al-Naḥwī’s account by accepting that a motile power was acquired by the stone from the thrower, but this spirit does not leave on its own account as unnatural, instead, this impressed virtue is dissipated through the influence of other agents such as the air. At this point, the inquiry can be said to have transformed the question from the ancient “what keeps pushing the stone?” to the modern “what causes the stone to stop moving?”. In the process, little to nothing has been learned about the problem-domain of projectile motion, but a new framework for thought was erected.

philosophy-of-science
representation
the-mangle-in-practice
history-of-science
asking-the-other-questions
5 days ago

No, it’s not The Incentives—it’s you – [citation needed]

5 days ago

A random bystander who happened to eavesdrop on a conversation between a group of scientists kvetching about The Incentives could be forgiven for thinking that maybe, just maybe, a bunch of very industrious people who generally pride themselves on their creativity, persistence, and intelligence could find some way to work around, or through, the problem. And I think they would be right. The fact that we collectively don’t see it as a colossal moral failing that we haven’t figured out a way to get our work done without having to routinely cut corners in the rush for fame and fortune is deeply troubling.

It’s also aggravating on an intellectual level, because the argument that we’re all being egregiously and continuously screwed over by The Incentives is just not that good. I think there are a lot of reasons why researchers should be very hesitant to invoke The Incentives as a justification for why any of us behave the way we do. I’ll give nine of them here, but I imagine there are probably others.

academic-culture
publishing
social-dynamics
social-norms
conservatism
attention-desert
ethics
5 days ago

thheller/shadow-cljs: ClojureScript compilation made easy

5 days ago

shadow-cljs provides everything you need to compile your ClojureScript code with a focus on simplicity and ease of use.

Clojure
clojurescript
software-development
user-experience
work-cycle
compiler
to-do
5 days ago

Remembering Roy Gold, Who was Not Excessively Interested in Books – The Public Domain Review

6 days ago

In this gentle memorial, Nicholas Jeeves takes us on a turn through a Borgesian library of defacements. Jeeves’ quarry, the (inventive and invented) Professor Roy Gold, would seem to have been an outsider artist of his books, and his dust-jacket daubings leave an ambiguous legacy. Should such biblio-graffiti be accounted irreverent mischief? Does it betray anti-bookishness in the secret heart of a bookish man? Or is something else afoot? Perhaps, under the right conditions, doodling can become something like a theory of reading. After all, what is to be done with all our paper books in an age of textual dematerialization? Roy Gold stands over our shoulders, brush in hand…

parataxis
the-use-of-books
reworks
to-write-about
rather-interesting
memoir
book-art
6 days ago

When Optimising Code Measure

6 days ago

This is a truism that lots of people quote, but it can be hard to remember, especially in the heat of battle (as it were). Rather fortunately it came to mind just when needed, as I found something completely unexpected.

I was writing a simple implementation of the Fermat difference of squares method of factoring. This involves writing the number to be factored as - you guessed it - the difference of two squares. If n = a² − b², then n = (a − b)(a + b), and we have a factorisation (provided we don't have a − b = 1).
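
The search described above is easy to sketch. A minimal Python version (my own illustration, not the post's implementation):

```python
from math import isqrt

def fermat_factor(n):
    """Factor odd n > 1 by Fermat's difference-of-squares method:
    search for a with a*a - n a perfect square b*b, so that
    n = a*a - b*b = (a - b) * (a + b)."""
    assert n > 1 and n % 2 == 1, "the method applies to odd n"
    a = isqrt(n)
    if a * a < n:
        a += 1  # start at the smallest a with a*a >= n
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return (a - b, a + b)
        a += 1

print(fermat_factor(5959))  # (59, 101), since 5959 = 59 * 101
```

The loop always terminates: at worst it reaches a = (n + 1)/2, where b = (n − 1)/2 yields the trivial factorisation 1 × n.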

the-mangle-in-practice
looking-to-see
learning-in-public
computer-science
computational-complexity
rather-interesting
to-write-about
contingency
6 days ago

Alan Turing, On computable numbers | Joel David Hamkins

6 days ago

What I was extremely surprised to find, however, and what I want to tell you about today, is that despite the title of the article, Turing adopts an incorrect approach to the theory of computable numbers. His central definition is what is now usually regarded as a mistaken way to proceed with this concept.

Let me explain. Turing defines that a computable real number is one whose decimal (or binary) expansion can be enumerated by a finite procedure, by what we now call a Turing machine. You can see this in the very first sentence of his paper, and he elaborates on and confirms this definition in detail later on in the paper.

computability
mathematics
number-theory
algorithms
rather-interesting
history-of-science
representation
to-write-about
ReQ
6 days ago

Bending the Law of Sines | The Aperiodical

6 days ago

You can tile a plane with a repeated single triangle shape. But for most triangles you need to put same sides together, usually to group pairs of triangles into parallelograms, as shown above. You can’t just put any sides together. This of course reflects the Law of Sines, which says the side lengths are proportional to the sines of their opposite angles: you can’t get both the angles AND the side lengths in agreeable proportions.

But something unexpected happens when we add curves. For tiling we need equal amounts of concave and convex arc. The only way to do that is with two shorter concave sides joining a longer convex side, as shown below.

tiling
rather-interesting
learning-in-public
experimentation
construction
to-write-about
6 days ago

DSHR's Blog: It Isn't About The Technology

6 days ago

In other words, for searches that are profitable, Google has moved all the results it thinks are relevant off the first page and replaced them with results that people have paid to put there. Which is pretty much the definition of "evil" in the famous "don't be evil" slogan notoriously dropped in 2015. I'm pretty sure that no-one at executive level in Google thought that building a paid-search engine was a good idea, but the internal logic of the "slow AI" they built forced them into doing just that.

federation
decentralization
social-dynamics
corporatism
activism
engineering-design
institutional-design
political-economy
6 days ago

Stability Criteria for Complex Microbial Communities | bioRxiv

6 days ago

Competition and mutualism are inevitable processes in microbial ecology, and a central question is which and how many taxa will persist in the face of these interactions. Ecological theory has demonstrated that when direct, pairwise interactions among a group of species are too numerous, or too strong, then the coexistence of these species will be unstable to any slight perturbation. This instability worsens when mutualistic interactions complement competition. Here, we refine and to some extent overturn that understanding, by considering explicitly the resources that microbes consume and produce. In contrast to more complex organisms, microbial cells consume primarily abiotic resources, and mutualistic interactions are often mediated by these same abiotic resources through the mechanism of cross-feeding. Our model therefore considers the consumption and production of a set of abiotic resources by a group of microbial species. We show that if microbes consume, but do not produce resources, then any positive equilibrium will always be stable to small perturbations. We go on to show that in the presence of cross-feeding, stability is no longer guaranteed. However, stability still holds when mutualistic interactions are either symmetric, or sufficiently weak.
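
"Stable to small perturbations" here is linear stability: every eigenvalue of the community Jacobian at the equilibrium must have negative real part. A generic sketch of that check (toy matrices of my own choosing, not the paper's consumer-resource model):

```python
import numpy as np

def is_linearly_stable(jacobian, tol=1e-12):
    """True iff all eigenvalues of the Jacobian, evaluated at the
    equilibrium, have strictly negative real part."""
    eigs = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
    return bool(np.max(eigs.real) < -tol)

# self-limitation with weak coupling vs. strong asymmetric mutualism
consume_only = np.array([[-1.0, 0.2], [0.1, -1.0]])
cross_fed = np.array([[0.5, 2.0], [2.0, -1.0]])
print(is_linearly_stable(consume_only), is_linearly_stable(cross_fed))  # True False
```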

theoretical-biology
community-formation
ecology
competition
cooperation
rather-interesting
nonlinear-dynamics
to-write-about
6 days ago

[1803.09473] code2vec: Learning Distributed Representations of Code

6 days ago

We present a neural model for representing snippets of code as continuous distributed vectors ("code embeddings"). The main idea is to represent a code snippet as a single fixed-length code vector, which can be used to predict semantic properties of the snippet. This is performed by decomposing code to a collection of paths in its abstract syntax tree, and learning the atomic representation of each path simultaneously with learning how to aggregate a set of them. We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 14M methods. We show that code vectors trained on this dataset can predict method names from files that were completely unobserved during training. Furthermore, we show that our model learns useful method name vectors that capture semantic similarities, combinations, and analogies. Comparing previous techniques over the same data set, our approach obtains a relative improvement of over 75%, being the first to successfully predict method names based on a large, cross-project, corpus. Our trained model, visualizations and vector similarities are available as an interactive online demo at this http URL. The code, data, and trained models are available at this https URL.

representation
genetic-programming
(it-ain't)
deep-learning
neural-networks
feature-construction
to-write-about
discrete-and-continuous-sittin-in-a-tree
6 days ago

[1812.01717] Towards Accurate Generative Models of Video: A New Metric & Challenges

6 days ago

Recent advances in deep generative models have led to remarkable progress in synthesizing high quality images. Following their successful application in image processing and representation learning, an important next step is to consider videos. Learning generative models of video is a much harder task, requiring a model to capture the temporal dynamics of a scene, in addition to the visual presentation of objects. Although recent attempts at formulating generative models of video have had some success, current progress is hampered by (1) the lack of qualitative metrics that consider visual quality, temporal coherence, and diversity of samples, and (2) the wide gap between purely synthetic video datasets and challenging real-world datasets in terms of complexity. To this extent we propose Fréchet Video Distance (FVD), a new metric for generative models of video based on FID, and StarCraft 2 Videos (SCV), a collection of progressively harder datasets that challenge the capabilities of the current iteration of generative models for video. We conduct a large-scale human study, which confirms that FVD correlates well with qualitative human judgment of generated videos, and provide initial benchmark results on SCV.
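
FVD, like FID, bottoms out in the Fréchet (2-Wasserstein) distance between two Gaussians fitted to feature embeddings; for video the features come from a spatiotemporal network rather than Inception. The underlying formula, as a hedged NumPy/SciPy sketch:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_gaussian_distance(mu1, sigma1, mu2, sigma2):
    """Squared Frechet distance between Gaussians (m1, S1) and (m2, S2):
    ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    diff = mu1 - mu2
    covmean = np.real(sqrtm(sigma1 @ sigma2))  # drop tiny imaginary noise
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

mu = np.zeros(2)
print(frechet_gaussian_distance(mu, np.eye(2), mu, np.eye(2)))  # ~0 for identical Gaussians
```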

metrics
Frechet-distance
generative-models
representation
rather-interesting
video
feature-construction
to-write-about
6 days ago

"Mapping Imaginary Cities" by Mouse Reeve - YouTube

6 days ago

While the map is not the territory (to quote the semantician Alfred Korzybski), the map is still usually intended to correspond to one. But what about maps of nowhere at all? What can they represent and how can they be made?

Maps are a familiar part of daily life, with a deeply familiar and complex symbolic language, and a long history. They are also hugely varied in style and aesthetic, and often are works of art unto themselves. All this makes mapping a powerful creative tool for conveying ideas about a space, how it is used, and who inhabits it. But it also presents a mapmaker with what can feel like an overwhelming array of design choices and technical hurdles to overcome in order to create a generative map.

This talk will explore maps as a way to communicate about people and place in the context of fictional cities, and dive into algorithms and techniques for procedurally generating maps by building up topography, landscape, populations, and street plans.

video
cartography
strange-loop
rather-interesting
art
generative-art
generative-models
the-mangle-in-practice
learning-in-public
algorithms
6 days ago

[PDF] Derivation of the Variational Bayes Equations Alianna J. Maren

6 days ago

The derivation of key equations for the variational Bayes approach is well-known in certain circles. However, translating the fundamental derivations (e.g., as found in Beal (2003)) to the notation of Friston (2013, 2015) is somewhat delicate. Further, the notion of using variational Bayes in the context of a system with Markov blankets requires special attention. This Technical Note presents the derivation in detail. It further illustrates how the variational Bayes method provides a framework for a new computational engine, incorporating the 2-D Cluster Variation Method (CVM), which provides a necessary free energy equation that can be minimized across both the external and representational systems, respectively.

free-energy-model
representation
to-understand
explanation
cognition
6 days ago

How to Read Karl Friston (in the Original Greek) – Alianna J. Maren

6 days ago

What Friston offered, though, was three crucial points:

The brain minimizes free energy (although not specifying exactly what the free energy function is),

A variational Bayes process can help us model the free energy within the brain, and

We can separate out the so-called “latent” or “hidden” units that are in the actual external (brain) system from those in the model.

It’s this last point that is not terribly obvious in reading Friston’s papers for the first, second, or even tenth time. However, it is an essential and core insight. It also is one of those (small, detailed) things that makes Friston’s work impenetrable to all but the most determined efforts.

cognition
representation
explanation
to-understand
philosophy-of-science
6 days ago

[1710.00992] DimReader: Axis lines that explain non-linear projections

6 days ago

Non-linear dimensionality reduction (NDR) methods such as LLE and t-SNE are popular with visualization researchers and experienced data analysts, but present serious problems of interpretation. In this paper, we present DimReader, a technique that recovers readable axes from such techniques. DimReader is based on analyzing infinitesimal perturbations of the dataset with respect to variables of interest. The perturbations define exactly how we want to change each point in the original dataset and we measure the effect that these changes have on the projection. The recovered axes are in direct analogy with the axis lines (grid lines) of traditional scatterplots. We also present methods for discovering perturbations on the input data that change the projection the most. The calculation of the perturbations is efficient and easily integrated into programs written in modern programming languages. We present results of DimReader on a variety of NDR methods and datasets both synthetic and real-life, and show how it can be used to compare different NDR methods. Finally, we discuss limitations of our proposal and situations where further research is needed.

user-interface
visualization
dimension-reduction
rather-interesting
data-analysis
explanation
the-mangle-in-practice
to-write-about
to-do
6 days ago

wo's weblog: The lure of free energy

6 days ago

There's an exciting new theory in cognitive science. The theory began as an account of message-passing in the visual cortex, but it quickly expanded into a unified explanation of perception, action, attention, learning, homeostasis, and the very possibility of life. In its most general and ambitious form, the theory was mainly developed by Karl Friston -- see e.g. Friston 2006, Friston and Stephan 2007, Friston 2009, Friston 2010, or the Wikipedia page on the free-energy principle.

Unfortunately, Friston isn't very good at explaining what exactly the theory says. The unifying principle at its core is called the free-energy principle. It says that "any self-organizing system that is at equilibrium with its environment must minimize its free energy" (Friston 2010). Both perception and action are then characterized as serving this goal of minimizing free energy.
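
For reference, in the standard variational formulation (my gloss, not Friston's own notation), the free energy of a recognition density q(s) over hidden states s, given observations o, decomposes as:

```latex
F(q, o)
  = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left(q(s)\,\|\,p(s \mid o)\right)}_{\geq\, 0}
    \;-\; \ln p(o) .
```

Since the KL term is nonnegative, F upper-bounds the "surprise" −ln p(o); perception is then cast as minimizing F over q (approximate inference), and action as changing o so as to keep F low.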

free-energy-theory
to-understand
philosophy-of-science
big-definitions
bad-explainers
theoretical-biology
6 days ago

[1811.01721] Rethinking floating point for deep learning

6 days ago

Reducing hardware overhead of neural networks for faster or lower power inference and training is an active area of research. Uniform quantization using integer multiply-add has been thoroughly investigated, which requires learning many quantization parameters, fine-tuning training or other prerequisites. Little effort is made to improve floating point relative to this baseline; it remains energy inefficient, and word size reduction yields drastic loss in needed dynamic range. We improve floating point to be more energy efficient than equivalent bit width integer hardware on a 28 nm ASIC process while retaining accuracy in 8 bits with a novel hybrid log multiply/linear add, Kulisch accumulation and tapered encodings from Gustafson's posit format. With no network retraining, and drop-in replacement of all math and float32 parameters via round-to-nearest-even only, this open-sourced 8-bit log float is within 0.9% top-1 and 0.2% top-5 accuracy of the original float32 ResNet-50 CNN model on ImageNet. Unlike int8 quantization, it is still a general purpose floating point arithmetic, interpretable out-of-the-box. Our 8/38-bit log float multiply-add is synthesized and power profiled at 28 nm at 0.96x the power and 1.12x the area of 8/32-bit integer multiply-add. In 16 bits, our log float multiply-add is 0.59x the power and 0.68x the area of IEEE 754 float16 fused multiply-add, maintaining the same significand precision and dynamic range, proving useful for training ASICs as well.

numerical-methods
machine-learning
representation
the-mangle-in-practice
to-write-about
to-cite
motivation
6 days ago

Stochastic Computing Systems | SpringerLink

6 days ago

The invention of the steam engine in the late eighteenth century made it possible to replace the muscle-power of men and animals by the motive power of machines. The invention of the stored-program digital computer during the second world war made it possible to replace the lower-level mental processes of man, such as arithmetic computation and information storage, by electronic data-processing in machines. We are now coming to the stage where it is reasonable to contemplate replacing some of the higher mental processes of man, such as the ability to recognize patterns and to learn, with similar capabilities in machines. However, we lack the “steam engine” or “digital computer” which will provide the necessary technology for learning and pattern recognition by machines.

representation
numerical-methods
floating-point
computer-science
rather-interesting
to-write-about
nudge-targets
consider:ReQ
6 days ago

Beating Floating Point at its Own Game

6 days ago

A new data type called a posit is designed as a direct drop-in replacement for IEEE Standard 754 floating-point numbers (floats). Unlike earlier forms of universal number (unum) arithmetic, posits do not require interval arithmetic or variable size operands; like floats, they round if an answer is inexact. However, they provide compelling advantages over floats, including larger dynamic range, higher accuracy, better closure, bitwise identical results across systems, simpler hardware, and simpler exception handling. Posits never overflow to infinity or underflow to zero, and "Not-a-Number" (NaN) indicates an action instead of a bit pattern. A posit processing unit takes less circuitry than an IEEE float FPU. With lower power use and smaller silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators and Deep Learning processors, in particular, can do more per watt and per dollar with posits, yet deliver superior answer quality. A comprehensive series of benchmarks compares floats and posits for decimals of accuracy produced for a set precision. Low precision posits provide a better solution than "approximate computing" methods that try to tolerate decreased answer quality. High precision posits provide more correct decimals than floats of the same size; in some cases, a 32-bit posit may safely replace a 64-bit float. In other words, posits beat floats at their own game.
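
To make the tapered format concrete, here is a toy posit decoder (my own sketch, not the paper's code; no rounding, special values simplified, and the exponent-field handling is only exercised for the es = 0 configuration shown):

```python
def decode_posit(word, nbits=8, es=0):
    """Decode an nbits-wide posit: sign, then a run-length 'regime' k,
    then es exponent bits, then fraction f, giving
    (-1)^s * (1 + f) * 2^(k * 2^es + e)."""
    if word == 0:
        return 0.0
    if word == 1 << (nbits - 1):
        return float("inf")  # the single special 'not-a-real' pattern
    sign = -1.0 if (word >> (nbits - 1)) & 1 else 1.0
    if sign < 0.0:
        word = (-word) & ((1 << nbits) - 1)  # two's complement negation
    s = format(word, f"0{nbits}b")[1:]       # bits after the sign
    run = len(s) - len(s.lstrip(s[0]))       # regime run length
    k = run - 1 if s[0] == "1" else -run
    rest = s[run + 1:]                        # skip the regime terminator
    e = int(rest[:es], 2) if es and rest[:es] else 0
    frac = rest[es:]
    f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
    return sign * (1.0 + f) * 2.0 ** (k * (1 << es) + e)

print([decode_posit(w) for w in (32, 64, 80, 96)])  # [0.5, 1.0, 1.5, 2.0]
```

Note how precision tapers: values near 1.0 get the most fraction bits, because the regime run is shortest there.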

representation
floating-point
numerical-methods
rather-interesting
the-mangle-in-practice
machine-learning
to-write-about
nudge-targets
6 days ago

Making floating point math highly efficient for AI hardware - Facebook Code

6 days ago

We have made radical changes to floating point to make it as much as 16 percent more efficient than int8/32 math. Our approach is still highly accurate for convolutional neural networks, and it offers several additional benefits:

Our technique can improve the speed of AI research and development. When applied to higher-precision floating point used in AI model training, it is as much as 69 percent more efficient.

Today, models are typically trained using floating point, but then they must be converted to a more efficient quantized format that can be deployed to production. With our approach, nothing needs to be retrained or relearned to deploy a model. AI developers can thus deploy efficient new models more easily.

Integer quantization schemes today are growing ever more complicated and in some cases might be “overfitting” on a particular task (and thereby not retaining their general-purpose application). An efficient, general-purpose floating point arithmetic that preserves accuracy can avoid this issue.

representation
machine-learning
the-mangle-in-practice
algorithms
rather-interesting
to-write-about
nudge-targets
6 days ago

PsyArXiv Preprints | Multiple Perspectives on Inference for Two Simple Statistical Scenarios

6 days ago

When data analysts operate within different statistical frameworks (e.g., frequentist versus Bayesian, emphasis on estimation versus emphasis on testing), how does this impact the qualitative conclusions that are drawn for real data? To study this question empirically we selected from the literature two simple scenarios --involving a comparison of two proportions and a Pearson correlation-- and asked four teams of statisticians to provide a concise analysis and a qualitative interpretation of the outcome. The results showed considerable overall agreement; nevertheless, this agreement did not appear to diminish the intensity of the subsequent debate over which statistical framework is more appropriate to address the questions at hand.

statistics
looking-to-see
the-mangle-in-practice
science-studies
anthropology-of-data
rather-interesting
to-write-about
6 days ago

[1808.09357] Rational Recurrences

6 days ago

Despite the tremendous empirical success of neural models in natural language processing, many of them lack the strong intuitions that accompany classical machine learning approaches. Recently, connections have been shown between convolutional neural networks (CNNs) and weighted finite state automata (WFSAs), leading to new interpretations and insights. In this work, we show that some recurrent neural networks also share this connection to WFSAs. We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs. We show that several recent neural models use rational recurrences. Our analysis provides a fresh view of these models and facilitates devising new neural architectures that draw inspiration from WFSAs. We present one such model, which performs better than two recent baselines on language modeling and text classification. Our results demonstrate that transferring intuitions from classical models like WFSAs can be an effective approach to designing and understanding neural models.
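
The "Forward calculation" the abstract refers to is just an iterated vector-matrix product through per-symbol transition weights. A tiny sketch (toy automaton of my own devising, not from the paper):

```python
import numpy as np

def wfsa_forward(init, trans, final, string):
    """Score a string with a weighted finite-state automaton: alpha
    starts at the initial weights, is pushed through one transition
    matrix per symbol, and the final weights sum it out. 'Rational
    recurrences' are RNN state updates of exactly this shape."""
    alpha = init
    for sym in string:
        alpha = alpha @ trans[sym]
    return float(alpha @ final)

# toy 2-state automaton over {a, b} whose score counts occurrences of 'a'
trans = {
    "a": np.array([[1.0, 1.0], [0.0, 1.0]]),  # reading 'a' opens a path 0 -> 1
    "b": np.array([[1.0, 0.0], [0.0, 1.0]]),  # 'b' leaves both states alone
}
init = np.array([1.0, 0.0])
final = np.array([0.0, 1.0])
print(wfsa_forward(init, trans, final, "abab"))  # 2.0: two a's
```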

automata
representation
neural-networks
recurrent-networks
architecture
rather-interesting
ReQ
to-write-about
6 days ago

[1810.06758] Discriminator Rejection Sampling

6 days ago

We propose a rejection sampling scheme using the discriminator of a GAN to approximately correct errors in the GAN generator distribution. We show that under quite strict assumptions, this will allow us to recover the data distribution exactly. We then examine where those strict assumptions break down and design a practical algorithm - called Discriminator Rejection Sampling (DRS) - that can be used on real data-sets. Finally, we demonstrate the efficacy of DRS on a mixture of Gaussians and on the SAGAN model, state-of-the-art in the image generation task at the time of developing this work. On ImageNet, we train an improved baseline that increases the Inception Score from 52.52 to 62.36 and reduces the Frechet Inception Distance from 18.65 to 14.79. We then use DRS to further improve on this baseline, improving the Inception Score to 76.08 and the FID to 13.75.
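
The core mechanism is ordinary rejection sampling driven by the density ratio an optimal discriminator encodes: D(x)/(1 − D(x)) = p_data(x)/p_g(x). A toy 1-D sketch in which exact Gaussian densities stand in for a trained discriminator (the bound M is hand-picked here; the paper estimates it and adds practical corrections):

```python
import math
import random

def normal_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def drs_sample(n, mu_gen=0.0, mu_data=1.0, m_bound=20.0, seed=0):
    """Draw from the 'generator' N(mu_gen, 1) and accept with probability
    min(1, ratio / M), where ratio = p_data/p_g, i.e. D/(1-D) for an
    optimal discriminator D. Accepted samples approximately follow the
    'data' density N(mu_data, 1)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mu_gen, 1.0)
        ratio = normal_pdf(x, mu_data) / normal_pdf(x, mu_gen)
        if rng.random() < min(1.0, ratio / m_bound):
            out.append(x)
    return out

xs = drs_sample(1000)
print(sum(xs) / len(xs))  # close to 1.0, the 'data' mean, not the generator's 0.0
```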

neural-networks
machine-learning
algorithms
generative-models
schemes
rather-interesting
to-understand
6 days ago

The Oulipo of the 1980s? Why it’s time to reappraise the humble Choose Your Own Adventure book | Prospect Magazine

7 days ago

The allure of nostalgia is powerful, especially in an uncertain, unstable age. Nostalgia is a soothing form of selective amnesia of how things actually were. However forward-thinking and ostensibly unsentimental we might be, there are very few of us who are not moved in some way by these jolts of recognition and the comforting, if illusory, thought that a golden age existed in the past when life was more certain and more stable.

With Generation X beginning to reach middle age in slow horrified disbelief, it’s little surprise that 1980s revivalisms are big business, from Stranger Things and Ready Player One to the recent Star Wars resurrection. A joyless cynic might see this trend as an example of a culture paralysed by conservatism, cowardice and infantilism.

Yet it’s hard to deny the involuntary memories evoked upon seeing pixelated graphics or hearing the shriek of a TIE fighter. The best of these revivals (Twin Peaks: The Return, Blade Runner 2049) offer startling new directions amidst the familiar ones, which recontextualize that which came before. These stories are reimagined, rather than repeated to diminishing effect. Others are shallower.

oulipo
parafiction
literary-criticism
generative-art
user-centric-art
rather-interesting
nostalgia
7 days ago

Accurate High Performance Concrete Prediction with an Alignment-Based Genetic Programming System | SpringerLink

16 days ago

In 2013, our research group published a contribution in which a new version of genetic programming, called Geometric Semantic Genetic Programming (GSGP), was fostered as an appropriate computational intelligence method for predicting the strength of high-performance concrete. That successful work, in which GSGP was shown to outperform the existing systems, allowed us to promote GSGP as the new state-of-the-art technology for high-performance concrete strength prediction. In this paper, we propose, for the first time, a novel genetic programming system called Nested Align Genetic Programming (NAGP). NAGP exploits semantic awareness in a completely different way compared to GSGP. The reported experimental results show that NAGP is able to significantly outperform GSGP for high-performance concrete strength prediction. More specifically, not only is NAGP able to obtain more accurate predictions than GSGP, but it is also able to generate predictive models of much smaller size, and thus easier to understand and interpret, than the ones generated by GSGP. Thanks to this ability of NAGP, we are able here to show the model evolved by NAGP, which was impossible for GSGP.

symbolic-regression
genetic-programming
numerical-methods
models
algorithms
to-write-about
16 days ago

[1512.04349] Clustering time series under the Fr'echet distance

17 days ago

The Fréchet distance is a popular distance measure for curves. We study the problem of clustering time series under the Fréchet distance. In particular, we give (1+ε)-approximation algorithms for variations of the following problem with parameters k and ℓ. Given n univariate time series P, each of complexity at most m, we find k time series, not necessarily from P, which we call "cluster centers" and which each have complexity at most ℓ, such that (a) the maximum distance of an element of P to its nearest cluster center or (b) the sum of these distances is minimized. Our algorithms have running time near-linear in the input size for constant ε, k and ℓ. To the best of our knowledge, our algorithms are the first clustering algorithms for the Fréchet distance which achieve an approximation factor of (1+ε) or better.

Keywords: time series, longitudinal data, functional data, clustering, Fréchet distance, dynamic time warping, approximation algorithms.

computational-geometry
algorithms
metrics
clustering
to-understand
time-series
17 days ago

[1811.10665] Stepping Stones to Inductive Synthesis of Low-Level Looping Programs

17 days ago

Inductive program synthesis, from input/output examples, can provide an opportunity to automatically create programs from scratch without presupposing the algorithmic form of the solution. For induction of general programs with loops (as opposed to loop-free programs, or synthesis for domain-specific languages), the state of the art is at the level of introductory programming assignments. Most problems that require algorithmic subtlety, such as fast sorting, have remained out of reach without the benefit of significant problem-specific background knowledge. A key challenge is to identify cues that are available to guide search towards correct looping programs. We present MAKESPEARE, a simple delayed-acceptance hillclimbing method that synthesizes low-level looping programs from input/output examples. During search, delayed acceptance bypasses small gains to identify significantly-improved stepping stone programs that tend to generalize and enable further progress. The method performs well on a set of established benchmarks, and succeeds on the previously unsolved "Collatz Numbers" program synthesis problem. Additional benchmarks include the problem of rapidly sorting integer arrays, in which we observe the emergence of comb sort (a Shell sort variant that is empirically fast). MAKESPEARE has also synthesized a record-setting program on one of the puzzles from the TIS-100 assembly language programming game.

benchmarking
genetic-programming
software-synthesis
rather-interesting
to-write-about
17 days ago

Fréchet distance - Wikipedia

17 days ago

In mathematics, the Fréchet distance is a measure of similarity between curves that takes into account the location and ordering of the points along the curves. It is named after Maurice Fréchet.

measurement
metrics
data-analysis
feature-construction
distance
computational-geometry
to-consider
ReQ
17 days ago

[1610.02247] Logic as a distributive law

18 days ago

We present an algorithm for deriving a spatial-behavioral type system from a formal presentation of a computational calculus. Given a 2-monad Calc: Cat → Cat for the free calculus on a category of terms and rewrites and a 2-monad BoolAlg for the free Boolean algebra on a category, we get a 2-monad Form = BoolAlg + Calc for the free category of formulae and proofs. We also get the 2-monad BoolAlg ∘ Calc for subsets of terms. The interpretation of formulae is a natural transformation ⟦-⟧: Form ⇒ BoolAlg ∘ Calc defined by the units and multiplications of the monads and a distributive law transformation δ: Calc ∘ BoolAlg ⇒ BoolAlg ∘ Calc. This interpretation is consistent both with the Curry-Howard isomorphism and with realizability. We give an implementation of the "possibly" modal operator parametrized by a two-hole term context and show that, surprisingly, the arrow type constructor in the λ-calculus is a specific case. We also exhibit nontrivial formulae encoding confinement and liveness properties for a reflective higher-order variant of the π-calculus.

representation
higher-order
mathematics
category-theory
to-understand
π-calculus
ReQ
18 days ago

computability - How can a cyclic tag system halt with an output? - Computer Science Stack Exchange

19 days ago

How can a cyclic tag system halt with an output?
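The halting question can be made concrete with a minimal simulator. Here "halt" means the word becomes empty, which is one common convention for cyclic tag systems (not necessarily the one every formulation adopts):

```python
def run_cyclic_tag(word, productions, max_steps=1000):
    """Run a cyclic tag system until the word empties or the budget runs out.

    Each step pops the leading symbol; if it is '1', the production
    currently at the front of the (cyclically consulted) list is appended.
    The remaining word when the system halts is its 'output'.
    """
    i = 0
    for _ in range(max_steps):
        if not word:
            return word  # halted: the word is empty
        head, word = word[0], word[1:]
        if head == '1':
            word += productions[i % len(productions)]
        i += 1
    return word  # did not halt within the step budget
```

With the empty production the word shrinks monotonically and halts; with growing productions the system can run forever, which is why the halting convention matters.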

computer-science
representation
to-write-about
ReQ
19 days ago

Q-BAL Programming Language

19 days ago

Q-BAL is a programming language that Ben Yackley and Michael Shulman invented on a whim, based on the question "What would it be like if a language were based on queues rather than stacks?" The acronym stands for Queue-BAsed Language. This language is not designed to be useful, just fun. If you feel like writing any programs in Q-BAL, I would be very glad to post them here. I hope to someday write an interpreter for Q-BAL programs, so stay tuned! The Q-BAL system has been undergoing revamping lately in discussions with Andy and Ben, and when I have time I'll update these pages to reflect whatever conclusions we will hopefully have reached by then.
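The queue-versus-stack premise is easy to illustrate. This sketch is not Q-BAL itself (its specification isn't reproduced here), just the FIFO/LIFO contrast the language is built on:

```python
from collections import deque

def drain_queue(items):
    """Feed items through a queue: values come back out in FIFO order."""
    q = deque()
    for x in items:
        q.append(x)        # enqueue at the back
    return [q.popleft() for _ in range(len(q))]  # dequeue from the front

def drain_stack(items):
    """Feed items through a stack: values come back out in LIFO order."""
    s = []
    for x in items:
        s.append(x)        # push on top
    return [s.pop() for _ in range(len(s))]      # pop from the top
```

A queue-based language therefore consumes its data in the order it was produced, which changes how loops and intermediate results have to be structured compared with a stack language.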

programming-language
esoteric-languages
ReQ
reference
19 days ago

[1504.04311] Higher category models of the pi-calculus

23 days ago

We present an approach to modeling computational calculi using higher category theory. Specifically we present a fully abstract semantics for the pi-calculus. The interpretation is consistent with Curry-Howard, interpreting terms as typed morphisms, while simultaneously providing an explicit interpretation of the rewrite rules of standard operational presentations as 2-morphisms. One of the key contributions, inspired by catalysis in chemical reactions, is a method of restricting the application of 2-morphisms interpreting rewrites to specific contexts.

concurrency
π-calculus
rewriting-systems
to-understand
ReQ
23 days ago

Tanya Khovanova's Math Blog » Blog Archive » Shapes of Symbols in Latin Squares

24 days ago

John likes finding interesting ways to remember which shape is which. You can find his and Alex’s suggestions in the paper which Alex submitted to the arxiv.

Oops! While I was writing this essay, arxiv rejected the paper.

latin-squares
combinatorics
constraint-satisfaction
feature-construction
rather-interesting
dammit
24 days ago

[quant-ph/0208149] A semi-quantum version of the game of Life

24 days ago

A version of John Conway's game of Life is presented where the normal binary values of the cells are replaced by oscillators which can represent a superposition of states. The original game of Life is reproduced in the classical limit, but in general additional properties not seen in the original game are present that display some of the effects of a quantum mechanical Life. In particular, interference effects are seen.
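The classical limit referred to in the abstract is ordinary Conway Life. A minimal sketch of that classical update rule (binary cells, not the paper's oscillator-valued cells):

```python
from collections import Counter

def life_step(live):
    """One update of Conway's Game of Life on a set of live (x, y) cells.

    Count each cell's live neighbors, then apply the birth/survival rule:
    a cell is live next step iff it has 3 live neighbors, or has 2 and
    was already live.
    """
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

The semi-quantum version replaces each binary cell with an oscillator amplitude, recovering this rule when amplitudes are restricted to 0 and 1.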

quantums
Game-of-Life
hey-I-know-this-guy
cellular-automata
24 days ago

Flowers for Julia | Fronkonstin

24 days ago

To color the points, I pick a random palette from the top list of COLOURLovers site using the colourlovers package. Since each flower involves a huge amount of calculations, I use Reduce to make this process efficient. More examples:

fractals
visualization
color
details-of-note
24 days ago

Tanya Khovanova's Math Blog » Blog Archive » Another Cool Coin-Weighing Problem

24 days ago

My coauthor, Konstantin Knop, sent me a coin-weighing problem that is really good. Surprisingly, it is old: it first appeared in a Russian math journal, Kvant, in 1973.

mathematical-recreations
constraint-satisfaction
rather-interesting
nudge-targets
to-write-about
to-generalize
24 days ago

[1207.4497] Efficient Algorithms for Zeckendorf Arithmetic

26 days ago

We study the problem of addition and subtraction using the Zeckendorf representation of integers. We show that both operations can be performed in linear time; in fact they can be performed by combinational logic networks with linear size and logarithmic depth. The implications of these results for multiplication, division and square-root extraction are also discussed.
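A Zeckendorf representation itself is easy to construct greedily; this helper only builds the representation (every positive integer as a sum of non-consecutive Fibonacci numbers) that the paper's addition and subtraction algorithms operate on:

```python
def zeckendorf(n):
    """Greedy Zeckendorf representation of a positive integer n.

    Build Fibonacci numbers up to n (starting 1, 2 so each value appears
    once), then repeatedly take the largest that fits. The greedy choice
    guarantees no two consecutive Fibonacci numbers are used.
    """
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts
```

For example, 100 = 89 + 8 + 3. The paper's contribution is performing arithmetic directly on such representations in linear time, without converting back to binary.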

arithmetic
representation
rather-interesting
mathematical-recreations
nudge-targets
consider:rediscovery
26 days ago

3 tools from sociocracy to use right away (plus magic phrases!)

27 days ago

Of course I myself am ego-driven and I have a ton of good ideas! But I also know that it only takes one person in the circle engaging in cross-talk and the good effects of rounds are lost. What do I do with all my brilliant ideas? I write them on a piece of paper. When it is my turn, I will often look at my piece of paper and realize that, after a few minutes of listening to others, about 90% of my ideas have either been named or, on second thought, don’t seem all that great or urgent anymore. Humbled, I am often grateful for having been forced to weed through what I say. And when people pass on their turn saying “All I wanted to say has been said” I feel the urge to get up and hug them in gratitude for not putting the group through endless repetitions. Which also answers the last reservation I hear very often: aren’t rounds lengthy? Maybe. But inconsiderate decisions, repetitive statements, and the emotional “clean-up” that follows disregard of team members all take a lot of time too. Your choice!

social-dynamics
social-norms
collaboration
organizational-behavior
teams
rather-interesting
to-write-about
27 days ago

[PDF] The Spread of Improvement: Why Innovation Accelerated in Britain 1547-1851

27 days ago

In the three centuries after the reign of Henry VIII, the British Isles emerged from civil wars, invasion threats, and religious strife to become the world's technological leader. Why did innovation accelerate? I studied the people responsible, the innovators themselves, using a sample of 1,452 people in Britain who innovated between 1547 and 1851.

The paper charts the emergence and spread of an improving mentality, tracing its transmission from person to person and across the country. The mentality was not a technique, skill, or special understanding, but a frame of mind: innovators saw room for improvement where others saw none. The mentality could be received by anyone, and it could be applied to any field – anything, after all, could be better.

But what led to innovation’s acceleration was not just that the mentality spread: over the course of the eighteenth century innovators became increasingly committed to spreading the mentality further – they became innovation’s evangelists. By creating new institutions and adopting social norms conducive to openness and active sharing, innovators ensured the continued dissemination of innovation, giving rise to modern economic growth in Britain and abroad.

epidemiology-of-ideas
symmathesy
rather-interesting
topical
history-of-ideas
collaboration
the-mangle-in-practice
sociotechnical-us
27 days ago

[1810.07074] Why We Do Not Evolve Software? Analysis of Evolutionary Algorithms

27 days ago

In this paper, we review the state-of-the-art results in evolutionary computation and observe that we do not evolve nontrivial software from scratch and without human intervention. A number of possible explanations are considered, but we conclude that the computational complexity of the problem prevents it from being solved as currently attempted. A detailed analysis of necessary and available computational resources is provided to support our findings.

via:lspector
yeah-no
nimby
system-of-professions
mistaking-the-publications-for-the-work
academic-culture
27 days ago

Automating String Processing in Spreadsheets using Input-Output Examples - Microsoft Research

27 days ago

We describe the design of a string programming/expression language that supports restricted forms of regular expressions, conditionals and loops. The language is expressive enough to represent a wide variety of string manipulation tasks that end-users struggle with. We describe an algorithm based on several novel concepts for synthesizing a desired program in this language from input-output examples. The synthesis algorithm is very efficient, taking a fraction of a second for various benchmark examples. The synthesis algorithm is interactive and has several desirable features: it can rank multiple solutions and has fast convergence, it can detect noise in the user input, and it supports an active interaction model wherein the user is prompted to provide outputs on inputs that may have multiple computational interpretations.

The algorithm has been implemented as an interactive add-in for the Microsoft Excel spreadsheet system. The prototype tool has met the golden test – it has synthesized part of itself, and has been used to solve problems beyond the authors’ imagination.
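The synthesis-from-examples idea can be caricatured in a few lines. This toy enumerator over a hypothetical four-operation candidate space shares only the spirit of the actual system, not its language, algorithm, or ranking:

```python
# Hypothetical candidate programs; the real language composes substring,
# concatenation, conditionals, and loops rather than fixed functions.
CANDIDATES = {
    "upper": str.upper,
    "lower": str.lower,
    "first-word": lambda s: s.split()[0],
    "initials": lambda s: "".join(w[0] for w in s.split()),
}

def synthesize(examples):
    """Return the names of candidate programs consistent with every
    (input, output) example pair."""
    return [name for name, f in CANDIDATES.items()
            if all(f(inp) == out for inp, out in examples)]
```

Even this caricature shows the key interaction issue from the abstract: with too few examples, several programs may remain consistent, so ranking (or asking the user about an ambiguous input) is needed to pick one.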

learning-from-data
microsoft
software-synthesis
rather-interesting
pattern-discovery
to-write-about
27 days ago

[PDF] MEXICA: a computer model of a cognitive account of creative writing.

4 weeks ago

MEXICA is a computer model that produces frameworks for short stories based on the engagement-reflection cognitive account of writing. During engagement MEXICA generates material guided by content and rhetorical constraints, avoiding the use of explicit goals or story-structure information. During reflection the system breaks impasses, evaluates the novelty and interestingness of the story in progress, and verifies that coherence requirements are satisfied. In this way, MEXICA complements and extends those models of computerised story-telling based on traditional problem-solving techniques where explicit goals drive the generation of stories. This paper describes the engagement-reflection account of writing, the general characteristics of MEXICA and reports an evaluation of the program.

generative-models
cognition
simulation
looking-to-see
cultural-engineering
rather-interesting
representation
to-write-about
4 weeks ago

Domain hacks with unusual Unicode characters – Terence Eden's Blog

4 weeks ago

Unicode contains a range of symbols which don't get much use. For example, there are separate symbols for TradeMark - ™, Service Mark - ℠, and Prescriptions - ℞.

Nestling among the "Letterlike Symbols" are two curious entries. Both of these are single characters:

Telephone symbol - ℡

Numero Sign - №

What's interesting is both .tel and .no are Top-Level-Domains (TLD) on the Domain Name System (DNS).
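The reason these symbols can stand in for TLDs is Unicode compatibility normalization: NFKC maps each letterlike symbol to its ASCII expansion (IDNA's UTS #46 processing additionally case-folds, which is how ℡ ends up matching the lowercase .tel label):

```python
import unicodedata

def ascii_fold(s):
    """Apply NFKC, the Unicode compatibility normalization that expands
    letterlike symbols such as U+2121 (℡) and U+2116 (№) into plain
    ASCII letter sequences."""
    return unicodedata.normalize("NFKC", s)
```

So `ascii_fold("℡")` yields `"TEL"` and `ascii_fold("№")` yields `"No"`, which is exactly why a domain registrar's normalization pipeline can turn these single characters into the .tel and .no domains.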

typography
domains
DNS
rather-odd
4 weeks ago

[1705.07386] DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution

4 weeks ago

Recent research has demonstrated the vulnerability of fingerprint recognition systems to dictionary attacks based on MasterPrints. MasterPrints are real or synthetic fingerprints that can fortuitously match with a large number of fingerprints, thereby undermining the security afforded by fingerprint systems. Previous work by Roy et al. generated synthetic MasterPrints at the feature-level. In this work we generate complete image-level MasterPrints known as DeepMasterPrints, whose attack accuracy is found to be much superior to that of previous methods. The proposed method, referred to as Latent Variable Evolution, is based on training a Generative Adversarial Network on a set of real fingerprint images. Stochastic search in the form of the Covariance Matrix Adaptation Evolution Strategy is then used to search for latent input variables to the generator network that can maximize the number of impostor matches as assessed by a fingerprint recognizer. Experiments convey the efficacy of the proposed method in generating DeepMasterPrints. The underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.

evolutionary-algorithms
neural-networks
generative-models
rather-interesting
biometrics
whoopsie-daisy
also:duh
4 weeks ago

Pseudoarchaeology and the Racism Behind Ancient Aliens

4 weeks ago

If we look to von Däniken’s work, there can be little doubt that his racial beliefs influenced his extraterrestrial theories. After a short stint in jail for fraud and either writing or appropriating the material for a number of other books that developed his ancient astronauts theory, von Däniken published Signs of the Gods? in 1979. It is here that many of his racial views are most boldly stated. British archaeology officer Keith Fitzpatrick-Matthews points out on his Bad Archaeology blog just a few of the many racist questions and statements posed by the author: “Was the black race a failure and did the extraterrestrials change the genetic code by gene surgery and then programme a white or a yellow race?” He also printed beliefs about the innate talents of certain races: “Nearly all negroes are musical; they have rhythm in their blood.” Von Däniken also consistently uses the term “negroid race” in comparison with “Caucasians.”

racism
psychoceramics
imperialism
archaeology
have-read
spacemen-and-cavemen
4 weeks ago

Quinn Slobodian – Globalists — Crooked Timber

4 weeks ago

Slobodian thinks that this is mistaken. In his account, markets have not become disembedded from national societies and states so much as they have become re-embedded in international institutions. Neo-liberalism as manifested in the thought of Hayek and his European followers is the political project of looking to recreate state structures outside the grasp of democratic and non-democratic states. Far from thinking that markets are natural, neo-liberals accept that they are “products of the political construction of institutions to encase them.” (p.7) Instead of a double movement, we have a ‘double world’ of imperium, political rule exercised through nation states, and dominium, the world of economics and business, and a deliberate political effort to insulate the latter inside its own steel-hard casing against the depredations of the former. Neo-liberals then, look to an `interdependent’ world and a single global economy as a realm that should be held inviolate from national states, and the demands their people put upon them. This, as they came to realize over time, requires them to build their own quasi-constitutional structures at the international level, in order to fend off the persistent efforts of national states to shape and control competitive forces and economic flows that are better left alone.

Under this account, the most crucial dynamics of neo-liberalism did not involve the glamorous public clash of ideas between intellectuals. Instead, they were duller, more relentless and in the end, more effective – the persistent efforts of neo-liberals to argue through new kinds of international institution and to push back against organized efforts to make global markets more accountable to national authorities. Mont Pelerin was important – but so too were the International Chamber of Commerce and a multitude of boring seeming meetings and negotiations.

neoliberalism
books
political-economy
define-your-terms
to-read
if-I-have-the-guts
fascism
4 weeks ago

Making Sense of Bivector Addition, viXra.org e-Print archive, viXra:1807.0234

4 weeks ago

As a demonstration of the coherence of Geometric Algebra's (GA's) geometric and algebraic concepts of bivectors, we add three geometric bivectors according to the procedure described by Hestenes and Macdonald, then use bivector identities to determine, from the result, two bivectors whose outer product is equal to the initial sum. In this way, we show that the procedure that GA's inventors defined for adding geometric bivectors is precisely that which is needed to give results that coincide with those obtained by calculating outer products of vectors that are expressed in terms of a 3D basis. We explain that that accomplishment is no coincidence: it is a consequence of the attributes that GA's designers assigned (or didn't) to bivectors.

linear-algebra
algebra
define-your-terms
rather-interesting
to-write-about
nudge-targets
consider:representation
Grassmannian
wedge-product
4 weeks ago

The Arbelos in Wasan Geometry, Problems of Izumiya and Naitō, viXra.org e-Print archive, viXra:1811.0132

4 weeks ago

We generalize two sangaku problems involving an arbelos proposed by Izumiya and Naitō, and show the existence of six non-Archimedean congruent circles.

plane-geometry
nanohistory
to-write-about
nudge-targets
consider:rediscovery
4 weeks ago

Robinson Tiles - Futility Closet

4 weeks ago

Berkeley mathematician Raphael Robinson discovered this remarkable set of aperiodic tiles in 1978. The six shapes will neatly tile a plane, as shown below, and though the pattern cannot be regular, it reliably produces a hierarchical design: Each small orange square sits at the corner of a larger orange square, which sits at the corner of a still larger one, and so on ad infinitum. This is because subgroups of tiles form “supertiles” with similar properties — see here.

tiling
mathematical-recreations
aperiodic-patterns
constraint-satisfaction
4 weeks ago

Problem Solving with Trig | Continuous Everywhere but Differentiable Nowhere

4 weeks ago

I’m going to try to outline the messiness that was my thought process in this triangle problem, to show/archive the messiness that is problem solving.

...

The point of this post isn’t to teach someone the solution to the problem. I could have written something much easier. (See we can draw this auxiliary line to create similar triangles. We use proportions since we have similar triangles. Then exploit the new isosceles triangle by setting the leg lengths equal to each other.) But that’s whitewashing all that went into the problem. It’s like a math paper or a science paper. It is a distillation of so freaking much. It was to capture what it’s like to not know something, and how my brain worked in trying to get to figure something out. To show what’s behind a solution.

problem-solving
plane-geometry
learning-in-public
pedagogy
to-write-about
4 weeks ago

NeuralFunk - Combining Deep Learning with Sound Design

4 weeks ago

Making a Track Entirely out of Samples Generated by Neural Networks

rather-interesting
neural-networks
generative-art
learning-in-public
the-mangle-in-practice
to-write-about
performance-measure
4 weeks ago

PsyArXiv Preprints | The rich are different: Unraveling the perceived and self-reported personality profiles of high net-worth individuals

4 weeks ago

Beyond money and possessions, how are the rich different from the general population? Drawing on a unique sample of high net-worth individuals from Germany (≥1 million Euro in financial assets; N = 130), nationally representative data (N = 22,981), and an additional online panel (N = 690), we provide the first direct investigation of the stereotypically-perceived and self-reported personality profiles of high net-worth individuals. Investigating the broad personality traits of the Big Five and the more specific traits of narcissism and locus of control, we find that stereotypes about wealthy people’s personality are accurate albeit somewhat exaggerated and that wealthy people can be characterized as stable, flexible, and agentic individuals who are focused more on themselves than on others.

psychology
wealth
capitalism
social-norms
cultural-norms
social-psychology
stereotypes
yup
4 weeks ago
