**representation**

The Theory of Concatenative Combinators

3 hours ago by Vaguery

This article attempts to outline, in informal terms, a new theory of combinators, related to the theory of Combinatory Logic pioneered by Moses Schönfinkel, Haskell Curry, and others in the 1930s. Although not essential, an understanding of the classical theory of combinators may be helpful (see the links at the bottom of this article for some introductory material on combinators).

This topic is one which no doubt ought to be subjected to the rigor of modern mathematics; there are many theorems from classical combinatory logic (e.g., Church-Rosser) which we conjecture have analogues here. However, what follows is only a rough but, we hope, friendly introduction to the subject.

The inspiration for this theory comes from the programming language Joy, designed by Manfred von Thun. It would be very helpful if the reader is basically familiar with Joy. In Joy, data is manipulated through a stack (there are no variables); in this way, it is similar to the programming language FORTH. However, Joy goes one step further and permits (and actively encourages) pushing programs themselves onto the stack, which can then be manipulated just like ordinary data.

In fact, the theory here is basically a subset of Joy in which programs are the only kind of data (i.e., numbers, string literals, and other kinds of data are not part of the theory here). To someone unfamiliar with combinatory logic, it might seem that no useful computations could be done without numbers, but it will soon be seen that numeric data can be simulated using concatenative combinators, just as they could using classical combinators.
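To make the "programs as the only data" idea concrete, here is a minimal sketch of a Joy-like evaluator in Python. The word set and semantics are our own simplification, not von Thun's implementation: quotations are Python lists, and a "numeral" is simulated as a program that replicates a quoted program.

```python
def run(program, stack=None):
    """Interpret a concatenative program; quotations (lists) are the only data."""
    stack = [] if stack is None else stack
    for word in program:
        if isinstance(word, list):     # quotation: pushed as data, not executed
            stack.append(word)
        elif word == "dup":            # duplicate the top quotation
            stack.append(stack[-1])
        elif word == "swap":           # exchange the top two quotations
            stack[-2], stack[-1] = stack[-1], stack[-2]
        elif word == "zap":            # discard the top quotation
            stack.pop()
        elif word == "cat":            # concatenate the top two quotations
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == "i":              # unquote: execute the top quotation
            run(stack.pop(), stack)
        else:
            raise ValueError(f"unknown word: {word}")
    return stack

# A simulated numeral "two": given a quoted program [P], leave [P P],
# i.e. P repeated twice -- no numeric type involved.
two = ["dup", "cat"]
assert run(two, [["x"]]) == [["x", "x"]]
```

Repetition, the essence of a numeral, is expressed here purely by quotation surgery, which is the sense in which numeric data can be simulated by concatenative combinators.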

programming-language
concatenative-languages
Forth
Joy
engineering-design
representation
to-write-about
ReQ

OpenAI unveils multitalented AI that writes, translates, and slanders - The Verge

3 hours ago by Vaguery

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.

One reason to be excited about GPT-2, says Ani Kembhavi, a researcher at the Allen Institute for Artificial Intelligence, is that predicting text can be thought of as an “uber-task” for computers: a broad challenge that, once solved, will open a floodgate of intelligence.

“Asking the time or getting directions can both be thought of as question-answering tasks that involve predicting text,” Kembhavi tells The Verge. “So, hypothetically, if you train a good enough question-answering model, it can potentially do anything.”

Take GPT-2’s ability to translate text from English to French, for example. Usually, translation algorithms are fed hundreds of thousands of phrases in relevant languages, and the networks themselves are structured in such a way that they process data by converting input X into output Y. This data and network architecture give these systems the tools they need to progress on this task the same way snow chains help cars get a grip on icy roads.

natural-language-processing
openAI
machine-learning
rather-interesting
to-write-about
representation

[1902.06720] Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent

3 hours ago by Vaguery

A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
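The paper's central object, the first-order Taylor expansion of a network around its initial parameters, can be sketched directly. The one-hidden-unit model, sizes, and parameter values below are illustrative, chosen so the Jacobian can be written by hand; they are not from the paper.

```python
import numpy as np

# Linearized model: f_lin(w) = f(w0) + J(w0) . (w - w0)

x = np.linspace(-1.0, 1.0, 5)

def f(w1, w2):
    return w2 * np.tanh(w1 * x)          # tiny "network": one hidden unit

w1_0, w2_0 = 0.7, -0.3                   # initial parameters w0
f0 = f(w1_0, w2_0)

# Partial derivatives of f at w0:
d_w1 = w2_0 * (1 - np.tanh(w1_0 * x) ** 2) * x   # df/dw1
d_w2 = np.tanh(w1_0 * x)                          # df/dw2

def f_lin(w1, w2):
    return f0 + d_w1 * (w1 - w1_0) + d_w2 * (w2 - w2_0)

# Close to w0 the linear model tracks the network; far from w0 it drifts.
near = np.max(np.abs(f(0.71, -0.29) - f_lin(0.71, -0.29)))
far = np.max(np.abs(f(2.0, 1.0) - f_lin(2.0, 1.0)))
assert near < 1e-3 < far
```

The paper's claim is that for wide networks trained by gradient descent, the parameters stay close enough to their initialization that this linear surrogate remains accurate throughout training.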

neural-networks
representation
deep-learning
optimization
via:cshalizi
rather-interesting
define-your-terms
getting-the-same-place-by-the-back-road

I lost faith in the industry, burned out, but the cult of the tool saved me / Habr

4 hours ago by Vaguery

I don’t know what the deal is — whether F# is a monumentally awesome technology, or it simply fits me perfectly, or it was created for these tasks specifically — what’s the difference? What’s important is that at that moment I was sinking and I needed a lifeboat. Life threw me F# and I pulled through. Now it’s not just another soulless technology to me — it’s a huge emotional deal.

Now, when I hear someone scold F# — “A stillborn tech! A geek toy…” — I always remember the cold winter night, the burning car, the cigarette frozen in my mouth, depression and F# that pulled me out of it. It’s as if someone threw shit at my best friend.

It might look strange to an outsider, but if you had lived that day in my place, you would’ve reacted the same. I think that’s common to any technology cultist: they fell in love with their language because they have an emotional attachment to the circumstances in which they discovered it. And then I come and spit right into their soul. Who’s the idiot now? I am. I won’t do it again, I hope.

introspection
the-mangle-in-practice
representation
the-unruly-body-of-the-programmer
via:perturbations

[1811.03557] Efficient Numerical Algorithms based on Difference Potentials for Chemotaxis Systems in 3D

4 hours ago by Vaguery

In this work, we propose efficient and accurate numerical algorithms based on Difference Potentials Method for numerical solution of chemotaxis systems and related models in 3D. The developed algorithms handle 3D irregular geometry with the use of only Cartesian meshes and employ Fast Poisson Solvers. In addition, to further enhance computational efficiency of the methods, we design a Difference-Potentials-based domain decomposition approach which allows mesh adaptivity and easy parallelization of the algorithm in space. Extensive numerical experiments are presented to illustrate the accuracy, efficiency and robustness of the developed numerical algorithms.

theoretical-biology
pattern-formation
reaction-diffusion
finite-elements
self-organization
rather-interesting
simulation
representation

[1806.06166] α-Expansions with odd partial quotients

17 hours ago by Vaguery

We consider an analogue of Nakada's α-continued fraction transformation in the setting of continued fractions with odd partial quotients. More precisely, given $\alpha \in \left[\tfrac{1}{2}(\sqrt{5}-1),\, \tfrac{1}{2}(\sqrt{5}+1)\right]$, we show that every irrational number $x \in I_\alpha = [\alpha - 2, \alpha)$ can be uniquely represented as

$$x = \cfrac{e_1(x;\alpha)}{d_1(x;\alpha) + \cfrac{e_2(x;\alpha)}{d_2(x;\alpha) + \cdots}},$$

with $e_i(x;\alpha) \in \{\pm 1\}$ and $d_i(x;\alpha) \in 2\mathbb{N} - 1$ determined by the iterates of the transformation

$$\varphi_\alpha(x) := \frac{1}{|x|} - 2\left\lfloor \frac{1}{2|x|} + \frac{1-\alpha}{2} \right\rfloor - 1$$

of $I_\alpha$. We also describe the natural extension of $\varphi_\alpha$ and prove that the endomorphism $\varphi_\alpha$ is exact.
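As a sketch of how the digits arise, one can iterate the map directly. The following is our own illustration, with α = 1 (which lies in the stated interval) and a handful of iterations:

```python
import math

# Generate digit pairs (e_i, d_i) by iterating phi_alpha, then fold the
# continued fraction back up to check the representation.

ALPHA = 1.0

def phi_step(x):
    """One iterate: return (e, d, phi_alpha(x)) for nonzero x in I_alpha."""
    e = 1 if x > 0 else -1
    d = 2 * math.floor(1.0 / (2 * abs(x)) + (1 - ALPHA) / 2) + 1  # odd by construction
    return e, d, 1.0 / abs(x) - d

def expand(x, n):
    """First n digit pairs of x, plus the exact tail phi_alpha^n(x)."""
    digits = []
    for _ in range(n):
        e, d, x = phi_step(x)
        digits.append((e, d))
    return digits, x

def reconstruct(digits, tail):
    """Fold e1/(d1 + e2/(d2 + ... + tail)) from the inside out."""
    val = tail
    for e, d in reversed(digits):
        val = e / (d + val)
    return val

x0 = math.sqrt(2) - 1                      # an irrational point of I_alpha
digits, tail = expand(x0, 6)
assert all(d % 2 == 1 for _, d in digits)  # all partial quotients are odd
assert abs(reconstruct(digits, tail) - x0) < 1e-9
```

Each step uses the identity $x = e(x)/(d(x) + \varphi_\alpha(x))$, so reconstructing with the exact tail recovers $x$ up to floating-point error.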

continued-fractions
algebra
representation
approximation
to-understand
number-theory
dynamical-systems

[1902.06006] Contextual Word Representations: A Contextual Introduction

2 days ago by arsyed

" This introduction aims to tell the story of how we put words into computers. It is part of the story of the field of natural language processing (NLP), a branch of artificial intelligence. It targets a wide audience with a basic understanding of computer programming, but avoids a detailed mathematical treatment, and it does not present any algorithms. It also does not focus on any particular application of NLP such as translation, question answering, or information extraction. The ideas presented here were developed by many researchers over many decades, so the citations are not exhaustive but rather direct the reader to a handful of papers that are, in the author's view, seminal. After reading this document, you should have a general understanding of word vectors (also known as word embeddings): why they exist, what problems they solve, where they come from, how they have changed over time, and what some of the open questions about them are. Readers already familiar with word vectors are advised to skip to Section 5 for the discussion of the most recent advance, contextual word vectors."

nlp
representation
embedding
via:chl
via:csantos

All in a Row Review | Shaun May

5 days ago by Felicity

“Autism is a way of being. It is not possible to separate the person from the autism.

Therefore, when parents say, I wish my child did not have autism, what they’re really saying is, I wish the autistic child I have did not exist, and I had a different (non-autistic) child instead.

Read that again. This is what we hear when you mourn over our existence. This is what we hear when you pray for a cure. This is what we know, when you tell us of your fondest hopes and dreams for us: that your greatest wish is that one day we will cease to be, and strangers you can love will move in behind our faces.” (Sinclair 1993)

quote
autism
neurodivergence
disability
disabilityrights
representation
theatre
review

Black Kids don't have the same rights as Nick Sandmann and other white kids

8 days ago by mrbennett

On the MAGA kids and the Native American, Part 3

Representation

[1802.07089] Attentive Tensor Product Learning

10 days ago by Vaguery

This paper proposes a new architecture - Attentive Tensor Product Learning (ATPL) - to represent grammatical structures in deep learning models. ATPL is a new architecture to bridge this gap by exploiting Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, aiming to integrate deep learning with explicit language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of role-unbinding vectors of words via TPR-based deep neural network; 2) employing attention modules to compute TPR; and 3) integration of TPR with typical deep learning architectures including Long Short-Term Memory (LSTM) and Feedforward Neural Network (FFNN). The novelty of our approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. This ATPL approach is applied to 1) image captioning, 2) part of speech (POS) tagging, and 3) constituency parsing of a sentence. Experimental results demonstrate the effectiveness of the proposed approach.
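The core TPR operation that ATPL builds on, binding fillers to roles with outer products and recovering a filler by "unbinding" with its role vector, can be sketched in a few lines. This is our own toy example; the dimensions are arbitrary, and orthonormal role vectors are an assumption that makes unbinding exact.

```python
import numpy as np

# Bind filler vectors to role vectors via outer products, superpose the
# bindings into one tensor, then unbind a role to recover its filler.

rng = np.random.default_rng(0)
dim_f, n_roles = 4, 3

# QR of a random square matrix gives an orthogonal matrix; its rows serve
# as orthonormal role vectors.
roles = np.linalg.qr(rng.normal(size=(n_roles, n_roles)))[0]
fillers = rng.normal(size=(n_roles, dim_f))

# T = sum_i outer(f_i, r_i): a dim_f x n_roles "tensor product representation".
T = sum(np.outer(fillers[i], roles[i]) for i in range(n_roles))

# Unbinding with role j recovers filler j exactly when roles are orthonormal.
assert np.allclose(T @ roles[1], fillers[1])
```

ATPL's contribution, per the abstract, is learning the role-unbinding vectors without supervision and computing the TPR with attention modules; the algebra of binding and unbinding is as above.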

machine-learning
representation
natural-language-processing
recurrent-networks
time-series
to-understand
data-fusion
to-write-about
consider:parsing-GP-results

[1812.05433] Lenia - Biology of Artificial Life

10 days ago by Vaguery

We report a new model of artificial life called Lenia (from Latin lenis "smooth"), a two-dimensional cellular automaton with continuous space-time-state and generalized local rule. Computer simulations show that Lenia supports a great diversity of complex autonomous patterns or "lifeforms" bearing resemblance to real-world microscopic organisms. More than 400 species in 18 families have been identified, many discovered via interactive evolutionary computation. They differ from other cellular automata patterns in being geometric, metameric, fuzzy, resilient, adaptive, and rule-generic.

We present basic observations of the model regarding the properties of space-time and basic settings. We provide a broad survey of the lifeforms, categorize them into a hierarchical taxonomy, and map their distribution in the parameter hyperspace. We describe their morphological structures and behavioral dynamics, propose possible mechanisms of their self-propulsion, self-organization and plasticity. Finally, we discuss how the study of Lenia would be related to biology, artificial life, and artificial intelligence.
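A minimal Lenia-style update (continuous states, a smooth ring-shaped neighborhood kernel, a bell-shaped growth rule) can be sketched as follows; the kernel shape and growth parameters here are illustrative stand-ins, not the paper's calibrated values.

```python
import numpy as np

# One generalized-CA step: convolve the world with a ring kernel to get a
# neighborhood "potential", apply a growth function, and clip to [0, 1].

N, dt = 64, 0.1
rng = np.random.default_rng(1)
world = rng.random((N, N))

# Ring kernel centered on each cell, normalized to sum to 1.
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y) / 6.0
kernel = np.exp(-((r - 1.0) ** 2) / 0.15) * (r < 2)
kernel /= kernel.sum()
K = np.fft.fft2(np.fft.ifftshift(kernel))   # precompute for FFT convolution

def growth(u, mu=0.15, sigma=0.015):
    """Bell-shaped growth: positive near u = mu, negative elsewhere."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def step(world):
    u = np.real(np.fft.ifft2(np.fft.fft2(world) * K))  # neighborhood potential
    return np.clip(world + dt * growth(u), 0.0, 1.0)   # clipped Euler update

for _ in range(10):
    world = step(world)
assert world.shape == (N, N) and 0.0 <= world.min() and world.max() <= 1.0
```

The continuous state, smooth kernel, and small time step are what distinguish this family from classical binary cellular automata such as Life, and are the setting in which the paper's "lifeforms" arise.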

artificial-life
representation
cellular-automata
rather-interesting
to-write-about
to-implement
consider:simulation
consider:abstraction

Don't Doubt What You Saw With Your Own Eyes

10 days ago by mrbennett

On the MAGA kids and the Native American, Part 2

Representation

Stop Trusting Viral Videos

10 days ago by mrbennett

On the MAGA kids and the Native American

Representation

Zobrist hashing - Wikipedia

11 days ago by jwh

"Zobrist hashing (also referred to as Zobrist keys or Zobrist signatures) is a hash function construction used in computer programs that play abstract board games, such as chess and Go, to implement transposition tables, a special kind of hash table that is indexed by a board position and used to avoid analyzing the same position more than once."
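The construction is short enough to sketch for a toy board. Side-to-move, castling, and en-passant keys, which a real chess engine also folds in, are omitted here.

```python
import random

# One random 64-bit key per (square, piece) pair; a position's hash is the
# XOR of the keys of its occupied squares. Moving a piece updates the hash
# incrementally with two XORs instead of rehashing the whole board.

random.seed(0)
SQUARES, PIECES = 64, 12
keys = [[random.getrandbits(64) for _ in range(PIECES)] for _ in range(SQUARES)]

def full_hash(board):
    """board maps square index -> piece index."""
    h = 0
    for square, piece in board.items():
        h ^= keys[square][piece]
    return h

board = {12: 0, 28: 5}               # two pieces on arbitrary squares
h = full_hash(board)

# Moving piece 0 from square 12 to 20 needs only two XORs, not a full rehash:
h ^= keys[12][0]                     # XOR out the old (square, piece) key
h ^= keys[20][0]                     # XOR in the new one
board = {20: 0, 28: 5}
assert h == full_hash(board)
```

The XOR structure is the whole point: it is its own inverse, so undoing a move is the same two operations, which makes the scheme a natural fit for game-tree search with transposition tables.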

game
representation
data
function
dev
programming
gamedev

(90) Trump und der Staatsstreich der Konzerne | Kurzgefasst | ARTE - YouTube

15 days ago by asterisk2a

no longer 1 person 1 vote

book - John Ralston Saul - Der Markt frisst seine Kinder (roughly, "the market devours its children").

DonaldTrump
Donald
Trump
lobby
revolving
door
Washington
BarackObama
transparency
vested
interest
democracy
No
Representation
kleptocracy
greed
self-regulation
bailout
book
PAC
SuperPAC
poverty
inequality
social
mobility
American
Dream
Brexit
Macron
France
Germany
tax
evasion
avoidance
1%
Rentier
working
poor
trap
Austerity
GFC
oligarchy
aristocracy
Elite
taxation
short-termism

[1807.04437] Finite-State Classical Mechanics

16 days ago by Vaguery

Reversible lattice dynamics embody basic features of physics that govern the time evolution of classical information. They have finite resolution in space and time, don't allow information to be erased, and easily accommodate other structural properties of microscopic physics, such as finite distinct state and locality of interaction. In an ideal quantum realization of a reversible lattice dynamics, finite classical rates of state-change at lattice sites determine average energies and momenta. This is very different than traditional continuous models of classical dynamics, where the number of distinct states is infinite, the rate of change between distinct states is infinite, and energies and momenta are not tied to rates of distinct state change. Here we discuss a family of classical mechanical models that have the informational and energetic realism of reversible lattice dynamics, while retaining the continuity and mathematical framework of classical mechanics. These models may help to clarify the informational foundations of mechanics.

nonlinear-dynamics
cellular-automata
lattice-gases
complexology
representation
review
