representation   4729


The Theory of Concatenative Combinators
This article attempts to outline, in informal terms, a new theory of combinators, related to the theory of Combinatory Logic pioneered by Moses Schönfinkel, Haskell Curry, and others in the 1930s. Although not essential, an understanding of the classical theory of combinators may be helpful (see the links at the bottom of this article for some introductory material on combinators).

This topic is one which no doubt ought to be subjected to the rigor of modern mathematics; there are many theorems from classical combinatory logic (e.g., Church-Rosser) which we conjecture have analogues here. However, what follows is only a rough, but hopefully friendly, introduction to the subject.

The inspiration for this theory comes from the programming language Joy, designed by Manfred von Thun. It would be very helpful if the reader is basically familiar with Joy. In Joy, data is manipulated through a stack (there are no variables); in this way, it is similar to the programming language FORTH. However, Joy goes one step further and permits (and actively encourages) pushing programs themselves onto the stack, which can then be manipulated just like ordinary data.

In fact, the theory here is basically a subset of Joy in which programs are the only kind of data (i.e., numbers, string literals, and other kinds of data are not part of the theory here). To someone unfamiliar with combinatory logic, it might seem that no useful computations could be done without numbers, but it will soon be seen that numeric data can be simulated using concatenative combinators, just as it can be using classical combinators.
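The excerpt's central move, treating quoted programs as the only kind of data, can be sketched with a toy evaluator. This is an illustrative Python simulation, not Joy itself; the combinator names (dup, swap, zap, cat, unit, i) follow Joy's, while the list-based encoding is an assumption of the sketch.

```python
# Minimal sketch of a concatenative evaluator in which quoted programs
# are the only kind of data. Quotations are Python lists; words are strings.

def run(program, stack=None):
    """Execute a program (a list of words and quotations) against a stack."""
    stack = [] if stack is None else stack
    for word in program:
        if isinstance(word, list):        # a quotation: push the program itself
            stack.append(word)
        elif word == "dup":               # [A] -> [A] [A]
            stack.append(stack[-1])
        elif word == "swap":              # [A] [B] -> [B] [A]
            stack[-2], stack[-1] = stack[-1], stack[-2]
        elif word == "zap":               # [A] ->         (discard)
            stack.pop()
        elif word == "cat":               # [A] [B] -> [A B]   (concatenation)
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif word == "unit":              # [A] -> [[A]]       (quotation)
            stack.append([stack.pop()])
        elif word == "i":                 # [A] -> A executed  (dequotation)
            run(stack.pop(), stack)
        else:
            raise ValueError(f"unknown word: {word}")
    return stack
```

Here `unit` followed by `i` acts as the identity: quoting a program and then dequoting it leaves the stack unchanged.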
programming-language  concatenative-languages  Forth  Joy  engineering-design  representation  to-write-about  ReQ 
3 hours ago by Vaguery
OpenAI unveils multitalented AI that writes, translates, and slanders - The Verge
The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.

One reason to be excited about GPT-2, says Ani Kembhavi, a researcher at the Allen Institute for Artificial Intelligence, is that predicting text can be thought of as an “uber-task” for computers: a broad challenge that, once solved, will open a floodgate of intelligence.

“Asking the time or getting directions can both be thought of as question-answering tasks that involve predicting text,” Kembhavi tells The Verge. “So, hypothetically, if you train a good enough question-answering model, it can potentially do anything.”

Take GPT-2’s ability to translate text from English to French, for example. Usually, translation algorithms are fed hundreds of thousands of phrases in relevant languages, and the networks themselves are structured in such a way that they process data by converting input X into output Y. This data and network architecture give these systems the tools they need to progress on this task the same way snow chains help cars get a grip on icy roads.
natural-language-processing  openAI  machine-learning  rather-interesting  to-write-about  representation 
3 hours ago by Vaguery
[1902.06720] Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
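The abstract's key identity, that a wide network near initialization behaves like its first-order Taylor expansion in the parameters, can be checked numerically on a toy network. A hedged sketch in Python: the tiny architecture, the finite-difference Jacobian, and the size of the parameter update are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Check f(theta0 + delta, x) against the linearization
# f_lin = f(theta0, x) + J(theta0, x) @ delta  for a small update delta,
# as gradient descent would produce early in training.

rng = np.random.default_rng(0)

def net(theta, x, width=64):
    """One-hidden-layer scalar network; theta packs both weight layers."""
    w1 = theta[:width].reshape(width, 1)
    w2 = theta[width:].reshape(1, width)
    return float(w2 @ np.tanh(w1 @ x) / np.sqrt(width))

def jacobian(f, theta, x, eps=1e-6):
    """Central finite-difference gradient of f with respect to theta."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        grad[i] = (f(theta + step, x) - f(theta - step, x)) / (2 * eps)
    return grad

width = 64
theta0 = rng.standard_normal(2 * width)
x = np.array([[0.7]])
delta = 1e-3 * rng.standard_normal(theta0.shape)   # small parameter update

exact = net(theta0 + delta, x)
linear = net(theta0, x) + jacobian(net, theta0, x) @ delta
print(abs(exact - linear))   # discrepancy shrinks with ||delta||
```

The paper's claim is stronger: in the infinite-width limit the linear model governs the entire training trajectory, not just a single small step.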
neural-networks  representation  deep-learning  optimization  via:cshalizi  rather-interesting  define-your-terms  getting-the-same-place-by-the-back-road 
3 hours ago by Vaguery
I lost faith in the industry, burned out, but the cult of the tool saved me / Habr
I don’t know what the deal is — whether F# is a monumentally awesome technology, or it simply fits me perfectly, or it was created for these tasks specifically — and what’s the difference? What’s important is that at that moment I was sinking and I needed a lifeboat. Life threw me F#, and it pulled me out. Now it’s not just another soulless technology to me — it’s a huge emotional deal.

Now, when I hear someone scold F# — “A stillborn tech! A geek toy…” — I always remember the cold winter night, the burning car, the cigarette frozen in my mouth, depression and F# that pulled me out of it. It’s as if someone threw shit at my best friend.

It might look strange to an outsider, but if you had lived that day in my place, you would’ve reacted the same. I think that’s common to any technology cultist. They fell in love with their languages because they have an emotional attachment to the circumstances that made them discover them. And then I come and spit right into their soul. Who’s the idiot now? I am. I won’t do it again, I hope.
introspection  the-mangle-in-practice  representation  the-unruly-body-of-the-programmer  via:perturbations 
4 hours ago by Vaguery
[1811.03557] Efficient Numerical Algorithms based on Difference Potentials for Chemotaxis Systems in 3D
In this work, we propose efficient and accurate numerical algorithms based on Difference Potentials Method for numerical solution of chemotaxis systems and related models in 3D. The developed algorithms handle 3D irregular geometry with the use of only Cartesian meshes and employ Fast Poisson Solvers. In addition, to further enhance computational efficiency of the methods, we design a Difference-Potentials-based domain decomposition approach which allows mesh adaptivity and easy parallelization of the algorithm in space. Extensive numerical experiments are presented to illustrate the accuracy, efficiency and robustness of the developed numerical algorithms.
theoretical-biology  pattern-formation  reaction-diffusion  finite-elements  self-organization  rather-interesting  simulation  representation 
4 hours ago by Vaguery
[1806.06166] $alpha$-Expansions with odd partial quotients
We consider an analogue of Nakada's α-continued fraction transformation in the setting of continued fractions with odd partial quotients. More precisely, given α ∈ [(√5 − 1)/2, (√5 + 1)/2], we show that every irrational number x ∈ I_α = [α − 2, α) can be uniquely represented as a continued fraction with digits e_i(x; α) ∈ {±1} and d_i(x; α) ∈ 2ℕ − 1 determined by the iterates of the transformation φ_α of I_α. We also describe the natural extension of φ_α and prove that the endomorphism φ_α is exact.
continued-fractions  algebra  representation  approximation  to-understand  number-theory  dynamical-systems 
17 hours ago by Vaguery
[1902.06006] Contextual Word Representations: A Contextual Introduction
"This introduction aims to tell the story of how we put words into computers. It is part of the story of the field of natural language processing (NLP), a branch of artificial intelligence. It targets a wide audience with a basic understanding of computer programming, but avoids a detailed mathematical treatment, and it does not present any algorithms. It also does not focus on any particular application of NLP such as translation, question answering, or information extraction. The ideas presented here were developed by many researchers over many decades, so the citations are not exhaustive but rather direct the reader to a handful of papers that are, in the author's view, seminal. After reading this document, you should have a general understanding of word vectors (also known as word embeddings): why they exist, what problems they solve, where they come from, how they have changed over time, and what some of the open questions about them are. Readers already familiar with word vectors are advised to skip to Section 5 for the discussion of the most recent advance, contextual word vectors."
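The object the survey introduces, a table of word vectors compared by a similarity measure, can be shown in a few lines. The three-word table below is a made-up illustration (real embeddings are learned from corpora), and cosine similarity is one common choice of measure.

```python
import numpy as np

# Toy word-vector table: each word maps to a fixed dense vector.
# The numbers here are invented for illustration, not learned embeddings.
vectors = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.75, 0.70, 0.12]),
    "apple": np.array([0.10, 0.05, 0.90]),
}

def cosine(a, b):
    """Cosine similarity: 1 for parallel vectors, 0 for orthogonal ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # related words: high
print(cosine(vectors["king"], vectors["apple"]))  # unrelated words: low
```

The survey's Section 5 topic, contextual word vectors, replaces this fixed table with vectors computed per occurrence from the surrounding sentence.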
nlp  representation  embedding  via:chl  via:csantos 
2 days ago by arsyed
All in a Row Review | Shaun May
“Autism is a way of being. It is not possible to separate the person from the autism.

Therefore, when parents say, I wish my child did not have autism, what they’re really saying is, I wish the autistic child I have did not exist, and I had a different (non-autistic) child instead.

Read that again. This is what we hear when you mourn over our existence. This is what we hear when you pray for a cure. This is what we know, when you tell us of your fondest hopes and dreams for us: that your greatest wish is that one day we will cease to be, and strangers you can love will move in behind our faces.” (Sinclair 1993)
quote  autism  neurodivergence  disability  disabilityrights  representation  theatre  review 
5 days ago by Felicity
[1802.07089] Attentive Tensor Product Learning
This paper proposes a new architecture - Attentive Tensor Product Learning (ATPL) - to represent grammatical structures in deep learning models. ATPL bridges the gap between deep learning and explicit linguistic structure by exploiting Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science that aims to integrate deep learning with explicit language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of the role-unbinding vectors of words via a TPR-based deep neural network; 2) employing attention modules to compute the TPR; and 3) integration of TPR with typical deep learning architectures, including Long Short-Term Memory (LSTM) and feedforward neural networks (FFNN). The novelty of the approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. The ATPL approach is applied to 1) image captioning, 2) part-of-speech (POS) tagging, and 3) constituency parsing of a sentence. Experimental results demonstrate the effectiveness of the proposed approach.
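The TPR binding and role-unbinding the abstract refers to can be sketched directly: a structure is encoded as a sum of filler/role outer products, and a filler is recovered by multiplying the sum with its role vector. Fixed orthonormal roles are an assumption of this sketch; ATPL's point is to learn its unbinding vectors instead.

```python
import numpy as np

# Tensor Product Representation sketch: bind fillers f_i (e.g. word vectors)
# to roles r_i (e.g. structural positions) as T = sum_i outer(f_i, r_i).

rng = np.random.default_rng(1)
dim_f, dim_r, n = 8, 5, 3

fillers = rng.standard_normal((n, dim_f))
# Orthonormal role vectors: rows of a random orthogonal matrix (assumption).
roles = np.linalg.qr(rng.standard_normal((dim_r, dim_r)))[0][:n]

# Binding: superimpose all filler/role outer products into one matrix.
T = sum(np.outer(fillers[i], roles[i]) for i in range(n))

# Unbinding: with orthonormal roles, T @ r_i returns filler i exactly,
# because the cross terms r_j . r_i vanish for j != i.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))
```

With learned (non-orthonormal) roles, exact recovery becomes approximate, which is what the paper's attention modules and unsupervised role learning address.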
machine-learning  representation  natural-language-processing  recurrent-networks  time-series  to-understand  data-fusion  to-write-about  consider:parsing-GP-results 
10 days ago by Vaguery
[1812.05433] Lenia - Biology of Artificial Life
We report a new model of artificial life called Lenia (from Latin lenis "smooth"), a two-dimensional cellular automaton with continuous space-time-state and generalized local rule. Computer simulations show that Lenia supports a great diversity of complex autonomous patterns or "lifeforms" bearing resemblance to real-world microscopic organisms. More than 400 species in 18 families have been identified, many discovered via interactive evolutionary computation. They differ from other cellular automata patterns in being geometric, metameric, fuzzy, resilient, adaptive, and rule-generic.
We present basic observations of the model regarding the properties of space-time and basic settings. We provide a broad survey of the lifeforms, categorize them into a hierarchical taxonomy, and map their distribution in the parameter hyperspace. We describe their morphological structures and behavioral dynamics, and propose possible mechanisms for their self-propulsion, self-organization, and plasticity. Finally, we discuss how the study of Lenia relates to biology, artificial life, and artificial intelligence.
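The continuous-state update rule the abstract describes can be sketched in a few lines: convolve the world with a kernel, map the result through a growth function, and take a small clipped Euler step. The ring kernel shape and growth parameters below are illustrative guesses, not the paper's settings.

```python
import numpy as np

# One Lenia-style update: A' = clip(A + dt * G(K * A), 0, 1), with states in
# [0, 1], a smooth ring-shaped kernel K, and a bell-shaped growth mapping G.

def bell(x, mu, sigma):
    return np.exp(-(((x - mu) / sigma) ** 2) / 2)

size, radius, dt = 64, 13, 0.1
y, x = np.ogrid[-size // 2:size // 2, -size // 2:size // 2]
dist = np.sqrt(x * x + y * y) / radius

# Ring-shaped kernel, normalized to sum to 1.
K = bell(dist, 0.5, 0.15) * (dist < 1)
K /= K.sum()
K_fft = np.fft.fft2(np.fft.fftshift(K))    # toroidal (wrap-around) convolution

def step(A, mu=0.15, sigma=0.015):
    U = np.real(np.fft.ifft2(K_fft * np.fft.fft2(A)))   # neighborhood potential
    G = 2 * bell(U, mu, sigma) - 1                      # growth in [-1, 1]
    return np.clip(A + dt * G, 0, 1)                    # clipped Euler step

A = np.random.default_rng(2).random((size, size))
for _ in range(10):
    A = step(A)
print(A.min(), A.max())   # the state stays within [0, 1]
```

The paper's "lifeforms" correspond to localized patterns that persist under this iteration for particular kernel and growth settings, found largely by interactive search.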
artificial-life  representation  cellular-automata  rather-interesting  to-write-about  to-implement  consider:simulation  consider:abstraction 
10 days ago by Vaguery
Don't Doubt What You Saw With Your Own Eyes
On the MAGA kids and the Native American, Part 2
10 days ago by mrbennett
Stop Trusting Viral Videos
On the MAGA kids and the Native American
10 days ago by mrbennett
Zobrist hashing - Wikipedia
"Zobrist hashing (also referred to as Zobrist keys or Zobrist signatures [1]) is a hash function construction used in computer programs that play abstract board games, such as chess and Go, to implement transposition tables, a special kind of hash table that is indexed by a board position and used to avoid analyzing the same position more than once."
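The construction quoted above is small enough to show whole: draw one random bitstring per (piece, square) pair, and take a position's key to be the XOR of the bitstrings of its occupied squares. The piece encoding below is an assumed toy convention.

```python
import random

# Zobrist hashing sketch: one random 64-bit key per (piece, square) pair.
random.seed(42)
PIECES = [p + c for p in "PNBRQK" for c in "wb"]      # e.g. "Pw" = white pawn
table = {(piece, sq): random.getrandbits(64)
         for piece in PIECES for sq in range(64)}

def zobrist(position):
    """position: dict mapping square index (0..63) -> piece code."""
    key = 0
    for sq, piece in position.items():
        key ^= table[(piece, sq)]
    return key

pos = {12: "Pw", 52: "Pb", 4: "Kw", 60: "Kb"}
h1 = zobrist(pos)

# Incremental update: moving the white pawn from 12 to 28 XORs it out of the
# old square and into the new one; no full rehash is needed, which is why
# game engines index transposition tables this way.
h2 = h1 ^ table[("Pw", 12)] ^ table[("Pw", 28)]
pos2 = {28: "Pw", 52: "Pb", 4: "Kw", 60: "Kb"}
print(h2 == zobrist(pos2))
```

Because XOR is associative and commutative, the key is independent of the order in which pieces are folded in, and undoing a move is the same operation as making it.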
game  representation  data  function  dev  programming  gamedev 
11 days ago by jwh
[1807.04437] Finite-State Classical Mechanics
Reversible lattice dynamics embody basic features of physics that govern the time evolution of classical information. They have finite resolution in space and time, don't allow information to be erased, and easily accommodate other structural properties of microscopic physics, such as finitely many distinct states and locality of interaction. In an ideal quantum realization of a reversible lattice dynamics, finite classical rates of state change at lattice sites determine average energies and momenta. This is very different from traditional continuous models of classical dynamics, where the number of distinct states is infinite, the rate of change between distinct states is infinite, and energies and momenta are not tied to rates of distinct-state change. Here we discuss a family of classical mechanical models that have the informational and energetic realism of reversible lattice dynamics, while retaining the continuity and mathematical framework of classical mechanics. These models may help to clarify the informational foundations of mechanics.
nonlinear-dynamics  cellular-automata  lattice-gases  complexology  representation  review 
16 days ago by Vaguery


