
Machine Learning FAQ
Random Forests vs. SVMs

I would say that random forests are probably THE “worry-free” approach - if such a thing exists in ML: there are no real hyperparameters to tune (except perhaps the number of trees; typically, the more trees we have, the better). In contrast, there are a lot of knobs to be turned in SVMs: choosing the “right” kernel, the regularization penalty, the slack variable, …

Both random forests and SVMs are non-parametric models (i.e., their complexity grows as the number of training samples increases). Training a non-parametric model can thus be computationally more expensive than training a generalized linear model, for example. The more trees we have, the more expensive it is to build a random forest. Also, we can end up with a lot of support vectors in SVMs; in the worst case, we have as many support vectors as we have samples in the training set. Although there are true multi-class SVMs, the typical implementation for multi-class classification is One-vs.-All, so we have to train one SVM per class; decision trees or random forests, in contrast, can handle multiple classes out of the box.

To summarize, random forests are much simpler to train for a practitioner; it’s easier to find a good, robust model. The complexity of a random forest grows with the number of trees in the forest, and the number of training samples we have. In SVMs, we typically need to do a fair amount of parameter tuning, and in addition to that, the computational cost grows linearly with the number of classes as well.
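
As a rough illustration of that tuning gap, here is a minimal scikit-learn sketch (the dataset and parameter grid are placeholders, not recommendations): the random forest is nearly usable out of the box, while the SVM typically warrants a kernel/C/gamma search.

```python
# Minimal sketch: "worry-free" random forest vs. a grid-searched SVM.
# Assumes scikit-learn; the parameter grid below is illustrative only.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0)

# Random forest: essentially one knob (number of trees).
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)

# SVM: kernel, C, and gamma usually need to be searched.
svm = GridSearchCV(
    SVC(),
    param_grid={"kernel": ["rbf", "linear"],
                "C": [0.1, 1, 10],
                "gamma": ["scale", 0.01, 0.001]},
).fit(X_train, y_train)

print("RF accuracy: ", rf.score(X_test, y_test))
print("SVM accuracy:", svm.score(X_test, y_test))
```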
ai  howto  algorithm 
6 hours ago by janpeuker
Research Blog: TensorFlow Lattice: Flexibility Empowered by Prior Knowledge
We take advantage of the look-up table’s structure, which can be keyed by multiple inputs to approximate an arbitrarily flexible relationship, to satisfy monotonic relationships that you specify in order to generalize better. That is, the look-up table values are trained to minimize the loss on the training examples, but in addition, adjacent values in the look-up table are constrained to increase along given directions of the input space, which makes the model outputs increase in those directions.
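
To make the constraint concrete, here is a toy NumPy sketch of the idea, not the TensorFlow Lattice API; the keypoints, learning rate, and running-max projection (a crude stand-in for a true isotonic projection) are all illustrative. After each gradient step, adjacent table values are forced to be non-decreasing, so the interpolated output is monotonically increasing in the input.

```python
# Toy sketch of a monotonic 1-D look-up table (not the TF Lattice API).
import numpy as np

keypoints = np.linspace(0.0, 1.0, 11)      # fixed input keypoints
values = np.zeros_like(keypoints)          # trainable table values

x = np.random.rand(256)                    # toy training data
y = 2.0 * x + 0.1 * np.random.randn(256)   # noisy monotonic target

for _ in range(500):
    pred = np.interp(x, keypoints, values)          # piecewise-linear lookup
    # Squared-loss gradient w.r.t. the two keypoints bracketing each x
    idx = np.clip(np.searchsorted(keypoints, x) - 1, 0, len(keypoints) - 2)
    w = (x - keypoints[idx]) / (keypoints[idx + 1] - keypoints[idx])
    grad = np.zeros_like(values)
    np.add.at(grad, idx, (pred - y) * (1 - w))
    np.add.at(grad, idx + 1, (pred - y) * w)
    values -= 0.05 * grad / len(x)
    # Projection: force adjacent table values to be non-decreasing
    values = np.maximum.accumulate(values)
```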
ai  google  library  analytics 
6 hours ago by janpeuker
CS231n Convolutional Neural Networks for Visual Recognition
These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition.
For questions/concerns/bug reports contact Justin Johnson regarding the assignments, or contact Andrej Karpathy regarding the course notes. You can also submit a pull request directly to our git repo.
We encourage the use of the hypothes.is extension to annotate comments and discuss these notes inline.
ai  howto 
yesterday by janpeuker
The Unreasonable Effectiveness of Recurrent Neural Networks
Viewed this way, RNNs essentially describe programs. In fact, it is known that RNNs are Turing-Complete in the sense that they can simulate arbitrary programs (with proper weights). But similar to universal approximation theorems for neural nets you shouldn’t read too much into this. In fact, forget I said anything.
ai  learning  engineering  Emergence  reference 
yesterday by janpeuker
DAOs, DACs, DAs and More: An Incomplete Terminology Guide - Ethereum Blog
an AI is completely autonomous, whereas a DAO still requires heavy involvement from humans specifically interacting according to a protocol defined by the DAO in order to operate. We can classify DAOs, DOs (and plain old Os), AIs, and a fourth category, plain old robots, according to a good old quadrant chart, with another quadrant chart to classify entities that do not have internal capital, thus altogether making a cube.

DAOs == automation at the center, humans at the edges. Thus, on the whole, it makes most sense to see Bitcoin and Namecoin as DAOs, albeit ones that barely cross the threshold from the DA mark.
economics  ai  blockchain  reference 
4 days ago by janpeuker
[1705.07962] pix2code: Generating Code from a Graphical User Interface Screenshot
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e., iOS, Android, and web-based technologies).
gui  ai  research  design 
4 days ago by janpeuker
ML Algorithms addendum: Passive Aggressive Algorithms - Giuseppe Bonaccorso
Temporal - Time-based Algorithm

Crammer K., Dekel O., Keshet J., Shalev-Shwartz S., Singer Y., Online Passive-Aggressive Algorithms, Journal of Machine Learning Research 7 (2006) 551–585
ai  research 
10 days ago by janpeuker
Forget Killer Robots—Bias Is the Real AI Danger - MIT Technology Review
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it (see “Biased Algorithms Are Everywhere, and No One Seems to Care”).
psychology  bias  ai  analytics 
10 days ago by janpeuker
The Seven Deadly Sins of AI Predictions - MIT Technology Review
It turns out that many AI researchers and AI pundits, especially those pessimists who indulge in predictions about AI getting out of control and killing people, are similarly imagination-challenged. They ignore the fact that if we are able to eventually build such smart devices, the world will have changed significantly by then. We will not suddenly be surprised by the existence of such super-intelligences. They will evolve technologically over time, and our world will come to be populated by many other intelligences, and we will have lots of experience already. Long before there are evil super-intelligences that want to get rid of us, there will be somewhat less intelligent, less belligerent machines. Before that, there will be really grumpy machines. Before that, quite annoying machines. And before them, arrogant, unpleasant machines. We will change our world along the way, adjusting both the environment for new technologies and the new technologies themselves. I am not saying there may not be challenges. I am saying that they will not be sudden and unexpected, as many people think.
ai  research  article 
10 days ago by janpeuker
Is AI Riding a One-Trick Pony? - MIT Technology Review
Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled. A deep neural net that recognizes images can be totally stymied when you change a single pixel, or add visual noise that’s imperceptible to a human. Indeed, almost as often as we’re finding new ways to apply deep learning, we’re finding more of its limits. Self-driving cars can fail to navigate conditions they’ve never seen before. Machines have trouble parsing sentences that demand common-sense understanding of how the world works.

Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way—which perhaps explains why its intelligence can sometimes seem so shallow. Indeed, backprop wasn’t discovered by probing deep into the brain, decoding thought itself; it grew out of models of how animals learn by trial and error in old classical-conditioning experiments. And most of the big leaps that came about as it developed didn’t involve some new insight about neuroscience; they were technical improvements, reached by years of mathematics and engineering. What we know about intelligence is nothing against the vastness of what we still don’t know.
ai  research  psychology  algorithm 
10 days ago by janpeuker
Numenta.com • Guest Post: Behind the Idea – HTM Based Autonomous Agent
As a firm believer in the power of video games as a communication tool, my main goal was to explore the feasibility of an HTM based game agent which can explore its environment and learn behaviors that are rewarding. The literature is almost non-existent on an unsupervised HTM based autonomous agent. I proposed a real-time agent architecture involving a hierarchy of HTM layers that can learn action sequences with respect to the stimulated reward. This agent navigates a procedurally generated 3D environment and models the patterns streaming onto its visual sensor shown in Figures 1 and 2.
games  ai  research 
10 days ago by janpeuker
ALGORITHMS HAVE ALREADY GONE ROGUE
Bizarre interview with Tim O’Reilly where he praises Chinese authoritarianism and lauds Jeff Bezos
ai  culture  politics  from twitter_favs
11 days ago by janpeuker
How Computers Do Genocide
SHIBBOLETH MACHINES: Simulations of our machines show initial levels of apparently random behavior giving way, around generation 300, to high rates of cooperation that coincide with near-complete domination by a single machine that drives others to extinction. This enforced cooperation collapses around generation 450. From then on, the system alternates between these two extremes. Green and yellow bands correspond to eras of high and low cooperation, respectively.
..
Francis Fukuyama might have been thinking along these lines when he penned his end-of-history thesis in 1992. Though Fukuyama’s argument was rooted in 19th-century German philosophers such as Friedrich Nietzsche and Georg Wilhelm Friedrich Hegel, we might rewrite it this way: A sufficiently complex simulation of human life would terminate in a rational, liberal-democratic, and capitalist order standing against a scattered and dispersing set of enemies.
...
Prisoner's Dilemma Cellular Automata
ai  society  innovation 
23 days ago by janpeuker
Connectionism (Stanford Encyclopedia of Philosophy)
Philosophers have become interested in connectionism because it promises to provide an alternative to the classical theory of the mind: the widely held view that the mind is something akin to a digital computer processing a symbolic language. Exactly how and to what extent the connectionist paradigm constitutes a challenge to classicism has been a matter of hot debate in recent years.
ai  philosophy  psychology  research 
23 days ago by janpeuker
API.AI
Build delightful and natural conversational experiences
Give users new ways to interact with your product by building engaging voice and text-based conversational apps with API.AI.

Chatbot / Google Assistant / Actions on Google
ai  development  android 
24 days ago by janpeuker
Unsupervised Feature Learning and Deep Learning Tutorial
Softmax regression (or multinomial logistic regression) is a generalization of logistic regression to the case where we want to handle multiple classes. In logistic regression we assumed that the labels were binary: y(i) ∈ {0, 1}. We used such a classifier to distinguish between two kinds of hand-written digits. Softmax regression allows us to handle y(i) ∈ {1, …, K}, where K is the number of classes.
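
For reference, a minimal NumPy sketch of the softmax function the tutorial goes on to derive (shifting by the max is a standard numerical-stability trick, not part of the definition):

```python
# Softmax: turn K real-valued scores into a probability distribution.
import numpy as np

def softmax(scores):
    z = scores - np.max(scores)   # stability shift; doesn't change the result
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))   # e.g. K = 3 classes
print(probs, probs.sum())                    # probabilities summing to 1
```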
howto  algorithm  ai 
4 weeks ago by janpeuker
Meet Michelangelo: Uber's Machine Learning Platform
To address these issues, we created a DSL (domain-specific language) that modelers use to select, transform, and combine the features that are sent to the model at training and prediction times. The DSL is implemented as a subset of Scala. It is a pure functional language with a complete set of commonly used functions. With this DSL, we also provide the ability for customer teams to add their own user-defined functions. There are accessor functions that fetch feature values from the current context (the data pipeline in the case of an offline model, or the current request from the client in the case of an online model) or from the Feature Store.
scala  DSL  Architecture  ai 
5 weeks ago by janpeuker
It's Been 100 Years and the Robots Still Haven't Taken Over | Literary Hub
A similar, optimistic view of artificial intelligence informs Frank Herbert’s novel Destination: Void (1966). His four scientists aboard the spaceship Earthling—a psychiatrist, a life-systems engineer, a doctor who specializes in brain chemistry, and a computer scientist—represent the four disciplines most closely allied with the understanding and development of cognitive science. In the critical circumstances that attend their lone journey through space, they come to the realization that their survival depends on developing high-level artificial intelligence. Herbert’s view is clearly that machine intelligence in cooperation with human intelligence is our only hope for the future and that scientists are therefore indispensable for the very reasons that led to their vilification by the majority of novelists discussed hitherto.
ai  article  future 
5 weeks ago by janpeuker
Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data
scikit-learn algorithm cheat sheet
- classification (labeled categories)
- clustering (unlabeled categories)
- regression (quantity, binary)
- dimensionality reduction (distribution, visualization, investigation)
ai  howto  Python 
5 weeks ago by janpeuker
Fooling The Machine | Popular Science
“We show you a photo that’s clearly a photo of a school bus, and we make you think it’s an ostrich,” says Ian Goodfellow, a researcher at Google who has driven much of the work on adversarial examples.
By altering the images fed into a deep neural network by just four percent, researchers were able to trick it into misclassifying the image with a success rate of 97 percent. Even when they did not know how the network was processing the images, they could deceive the network with nearly 85 percent accuracy. That latter research, tricking the network without knowing its architecture, is called a black box attack. This is the first documented research of a functional black box attack on a deep learning system, which is important because this is the most likely scenario in the real world.
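
The canonical white-box construction behind such examples is Goodfellow et al.'s fast gradient sign method. A framework-agnostic sketch, assuming you can compute the loss gradient with respect to the input (`model_loss_grad` is a hypothetical stand-in, not a real API):

```python
# Fast gradient sign method (FGSM) sketch.
# `model_loss_grad` is a hypothetical function returning dLoss/dInput.
import numpy as np

def fgsm(image, label, model_loss_grad, eps=0.03):
    """Perturb `image` by eps in the direction that increases the loss."""
    grad = model_loss_grad(image, label)          # shape == image.shape
    adversarial = image + eps * np.sign(grad)     # tiny, targeted perturbation
    return np.clip(adversarial, 0.0, 1.0)         # keep valid pixel range
```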
ai  security  psychology 
6 weeks ago by janpeuker
Logistic Regression for Machine Learning - Machine Learning Mastery
Ultimately in predictive modeling machine learning projects you are laser focused on making accurate predictions rather than interpreting the results. As such, you can break some assumptions as long as the model is robust and performs well.

Binary Output Variable: This might be obvious as we have already mentioned it, but logistic regression is intended for binary (two-class) classification problems. It will predict the probability of an instance belonging to the default class, which can be snapped into a 0 or 1 classification.
Remove Noise: Logistic regression assumes no error in the output variable (y); consider removing outliers and possibly misclassified instances from your training data.
Gaussian Distribution: Logistic regression is a linear algorithm (with a non-linear transform on output). It does assume a linear relationship between the input variables and the output. Data transforms of your input variables that better expose this linear relationship can result in a more accurate model. For example, you can use log, root, Box-Cox, and other univariate transforms to better expose this relationship.
Remove Correlated Inputs: Like linear regression, the model can overfit if you have multiple highly correlated inputs. Consider calculating the pairwise correlations between all inputs and removing the highly correlated ones (a quick check is sketched below).
Fail to Converge: It is possible for the maximum likelihood estimation process that learns the coefficients to fail to converge. This can happen if there are many highly correlated inputs in your data or if the data is very sparse (e.g., lots of zeros in your input data).
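
A quick way to act on the "Remove Correlated Inputs" advice, sketched with pandas (the 0.9 threshold is arbitrary):

```python
# Drop one feature from each highly correlated pair (threshold is arbitrary).
import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    corr = df.corr().abs()
    # Look only at the upper triangle so each pair is considered once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)
```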
ai  algorithm  howto  mathematics 
6 weeks ago by janpeuker
New version of Cloud Datalab: Jupyter meets TensorFlow, cloud meets local deployment | Google Cloud Big Data and Machine Learning Blog  |  Google Cloud Platform
Google Cloud Datalab beta is an easy-to-use interactive tool for large-scale data exploration, analysis, and visualization using Google Cloud Platform services such as Google BigQuery, Google App Engine Flex, and Google Cloud Storage. It is based on Jupyter (formerly IPython).
ai  Python  visualization  cloud 
6 weeks ago by janpeuker
Human-Centered Machine Learning – Google Design – Medium
4. Weigh the costs of false positives and false negatives
Your ML system will make mistakes. It’s important to understand what these errors look like and how they might affect the user’s experience of the product. In one of the questions in point 2 we mentioned something called the confusion matrix. This is a key concept in ML and describes what it looks like when an ML system gets it right and gets it wrong.
ai  google  research  design 
8 weeks ago by janpeuker
Introduction to Local Interpretable Model-Agnostic Explanations (LIME) - O'Reilly Media
A technique to explain the predictions of any machine learning classifier.

By Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, August 12, 2016

Machine learning is at the core of many recent advances in science and technology. With computers beating professionals in games like Go, many people have started asking if machines would also make for better drivers or even better doctors.

In many applications of machine learning, users are asked to trust a model to help them make decisions. A doctor will certainly not operate on a patient simply because “the model said so.” Even in lower-stakes situations, such as when choosing a movie to watch from Netflix, a certain measure of trust is required before we surrender hours of our time based on a model. Despite the fact that many machine learning models are black boxes, understanding the rationale behind the model's predictions would certainly help users decide when to trust or not to trust their predictions. An example is shown in Figure 1, in which a model predicts that a certain patient has the flu. The prediction is then explained by an "explainer" that highlights the symptoms that are most important to the model. With this information about the rationale behind the model, the doctor is now empowered to trust the model—or not.
ai  research  documentation 
8 weeks ago by janpeuker
Project Naptha
Project Naptha: highlight, copy, and translate text from any image

Project Naptha started out as "Images as Text", the 2nd-place winning entry at 2013's HackMIT Hackathon.

It launched 5 months later, reaching 200,000 users in a week, and was featured on the front page of Hacker News, Reddit, Engadget, Lifehacker, The Verge, and PCWorld.
ai  visualization  Software 
8 weeks ago by janpeuker
Introducing Seldon Deploy – Open Source Machine Learning – Medium
Kubernetes

Model Explanations
In May 2018 the new General Data Protection Regulation (GDPR) will give consumers a legal “right to explanation” from organisations that use algorithmic decision making.
And as more important decisions are being made and automated on the basis of machine learning models, organisations are seeking to understand why models give a certain output. This is a tough challenge considering there are many types of models with varying degrees of interpretability. For example, it's easy to traverse the decision trees generated by a random forest algorithm, but the connections between the nodes and layers of a neural network model are beyond human comprehension.
ai  devops  cloud  library 
8 weeks ago by janpeuker
Hands-On Machine Learning with Scikit-Learn and TensorFlow - O'Reilly Media
Explore the machine learning landscape, particularly neural nets
Use scikit-learn to track an example machine-learning project end-to-end
Explore several training models, including support vector machines, decision trees, random forests, and ensemble methods
Use the TensorFlow library to build and train neural nets
Dive into neural net architectures, including convolutional nets, recurrent nets, and deep reinforcement learning
Learn techniques for training and scaling deep neural nets
Apply practical code examples without acquiring excessive machine learning theory or algorithm details
book  Python  ai 
8 weeks ago by janpeuker
Grotesque and Gorgeous: 100,000 Art and Medicine Images Released for Open Use
These images, freely available from the Wellcome Library, exist at the intersection of art and medicine.
ai  visualization  medicine  from twitter_favs
9 weeks ago by janpeuker
Research Blog: Facets: An Open Source Visualization Tool for Machine Learning Training Data
Facets, an open source visualization tool to aid in understanding and analyzing ML datasets. Facets consists of two visualizations that allow users to see a holistic picture of their data at different granularities. Get a sense of the shape of each feature of the data using Facets Overview, or explore a set of individual observations using Facets Dive.
visualization  ai  analytics 
11 weeks ago by janpeuker
CS231n Convolutional Neural Networks for Visual Recognition
Human Perception - Convolutional Neural Networks take advantage of the fact that the input consists of images and they constrain the architecture in a more sensible way. In particular, unlike a regular Neural Network, the layers of a ConvNet have neurons arranged in 3 dimensions: width, height, depth. (Note that the word depth here refers to the third dimension of an activation volume, not to the depth of a full Neural Network, which can refer to the total number of layers in a network.)
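
A CS231n staple that follows from this volume arrangement is the output-size arithmetic for a conv layer. A small sketch (the example numbers are made up):

```python
# Spatial output size of a conv layer: W_out = (W - F + 2P) / S + 1
def conv_output_size(w, f, p, s):
    assert (w - f + 2 * p) % s == 0, "filter doesn't tile the input evenly"
    return (w - f + 2 * p) // s + 1

# e.g. a 32x32x3 input, 5x5 filters, padding 2, stride 1 -> 32x32 output
print(conv_output_size(w=32, f=5, p=2, s=1))   # 32
```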
ai  algorithm  psychology 
11 weeks ago by janpeuker
Probabilistic programming from scratch
A simple algorithm for Bayesian inference

We can do that using Bayesian inference. Bayesian inference is a method for updating your knowledge about the world with the information you learn during an experiment. It derives from a simple equation called Bayes’s Rule. In its most advanced and efficient forms, it can be used to solve huge problems. But we’re going to use a specific, simple inference algorithm called Approximate Bayesian Computation (ABC), which is barely a couple of lines of Python:
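
A sketch in the spirit of the article, with invented experiment numbers: sample a parameter from the prior, simulate the experiment, and keep only draws that reproduce the observed data.

```python
# Approximate Bayesian Computation (ABC) sketch: posterior over a success rate.
import random

n_trials, k_observed = 100, 7    # invented experiment: 7 successes in 100 trials
posterior = []
while len(posterior) < 1_000:
    p = random.random()                                  # draw from uniform prior
    simulated = sum(random.random() < p for _ in range(n_trials))
    if simulated == k_observed:                          # keep draws matching the data
        posterior.append(p)

print(sum(posterior) / len(posterior))   # posterior mean, roughly (k+1)/(n+2) ≈ 0.078
```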
ai  engineering  mathematics  analytics 
july 2017 by janpeuker
Rise of the machines: who is the ‘internet of things’ good for? | Technology | The Guardian
There is a clear philosophical position, even a worldview, behind all of this: that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion. As applied to the affairs of cities, this is effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)
iot  ai  article  philosophy 
june 2017 by janpeuker
TensorFlow Linear Model Tutorial  |  TensorFlow
How Logistic Regression Works

Finally, let's take a minute to talk about what the Logistic Regression model actually looks like, in case you're not already familiar with it. We'll denote the label as Y and the set of observed features as a feature vector x = [x_1, x_2, ..., x_d]. We define Y = 1 if an individual earned more than 50,000 dollars and Y = 0 otherwise. In Logistic Regression, the probability of the label being positive (Y = 1) given the features x is given as:

P(Y = 1 | x) = 1 / (1 + exp(-(w·x + b)))

where w = [w_1, w_2, ..., w_d] are the model weights for the features x_1, x_2, ..., x_d, and b is a constant that is often called the bias of the model. The equation consists of two parts: a linear model, w·x + b, and a logistic function, 1/(1 + exp(-z)).
...
Model training is an optimization problem: The goal is to find a set of model weights (i.e. model parameters) to minimize a loss function defined over the training data, such as logistic loss for Logistic Regression models. The loss function measures the discrepancy between the ground-truth label and the model's prediction. If the prediction is very close to the ground-truth label, the loss value will be low; if the prediction is very far from the label, then the loss value would be high.
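
Putting the two pieces together, a NumPy sketch of the prediction and the logistic (log) loss it is trained to minimize:

```python
# Logistic regression forward pass and log loss, as described above.
import numpy as np

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))    # P(Y = 1 | x)

def log_loss(y_true, y_prob, eps=1e-12):
    y_prob = np.clip(y_prob, eps, 1 - eps)       # avoid log(0)
    return -np.mean(y_true * np.log(y_prob)
                    + (1 - y_true) * np.log(1 - y_prob))
```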
mathematics  ai  howto 
june 2017 by janpeuker
TensorFlow Lite Introduces Machine Learning in Mobile Apps
TensorFlow Lite, a streamlined version of TensorFlow for mobile, was announced by Dave Burke, vice president of engineering for Android. Mr. Burke said: “TensorFlow Lite will leverage a new neural network API to tap into silicon-specific accelerators, and over time we expect to see DSPs (Digital Signal Processors) specifically designed for neural network inference and training.” He also added: “We think these new capabilities will help power the next generation of on-device speech processing, visual search, augmented reality, and more.” TensorFlow Lite comes at a time when silicon manufacturers like Qualcomm have begun adding on-chip machine learning capabilities to their products, and as OEMs have increasingly been adopting varying degrees of “AI” into their ROMs.
android  ai  library 
june 2017 by janpeuker
A16Z AI Playbook
Machine Learning refers to a broad set of computer science techniques that let us give computers, as Arthur Samuel put it in 1959, "the ability to learn without being explicitly programmed." There are many different types of machine learning algorithms, including reinforcement learning, genetic algorithms, rule-based machine learning, learning classifier systems, and decision trees. The Wikipedia article has many more examples. The current darling of these machine learning algorithms is deep learning, which we'll discuss in detail (as well as code) later in this guide.
ai  reference  learning 
june 2017 by janpeuker
OpenAI Baselines: DQN
See the world as your agent does: like most deep learning approaches, for DQN we tend to convert images of our environments to grayscale to reduce the computation required during training. This can create its own bugs: when we ran our DQN algorithm on Seaquest we noticed that our implementation was performing poorly. When we inspected the environment we discovered this was because our post-processed images contained no fish, as this picture shows.
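
The preprocessing in question is usually just a luminance conversion plus downsampling. A sketch (the weights are the standard luma coefficients; real implementations also resize to a fixed size, e.g. 84x84 as in the DQN paper):

```python
# Typical Atari frame preprocessing for DQN: grayscale + downsample.
import numpy as np

def preprocess(frame_rgb):                    # frame_rgb: (210, 160, 3) uint8
    gray = frame_rgb @ np.array([0.299, 0.587, 0.114])   # luma conversion
    gray = gray[::2, ::2]                     # crude 2x downsample by striding
    return gray.astype(np.uint8)              # smaller input for the network
```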
ai  Patterns 
june 2017 by janpeuker
You can probably use deep learning even if your data isn't that big
After all, there is a 70+ year history of super flexible models in machine learning and statistics, and I don’t think neural nets are a priori any more flexible than other algorithms of the same complexity.

Here’s a quick run down of some reasons why I think they’ve been successful:

Everything is an exercise in the bias/variance tradeoff. Just to be clear, the actual argument I think Jeff is making is about model complexity and the bias/variance trade off. If you don’t have a lot of data it’s probably better to go with a simple model (high bias/low variance) than a really complex one (low bias/high variance). I think that this is objectively good advice in most cases, however…
ai  algorithm  Patterns 
june 2017 by janpeuker
In Deep Learning, Architecture Engineering is the New Feature Engineering
A major reason for the resurgence in popularity of neural networks were their impressive results from the ImageNet contest in 2012. The model produced and documented by Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. That's a staggeringly large improvement - and the model did allow the removal of many hand crafted features too!

CNNs are a prime example of the promise of deep learning - removing complicated and problematic hand crafted feature engineering. Edge detection is no longer handled by an explicitly coded human program but is instead learned by the first convolutional layer. Indeed, the filters learned by the first layer of CNNs when given images are highly reminiscent of Gabor filters, traditionally used for edge and texture detection.
ai  algorithm  engineering 
may 2017 by janpeuker
Biologically Inspired Software Architecture for Deep Learning
The folks at Google wrote a paper (a long time ago, meaning 2014), “Machine Learning: The High-Interest Credit Card of Technical Debt”, that enumerates many of the difficulties that we need to consider when building software that includes machine learning or deep learning sub-components. Contrary to the popular perception that Deep Learning systems can be “self-driving”, there is a massive ongoing maintenance cost when machine learning is used. In the Google paper, the authors enumerate many risk factors, design patterns, and anti-patterns that need to be taken into consideration in an architecture. These include patterns such as boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, and changes in the external world.
Architecture  ai  Patterns 
may 2017 by janpeuker
Google.ai
Our mission is to organize the world’s information and make it universally accessible and useful, and AI is enabling us to do that in incredible new ways - solving problems for our users, our customers, and the world.
google  ai  education 
may 2017 by janpeuker
Research Blog: Introducing the TensorFlow Research Cloud
We recommend participating in the Cloud TPU Alpha program if you are interested in any of the following:
Accelerating training of proprietary ML models; models that take weeks to train on other hardware can be trained in days or even hours on Cloud TPUs
Accelerating batch processing of industrial-scale datasets: images, videos, audio, unstructured text, structured data, etc.
Processing live requests in production using larger and more complex ML models than ever before
ai  research  performance  cloud 
may 2017 by janpeuker
Neuralink and the Brain's Magical Future - Wait But Why
The mind-bending bigness of Neuralink’s mission, combined with the labyrinth of impossible complexity that is the human brain, made this the hardest set of concepts yet to fully wrap my head around—but it also made it the most exhilarating when, with enough time spent zoomed on both ends, it all finally clicked. I feel like I took a time machine to the future, and I’m here to tell you that it’s even weirder than we expect.
research  article  ai  visualization  medicine 
may 2017 by janpeuker
How the TensorFlow team handles open source support - O'Reilly Media
It's impossible for every developer to test all these combinations manually when they make a change, so we have a suite of automated tests running on most of the supported platforms, all controlled by the Jenkins automation system. Keeping this working takes a lot of time and effort because there are always operating system updates, hardware problems, and other issues unrelated to TensorFlow that can cause the tests to fail. There's a team of engineers devoted to making the whole testing process work. That team has saved us from a lot of breakages we’d have suffered otherwise, so the investment has been worth it.
ai  library  opensource  devops 
may 2017 by janpeuker
Jupyter Notebook - FloydHub
Jupyter or IPython Notebooks allow you to create and share documents that contain live code, equations, visualizations, and explanatory text. They are great for interactive development of code. This guide will show you how to run a Jupyter notebook on Floyd... Clone a project which contains deep learning Jupyter notebooks; see some great TensorFlow notebook examples at floydhub/tensorflow-notebooks-examples. Then initialize a floyd project inside that.
ai  Python  cloud 
may 2017 by janpeuker
Semantics derived automatically from language corpora contain human-like biases | Science
Machines learn what people know implicitly
AlphaGo has demonstrated that a machine can learn how to do things that people spend many years of concentrated study learning, and it can rapidly learn how to do them better than any human can. Caliskan et al. now show that machines can learn word associations from written texts and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT) (see the Perspective by Greenwald). Why does this matter? Because the IAT has predictive value in uncovering the association between concepts, such as pleasantness and flowers or unpleasantness and insects. It can also tease out attitudes and beliefs—for example, associations between female names and family or male names and career. Such biases may not be expressed explicitly, yet they can prove influential in behavior.
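
Caliskan et al.'s measure boils down to differences of cosine similarities between word vectors. A minimal sketch of the per-word association score, with a hypothetical `vec` lookup (not the authors' code):

```python
# Word-association score in the spirit of Caliskan et al.: how much closer is
# word w to attribute set A than to attribute set B, in embedding space?
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """w: word vector; A, B: lists of attribute word vectors."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

# e.g. association(vec["flower"], [vec["pleasant"]], [vec["unpleasant"]])
# where `vec` is some pretrained embedding lookup (hypothetical here).
```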
ai  reference  psychology 
may 2017 by janpeuker
PHEME: Computing Veracity – the Fourth Challenge of Big Data | Computing Veracity – the Fourth Challenge of Big Data
Fake News - Identifying Phemes (Rumorous Memes)
We are concentrating on identifying four types of phemes and modelling their spread across social networks and online media: speculation, controversy, misinformation, and disinformation. However, it is particularly difficult to assess whether a piece of information falls into one of these categories in the context of social media. The quality of the information here is highly dependent on its social context and, up to now, it has proven very challenging to identify and interpret this context automatically.
literature  psychology  ai 
april 2017 by janpeuker
Research Blog: Distill: Supporting Clarity in Machine Learning
That’s why, in collaboration with OpenAI, DeepMind, YC Research, and others, we’re excited to announce the launch of Distill, a new open science journal and ecosystem supporting human understanding of machine learning. Distill is an independent organization, dedicated to fostering a new segment of the research community.
ai  visualization  research 
april 2017 by janpeuker
Frontiers | Editorial: Improving Bayesian Reasoning: What Works and Why? | Cognition
However, attention to problems that have a temporal component is not lacking in this collection: Tubau et al. (2015) provide an insightful and comprehensive review of the Monty Hall Problem and Baratgin (2015) uses the two-player version of that problem to expose logical and terminological breakdowns in earlier theoretical analyses. Mandel (2014b) explores the perhaps even more complex Sleeping Beauty problem, which involves belief revision under conditions of asynchrony, to highlight how visual representations using quasi-logic trees can help clarify points of philosophical disagreement in the literature.
ai  book  research  psychology 
march 2017 by janpeuker
Improving Bayesian Reasoning: What Works and Why? | Frontiers Research Topic
Bias - Bayes’ theorem, named after English statistician, philosopher, and Presbyterian minister Thomas Bayes, offers a method for updating one’s prior probability of a hypothesis H on the basis of new data D such that P(H|D) = P(D|H)P(H)/P(D). The first wave of psychological research, pioneered by Ward Edwards, revealed that people were overly conservative in updating their posterior probabilities (i.e., P(H|D)). A second wave, spearheaded by Daniel Kahneman and Amos Tversky, showed that people often ignored prior probabilities or base rates, where the priors had a frequentist interpretation, and hence were not Bayesians at all. In the 1990s, a third wave of research spurred by Leda Cosmides and John Tooby and by Gerd Gigerenzer and Ulrich Hoffrage showed that people can reason more like a Bayesian if only the information provided takes the form of (non-relativized) natural frequencies.
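
A worked example of the update rule, with invented numbers, followed by the natural-frequency rephrasing the third wave advocates: take P(H) = 0.01, P(D|H) = 0.8, and P(D|not-H) = 0.1. Then P(H|D) = P(D|H)P(H) / [P(D|H)P(H) + P(D|not-H)P(not-H)] = (0.8 × 0.01) / (0.8 × 0.01 + 0.1 × 0.99) ≈ 0.075. In natural frequencies: of 1,000 cases, 10 have H, of whom 8 show D; of the 990 without H, about 99 also show D; so only 8 of the 107 cases showing D actually have H. This is the format that Gigerenzer and Hoffrage found people handle far more accurately.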
research  psychology  ai  bias 
march 2017 by janpeuker
Overcoming Catastrophic Forgetting
One of the critical steps towards artificial general intelligence is the ability to continually learn - that is, an agent should be capable of learning new tasks without forgetting how to perform old tasks. Yet this simple property is something that artificial neural networks have historically failed to display. McCloskey and Cohen (1989) first noted this inability by showing that a neural network trained to add 1 to a digit, and then trained to add 2 to a digit, would be unable to add 1 to a digit. They labeled this problem catastrophic forgetting due to neural networks' tendencies while learning a new task to quickly overwrite, and thus lose, the parameters necessary to perform well at a previous task.
ai  learning  howto 
march 2017 by janpeuker
Appreciating Art with Algorithms
More experimentations with colors and color palettes. I am pretty happy with the shapes being drawn. The next step is to continue experimenting with various colors and perhaps generate preset palettes. By converting an image into a mosaic you naturally remove detail from the photo. Thus, what holds up an image is usually going to be a dynamic range of colors and contrast. It would be great if given an image that doesn’t have such ranges that I can provide the user a set of color palettes to enhance the image prior to applying the mosaic effect.
algorithm  art  Python  ai 
march 2017 by janpeuker
Assessing and Comparing Classifier Performance with ROC Curves
One way to overcome the problem of having to choose a cutoff is to start with a threshold of 0.0, so that every case is considered as positive. We correctly classify all of the positive cases, and incorrectly classify all of the negative cases. We then move the threshold over every value between 0.0 and 1.0, progressively decreasing the number of false positives and increasing the number of true negatives.

The true positive rate (sensitivity) can then be plotted against the false positive rate (1 – specificity) for each threshold used. The resulting graph is called a Receiver Operating Characteristic (ROC) curve (Figure 2). ROC curves were developed for use in signal detection in radar returns in the 1950s, and have since been applied to a wide range of problems.
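
The threshold sweep described above fits in a few lines. A NumPy sketch that traces the ROC points directly from scores (no plotting; inputs are arbitrary toy arrays):

```python
# Trace ROC points by sweeping the decision threshold from 0.0 to 1.0.
import numpy as np

def roc_points(y_true, y_score, n_thresholds=101):
    points = []
    for t in np.linspace(0.0, 1.0, n_thresholds):
        y_pred = (y_score >= t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        tpr = tp / max(np.sum(y_true == 1), 1)   # sensitivity
        fpr = fp / max(np.sum(y_true == 0), 1)   # 1 - specificity
        points.append((fpr, tpr))
    return points
```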
ai  visualization  mathematics 
march 2017 by janpeuker
Expect Deeper and Cheaper Machine Learning - IEEE Spectrum
Norm Jouppi, a hardware engineer at Google, announced the existence of the Tensor Processing Unit two months after the Go match, explaining in a blog post that Google had been outfitting its data centers with these new accelerator cards for more than a year. Google has not shared exactly what is on these boards, but it’s clear that it represents an increasingly popular strategy to speed up deep-learning calculations: using an application-specific integrated circuit, or ASIC.

[Chart: Deep-Learning Software Revenues (US $, billions). Revenues from deep-learning software should soon exceed $1 billion. Source: Tractica]
Another tactic being pursued (primarily by Microsoft) is to use field-programmable gate arrays (FPGAs), which provide the benefit of being reconfigurable if the computing requirements change. The more common approach, though, has been to use graphics processing units, or GPUs, which can perform many mathematical operations in parallel. The foremost proponent of this approach is GPU maker Nvidia.
hardware  ai  performance 
march 2017 by janpeuker
We Need to Tell Better Stories About Our AI Future - Motherboard
We will continue to come up against the AI inscrutability problem, so we might need to look to more experimental forms of narrative to articulate that unknowability and ontological novelty. Think about nonlinear narrative, or postmodern and impressionistic storytelling. Outputs from Deep Dream illustrate how AI pattern recognition processes can be overly sensitive, seeing eyes or faces in things when they aren't there (known as "pareidolia"), producing uncanny and surreal images that while visually meaningless, give us some perspective on how the system "sees."
article  ai  literature 
march 2017 by janpeuker
Technically Sentient - Inside
The most interesting thing I read this week was this MIT Tech Review piece about AI learning to make AI software.  This is so crazy cool, and has so many interesting implications that I may have to do a few commentary pieces in coming newsletters about what it means.  But for now, let me make two points.  First, it's a breakthrough that will accelerate A.I. even faster, which is exciting but scary.  Second, it may exacerbate the problem of introspection.  Now that machine learning systems are designing other machine learning systems, we will take what was already a black box and now put it in a bigger maze of black boxes.  Automated machine learning is the trend to watch in A.I.  As it expands beyond deep learning to other types of A.I., expect to see these systems mix and match A.I. approaches more than we have in the past, and that should lead to awesome new gains.
blog  podcast  ai 
march 2017 by janpeuker
Introspection in AI – Building Intelligent Probabilistic Systems
Like any source of intuition, introspection is just a starting point. It’s not a rigorous scientific argument, and it ultimately needs to be backed up with math and experiments. What we really want is an explanation for the heuristics people use. In other words,

figure out what regularities our heuristics are exploiting
see if existing algorithms already exploit those same regularities
if not, come up with one that does
justify it theoretically and empirically
ai  research  literature 
march 2017 by janpeuker
Explainable Artificial Intelligence
New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user. Our strategy is to pursue a variety of techniques in order to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.
research  ai  military 
february 2017 by janpeuker
AI Principles - Future of Life Institute
Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
ai  reference  future  philosophy 
february 2017 by janpeuker
What is probabilistic programming? - O'Reilly Media
A probabilistic programming language is a high-level language that makes it easy for a developer to define probability models and then “solve” these models automatically. These languages incorporate random events as primitives and their runtime environment handles inference.
engineering  ai  model  Emergence 
february 2017 by janpeuker
Research Blog: Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System
We call this “zero-shot” translation, shown by the yellow dotted lines in the animation. To the best of our knowledge, this is the first time this type of transfer learning has worked in Machine Translation.

The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”? Using a 3-dimensional representation of internal network data, we were able to take a peek into the system as it translates a set of sentences between all possible pairs of the Japanese, Korean, and English languages.
ai  research  literature 
february 2017 by janpeuker
The AI Threat Isn't Skynet. It's the End of the Middle Class | WIRED
In the US, the number of manufacturing jobs peaked in 1979 and has steadily decreased ever since. At the same time, manufacturing has steadily increased, with the US now producing more goods than any other country but China. Machines aren’t just taking the place of humans on the assembly line. They’re doing a better job. And all this before the coming wave of AI upends so many other sectors of the economy. “I am less concerned with Terminator scenarios,” MIT economist Andrew McAfee said on the first day at Asilomar. “If current trends continue, people are going to rise up well before the machines do.”
society  economics  ai  reference 
february 2017 by janpeuker
Game Theory reveals the Future of Deep Learning – Intuition Machine – Medium
The classical view of machine learning is that the problem can be cast as an optimization problem where all that is needed are algorithms that are able to search for an optimal solution. However, with machine learning we want to build machines that don’t overfit the data but rather are able to perform well on data they have yet to encounter. We want these machines to make predictions about the unknown. This requirement, which is called generalization, is very different from the classical optimization problem. It is very different from the classical dynamics problem, where all information is expected to be available. That is why a lot of the engineering in deep learning requires additional constraints on the optimization problem. These, to my disliking, are called ‘priors’ in some texts, and are also called regularizations in an optimization problem.
ai  mathematics  strategy  model 
january 2017 by janpeuker
Three arguments against the singularity - Charlie's Diary
we're not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we're going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs "intelligently". But it will be the intelligence of the serving hand rather than the commanding brain, and we're only at risk of disaster if we harbour self-destructive impulses.

We may eventually see mind uploading, but there'll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run Nozick's experience machine thought experiment for real, I'm not sure we'd be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.
ai  future  cybernetics 
january 2017 by janpeuker
Who Will Command The Robot Armies?
On arrival there, you get a little card telling you you'll be killed for drug smuggling. Curiously, they only give it to you once you're already over the border.

But the point is made. Don't mess with Singapore.

Singaporeans have traded a great deal of their political and social freedom for safety and prosperity. The country is one of the most invasive surveillance states in the world, and it's also a clean, prosperous city with a strong social safety net.


The trade-off is one many people seem happy with. While Dubai is morally odious, I feel ambivalent about Singapore. It's a place that makes me question my assumptions about surveillance and social control.
future  society  urbanism  ai 
december 2016 by janpeuker
Moving from AI-assisted humans to human-assisted AI | VentureBeat | Bots | by David Pichsenmeister, Oratio
This is the natural evolution of human-assisted AI. Just like with AI-assisted humans, each message is analyzed and classified using AI. The main difference, however, is that if the AI achieves a certain confidence level with its suggested response (e.g., confidence >= 90%), the message is sent out automatically.

Only messages with a lower confidence level are forwarded to human operators to be reviewed, sent out, and then used to train the bot again. The downside of this approach is that high availability of operators is needed to shorten response times to a minimum, especially if the user thinks he or she is messaging with a bot. On the other hand, only a few operators are needed to supervise the bot.
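
The routing logic the article describes is essentially a confidence gate. A hedged sketch (the classifier, queue, and 0.9 threshold are placeholders, not a real API):

```python
# Human-assisted AI: auto-send confident replies, escalate the rest.
CONFIDENCE_THRESHOLD = 0.90   # illustrative; tune per product

def handle_message(message, classifier, human_queue, send):
    reply, confidence = classifier(message)   # hypothetical classifier API
    if confidence >= CONFIDENCE_THRESHOLD:
        send(reply)                           # bot answers autonomously
    else:
        human_queue.append((message, reply))  # operator reviews and sends;
                                              # the outcome retrains the bot
```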

AI / NLU Level needed: High
Human availability: High
Development costs: High
Operating costs: Low

A glimpse into the future
One of the issues right now is that many bot developers don’t put that much effort into designing and training conversations. Machine learning and AI are already at a very decent level, but many teams and developers fail to model conversations properly. It also helps to have a big set of actual customer data or to train your bot with new incoming conversations.
ai  reference 
december 2016 by janpeuker
Computational Law, Symbolic Discourse, and the AI Constitution
One can get deep into the foundations of science and philosophy about this. Yes, there’s a computational universe out there of all the possible rules by which systems can operate (and, yes, I’ve spent a good part of my life studying the basic science of this). And there’s our physical universe that presumably operates according to certain rules from the computational universe. But from these rules can emerge all sorts of complex behavior, and in fact the phenomenon of computational irreducibility implies that in a sense there’s no limit to what can be built up.
ai  article  literature  Emergence 
december 2016 by janpeuker
Nuts and Bolts of Applying Deep Learning — Summary – Medium
Bias/variance trade-offs
A Machine Learning engineer has a lot of decisions to make while building an ML system: How much data is appropriate for the task at hand, and when should you go out and get more? Should the model be trained for longer? When is it time to rethink the architecture? When should regularization terms be introduced or removed? To answer these questions in a systematic manner, Ng brings the good old bias/variance analysis into the mix.
ai  engineering  Architecture 
november 2016 by janpeuker
Mobileye Curtailing Tesla Autonomous Relationship In Favor Of BMW, Intel
“MobilEye’s ability to evolve its technology is unfortunately negatively affected by having to support hundreds of models from legacy auto companies, resulting in a very high engineering drag coefficient,” the billionaire tech industrialist told reporters today at the Gigafactory battery plant. “Tesla is laser-focused on achieving full self-driving capability on one integrated platform with an order of magnitude greater safety than the average manually driven car.”
ai  hardware  marketing  innovation 
october 2016 by janpeuker
Can we open the black box of AI? : Nature News & Comment
Ultimately, these researchers argue, the complex answers given by machine learning have to be part of science's toolkit because the real world is complex: for phenomena such as the weather or the stock market, a reductionist, synthetic description might not even exist. “There are things we cannot verbalize,” says Stéphane Mallat, an applied mathematician at the École Polytechnique in Paris. “When you ask a medical doctor why he diagnosed this or this, he's going to give you some reasons,” he says. “But how come it takes 20 years to make a good doctor? Because the information is just not in books.”
ai  research  society 
october 2016 by janpeuker
Research Blog: How Robots Can Acquire New Skills from Their Shared Experience
We have a lot of intuition about how various manipulation skills can be performed, and it only seems natural that transferring this intuition to robots can help them learn these skills a lot faster. In the next experiment, we provided each robot with a different door, and guided each of them by hand to show how these doors can be opened. These demonstrations are encoded into a single combined strategy for all robots, called a policy. The policy is a deep neural network which converts camera images to robot actions, and is maintained on a central server.
google  ai  robotics 
october 2016 by janpeuker
Venture capitalist Marc Andreessen explains how AI will change the world - Vox
The reason is because it's not a feature; it's a totally new architecture. The drone has to be built on AI from the ground up. The bet that DJI and other drone makers are making is that it's a feature. The bet that we’re making is that it requires a brand new architecture.

That's an example of fundamental reinvention. If our thesis on that is right, then all the existing drones become obsolete. They just don't matter because they can't do the thing that actually matters.

If you talk to the automakers, they all think that autonomy is a feature they're going to add to their cars. The Silicon Valley companies think it's a brand new architecture. It’s a bottom-up reinvention of the fundamental assumptions about how these things work.
ai  economics  article 
october 2016 by janpeuker