Software 2.0 – Andrej Karpathy – Medium


94 bookmarks. First posted by kartik in November 2017.



In contrast, Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input output pairs of examples) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints. In the case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent.

It is very agile. If you had C++ code and someone wanted you to make it twice as fast (at the cost of performance if needed), it would be highly non-trivial to tune the system for the new spec. However, in Software 2.0 we can take our network, remove half of the channels, retrain, and there — it runs exactly at twice the speed and works a bit worse. It’s magic. Conversely, if you happen to get more data/compute, you can immediately make your program work better just by adding more channels and retraining.
Modules can meld into an optimal whole. Our software is often decomposed into modules that communicate through public functions, APIs, or endpoints. However, if two Software 2.0 modules that were originally trained separately interact, we can easily backpropagate through the whole. Think about how amazing it could be if your web browser could automatically re-design the low-level system instructions 10 stacks down to achieve a higher efficiency in loading web pages. With 2.0, this is the default behavior.

And Software 3.0? That will be entirely up to the AGI.
programming  computerscience  history  future  2017  teslamotors 
21 days ago by WimLeers
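The "search the program space" idea in the excerpt above can be made concrete with a toy sketch (my illustration, not from the article): the desired program y = 2x + 1 is never written by hand; its behavior is specified as input-output pairs, and plain gradient descent finds the weights.

```python
# Toy "Software 2.0" sketch (an illustration, not from the article):
# instead of hand-writing the program y = 2x + 1, we specify its desired
# behavior as input-output pairs and let gradient descent search the
# (two-dimensional) weight space for a program that satisfies them.

data = [(float(x), 2.0 * x + 1.0) for x in range(-5, 6)]  # the "spec"

w, b = 0.0, 0.0   # the entire "program" is these two weights
lr = 0.01         # learning rate

for step in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y                # error on one example
        grad_w += 2.0 * err * x / len(data)  # d(MSE)/dw
        grad_b += 2.0 * err / len(data)      # d(MSE)/db
    w -= lr * grad_w                         # gradient descent step
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges near w=2.0, b=1.0
```

Real networks have millions of weights rather than two, but the search procedure is the same shape: a loss over examples, gradients, and repeated small updates.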
I sometimes see people refer to neural networks as just “another tool in your machine learning toolbox”. They have some pros and cons, they work here or there, and sometimes you can use them to win Kaggle competitions. via Pocket
IFTTT  Pocket 
6 weeks ago by arronpj
Quote: "Software 2.0 is not going to replace 1.0 (indeed, a large amount of 1.0 infrastructure is needed for training and inference to “compile” 2.0 code), but it is going to take over increasingly large portions of what Software 1.0 is responsible for today. Let’s examine some examples of the ongoing transition to make this more concrete..." The ".. increasingly large portions of what Software 1.0 is.." seems like a stretch since the vast majority of code in the world isn't doing speech recogn...
ai  programming  software  machinelearning  future  ml 
12 weeks ago by ajohnson1200
Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0.
software  essay 
november 2017 by danmichaelo
The “classical stack” of Software 1.0 is what we’re all familiar with. It consists of explicit instructions to the computer written by a programmer. In contrast, Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard.

Benefits:
1. Computationally homogeneous.
2. Simple to bake into silicon.
3. Constant running time.
4. Constant memory use.
5. It is highly portable.
6. It is very agile.
7. Modules can meld into an optimal whole.
8. It is easy to pick up.
9. It is better than you.

Limitations:
1. At the end of the optimization we’re left with large networks that work well, but it’s very hard to tell how.
2. The 2.0 stack can fail in unintuitive and embarrassing ways, or worse, it can “silently fail”, e.g., by silently adopting biases in its training data.
3. Finally, we’re still discovering some of the peculiar properties of this stack. For instance, the existence of adversarial examples and attacks.
programming  ai  benefits  limitations 
november 2017 by drmeme
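Benefit 7 above, "modules can meld into an optimal whole," can be illustrated with a minimal, hypothetical sketch: two one-weight linear "modules" are composed, and because the chain rule carries the gradient through both, joint training tunes the pair as a single whole rather than optimizing each side of a fixed interface.

```python
# Hypothetical sketch of "modules can meld" (an illustration, not from the
# article): two one-weight linear "modules" are composed, and the chain rule
# carries the gradient through both, so joint training tunes the pair as a
# single whole rather than each module behind its own interface.

def forward(a, c, x):
    h = a * x        # module 1 ("feature extractor"), weight a
    return c * h     # module 2 ("predictor"), weight c

data = [(x, 6.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]  # want a*c == 6

a, c = 1.0, 1.5      # weights chosen separately, then trained jointly
lr = 0.01

for step in range(3000):
    grad_a = grad_c = 0.0
    for x, y in data:
        err = forward(a, c, x) - y
        grad_a += 2.0 * err * c * x / len(data)  # chain rule through module 2
        grad_c += 2.0 * err * a * x / len(data)  # gradient for module 2's weight
    a -= lr * grad_a
    c -= lr * grad_c

print(round(a * c, 2))  # the composed program converges to a*c = 6.0
```

Neither module alone is asked to hit the target; the end-to-end loss shapes both weights jointly, which is the "backpropagate through the whole" behavior the list describes.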
I sometimes see people refer to neural networks as just “another tool in your machine learning toolbox”. They have some pros and cons, they work here or there, and sometimes you can use them to win Kaggle competitions. Unfortunately, this interpretation completely misses the forest for the trees. Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0.
software  ai  programming  future  neural_networks  machine_learning 
november 2017 by archangel
New blog post: "Software 2.0"
november 2017 by dpl
RT ehddn1 : Tesla AI director Andrej Karpathy on Software 2.0. Somewhat long, but worth reading for the range of views it offers on the future evolution of software.. http://bit.ly/2jt5tD3 November 15, 2017 at 07:26AM http://twitter.com/ehddn1/status/930562531252363266
IFTTT  Twitter  ththlink 
november 2017 by seoulrain
It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program.

If you think of neural networks as a software stack and not just a pretty good classifier, it becomes quickly apparent that they have a huge number of advantages and a lot of potential for transforming software in general.
development  !publish 
november 2017 by zephyr777
In the future, humans will exist to provide training data for neural nets.
engineering  machinelearning  from iphone
november 2017 by danielbachhuber
Andrej Karpathy on how data-taught self-coding networks will write software of the future:
<p>Software 2.0 is not going to replace 1.0 (indeed, a large amount of 1.0 infrastructure is needed for training and inference to “compile” 2.0 code), but it is going to take over increasingly large portions of what Software 1.0 is responsible for today. Let’s examine some examples of the ongoing transition to make this more concrete:

<strong>Visual Recognition</strong> used to consist of engineered features with a bit of machine learning sprinkled on top at the end (e.g., SVM). Since then, we developed the machinery to discover much more powerful image analysis programs (in the family of ConvNet architectures), and more recently we’ve begun searching over architectures.

<strong>Speech recognition</strong> used to involve a lot of preprocessing, Gaussian mixture models and hidden Markov models, but today consists almost entirely of neural net stuff.

<strong>Speech synthesis</strong> has historically been approached with various stitching mechanisms, but today the state of the art models are large convnets (e.g. WaveNet) that produce raw audio signal outputs.

<strong>Machine Translation</strong> has usually been approached with phrase-based statistical techniques, but neural networks are quickly becoming dominant. My favorite architectures are trained in the multilingual setting, where a single model translates from any source language to any target language, and in weakly supervised (or entirely unsupervised) settings.

<strong>Robotics</strong> has a long tradition of breaking down the problem into blocks of sensing, pose estimation, planning, control, uncertainty modeling etc., using explicit representations and algorithms over intermediate representations. We’re not quite there yet, but research at UC Berkeley and Google hint at the fact that Software 2.0 may be able to do a much better job of representing all of this code.

<strong>Games:</strong> Go playing programs have existed for a long while, but AlphaGo Zero (a ConvNet that looks at the raw state of the board and plays a move) has now become by far the strongest player of the game. I expect we’re going to see very similar results in other areas, e.g. DOTA 2, or StarCraft.</p>
ai  programming 
november 2017 by charlesarthur
It is better than you. Finally, and most importantly, a neural network is a better piece of code than anything you or I can come up with in a large fraction of valuable verticals, which currently at the very least involve anything to do with images/video, sound/speech, and text.
AI 
november 2017 by kristofger
Are neural networks Software 2.0? A more or less standard network replaces hand-coded algorithms:
from twitter_favs
november 2017 by rukku
Bold text, wonder how it will read in a few years.
from twitter
november 2017 by moritz_stefaner