The impossibility of intelligence explosion – François Chollet – Medium


68 bookmarks. First posted by romac 12 weeks ago.


In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI):
ifttt  twitter 
18 days ago by marshallk
The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false. Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology. Our brains themselves were never a significant bottleneck in the AI-design process.
intelligence  ai  systems  complexity  society  culture  singularity  cognitive  ability  politics  reallygood 
28 days ago by timcowlishaw
“The impossibility of intelligence explosion” by François Chollet
from twitter
9 weeks ago by brandizzi
ai 
10 weeks ago by twleung
Transcendence (2014 science-fiction movie) In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial…
from instapaper
10 weeks ago by paryshnikov
10 weeks ago by jkleske
The impossibility of intelligence explosion –

"intelligence is fundamentally linked to spe…
from twitter
10 weeks ago by TaylorPearson
New post: the impossibility of intelligence explosion
calibre-recipe 
10 weeks ago by personalnadir
from instapaper
10 weeks ago by matttrent
to_share 
10 weeks ago by puzzlement
getpocket 
10 weeks ago by linkt
If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt


Ah!
#ml  %contrarian 
10 weeks ago by lemeb
pocket 
10 weeks ago by jburkunk
Intelligent piece about A.I.; read it to get some peace of mind on the subject:
from twitter
11 weeks ago by onedaycompany
Pocket 
11 weeks ago by nildram
Sensible, but disappointing that it has to be said.
(Even shorter: I. J. Good apparently forgot that a monotonically increasing sequence can have a finite limit!)
artificial_intelligence  debunking  rapture_for_nerds 
11 weeks ago by cshalizi
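The finite-limit remark above can be sketched numerically. A minimal toy model (purely illustrative, with made-up numbers): if each round of "recursive self-improvement" yields geometrically shrinking gains, capability increases at every step yet stays bounded forever.

```python
# Toy model: each improvement round yields half the previous gain.
# Total added capability is the geometric series 1 + 1/2 + 1/4 + ...,
# which sums to 2 - so capability climbs forever but never reaches 3.

capability = 1.0
gain = 1.0
history = []
for step in range(50):
    capability += gain
    gain *= 0.5          # diminishing returns on each round
    history.append(capability)

# Monotonically increasing at every step...
assert all(b > a for a, b in zip(history, history[1:]))
# ...yet bounded by a finite limit (1 + 2 = 3).
assert history[-1] < 3.0
```

A monotonically increasing sequence of capabilities is therefore perfectly compatible with there being no "explosion" at all.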
AI, in this sense, is no different than computers, or books, or language itself: it’s a technology that empowers our civilization
ai  MachineLearning  intelligence 
11 weeks ago by guyl
This here is sense.
machinelearning  ai  sense 
11 weeks ago by yorksranter
We are not individuals:

'Intelligence is fundamentally situational.'

The impossibility of intelligence explosion – François Chollet – Medium https://t.co/QtLBjkoPyG

— Emrys Schoemaker (@emrys_s) December 3, 2017
IFTTT  Twitter 
11 weeks ago by semrys
argument that "general intelligence" is impossible
philosophy  ai  software 
11 weeks ago by noisesmith
11 weeks ago by kmarchand
from instapaper
11 weeks ago by louderthan10
RT : New post: the impossibility of intelligence explosion
from twitter
11 weeks ago by abolibibelot
from instapaper
11 weeks ago by rboone
from twitter_favs
11 weeks ago by schraeds
"In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human."

"Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. It gets increasingly harder to keep up with the firehose of new publications. Remember that a network with N nodes has N * (N - 1) / 2 edges."
AI  singularity  science 
11 weeks ago by elrob
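The N * (N - 1) / 2 figure quoted above is just the number of unordered pairs in a fully connected network; a quick check (illustrative only, the `edge_count` helper is hypothetical):

```python
from itertools import combinations

def edge_count(n):
    """Possible collaboration links among n researchers:
    one edge per unordered pair of nodes."""
    return len(list(combinations(range(n), 2)))

for n in (2, 10, 100):
    assert edge_count(n) == n * (n - 1) // 2

# Edges grow quadratically while nodes grow only linearly -
# the coordination-overhead point the quote is making.
```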
Also worth reading, this one by - a good antidote to the "AI IS COMING" trend
from twitter
11 weeks ago by classy.dk

“I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.” — Stephen Jay Gould

Primarily, because the usefulness of software is fundamentally limited by the context of its application — much like intelligence is both defined and limited by the context in which it expresses itself. Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.
intelligence  ai  human 
11 weeks ago by WimLeers
The impossibility of intelligence explosion …
from twitter_favs
11 weeks ago by linkmachinego
"You cannot dissociate intelligence from the context in which it expresses itself."
from twitter_favs
12 weeks ago by briantrice
Is it merely coincidence that and dropped on the same day?
from twitter_favs
12 weeks ago by saxwell
François Chollet:
<p>What would happen if we were to put a freshly-created human brain in the body of an octopus, and let in live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but given the extent to which our most fundamental behaviors and early learning patterns are hard-coded, chances are this human brain would not display any intelligent behavior, and would quickly die off. Not so smart now, Mr. Brain.

What would happen if we were to put a human — brain and body — into an environment that does not feature human culture as we know it? Would Mowgli the man-cub, raised by a pack of wolves, grow up to outsmart his canine siblings? To be smart like us? And if we swapped baby Mowgli with baby Einstein, would he eventually educate himself into developing grand theories of the universe? Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any intelligence beyond basic animal-like survival behaviors. As adults, they cannot even acquire language.

If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.</p>
ai  artificialintelligence 
12 weeks ago by charlesarthur
IFTTT  Pocket 
12 weeks ago by Pheelmore
IFTTT  Pocket 
12 weeks ago by timothyarnold
from twitter_favs
12 weeks ago by romac
from twitter_favs
12 weeks ago by oismail91
from twitter_favs
12 weeks ago by amerberg