The impossibility of intelligence explosion – François Chollet – Medium


73 bookmarks. First posted by romac, november 2017.


From the previous article
AI 
12 weeks ago by Qin
In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI):
february 2018 by marshallk
The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false. Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology. Our brains themselves were never a significant bottleneck in the AI-design process.
intelligence  ai  systems  complexity  society  culture  singularity  cognitive  ability  politics  reallygood 
january 2018 by timcowlishaw
New post: the impossibility of intelligence explosion
calibre-recipe 
december 2017 by personalnadir
If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.


Ah!
contrarian  ml 
december 2017 by lemeb
Intelligent piece about A.I.; read it to put your mind at ease about all this:
december 2017 by onedaycompany
Sensible, but disappointing that it has to be said.
(Even shorter: I. J. Good apparently forgot that a monotonically increasing sequence can have a finite limit!)
artificial_intelligence  debunking  rapture_for_nerds 
december 2017 by cshalizi
AI, in this sense, is no different than computers, or books, or language itself: it’s a technology that empowers our civilization
ai  MachineLearning  intelligence 
december 2017 by guyl
This here is sense.
machinelearning  ai  sense 
december 2017 by yorksranter
We are not individuals:

'Intelligence is fundamentally situational.'

The impossibility of intelligence explosion – François Chollet – Medium https://t.co/QtLBjkoPyG

— Emrys Schoemaker (@emrys_s) December 3, 2017
december 2017 by semrys
argument that "general intelligence" is impossible
philosophy  ai  software 
november 2017 by noisesmith
"In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human."

"Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. It gets increasingly harder to keep up with the firehose of new publications. Remember that a network with N nodes has N * (N - 1) / 2 edges."
AI  singularity  science 
november 2017 by elrob
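The N * (N - 1) / 2 figure in the quote above is just the edge count of a complete graph. A quick sketch (illustrative numbers only) of how fast the pairwise-communication burden grows with field size:

```python
# A fully connected network of N researchers has N * (N - 1) / 2 pairwise
# channels, so coordination cost grows roughly with the square of N.
def pairwise_edges(n):
    """Number of edges in a complete graph on n nodes."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_edges(n))
# 10x more researchers -> roughly 100x more pairwise channels
```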
Also worth reading, this one by - a good antidote to the AI IS COMING trend
november 2017 by classy.dk

“I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.” — Stephen Jay Gould

Primarily, because the usefulness of software is fundamentally limited by the context of its application — much like intelligence is both defined and limited by the context in which it expresses itself. Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.
intelligence  ai  human 
november 2017 by WimLeers
"You cannot dissociate intelligence from the context in which it expresses itself."
november 2017 by briantrice
Is it merely coincidence that and dropped on the same day?
november 2017 by saxwell
François Chollet:
What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but given the extent to which our most fundamental behaviors and early learning patterns are hard-coded, chances are this human brain would not display any intelligent behavior, and would quickly die off. Not so smart now, Mr. Brain.

What would happen if we were to put a human — brain and body — into an environment that does not feature human culture as we know it? Would Mowgli the man-cub, raised by a pack of wolves, grow up to outsmart his canine siblings? To be smart like us? And if we swapped baby Mowgli with baby Einstein, would he eventually educate himself into developing grand theories of the universe? Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any intelligence beyond basic animal-like survival behaviors. As adults, they cannot even acquire language.

If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.
ai  artificialintelligence 
november 2017 by charlesarthur