warrenellis + ai (5)

An Open Letter To Everyone Tricked Into Fearing Artificial Intelligence | Popular Science
"The history of AI research is full of theoretical benchmarks and milestones whose only barrier appeared to be a lack of computing resources. And yet, even as processor and storage technology has raced ahead of researchers' expectations, the deadlines for AI's most promising (or terrifying, depending on your agenda) applications remain stuck somewhere in the next 10 or 20 years. I've written before about the myth of inevitable superintelligence, but Selman is much more succinct on the subject. The key mistake, he says, is in confusing principle with execution, and assuming that throwing more resources at given system will trigger an explosive increase in capability."
ai 
9 weeks ago by warrenellis
Stanford to host 100-year study on artificial intelligence | KurzweilAI
"Stanford University has invited leading thinkers from several institutions to begin a 100-year effort to study and anticipate how the effects of artificial intelligence on every aspect of how people work, live, and play."
ai 
december 2014 by warrenellis
Deep neural network rivals primate brain in object recognition | KurzweilAI
"A new study from MIT neuroscientists has found that for the first time, one of the latest generation of “deep neural networks” matches the ability of the primate brain to recognize objects during a brief glance."
tech  ai  comp 
december 2014 by warrenellis
Cycorp AI - Business Insider
"If computers were human," Lenat told us, "they'd present themselves as autistic, schizophrenic, or otherwise brittle. It would be unwise or dangerous for that person to take care of children and cook meals, but it's on the horizon for home robots. That's like saying, 'We have an important job to do, but we're going to hire dogs and cats to do it.'"
tech  ai  comp 
july 2014 by warrenellis
Roko's basilisk - RationalWiki
"According to the proposition, it is possible that this ultimate (future godlike artificial) intelligence may punish those who fail to help it, with greater punishment accorded those who knew the importance of the task. This is conventionally comprehensible, but the notable bit of the basilisk and similar constructions is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles."
ai  future  mad  funny 
february 2013 by warrenellis
