jm + nvidia (3)

The Dark Secret at the Heart of AI - MIT Technology Review
'The mysterious mind of [Nvidia's self-driving car, driven by machine learning] points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”'
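Not from the article, but a minimal sketch of what "opening the black box" can look like in practice: permutation feature importance with scikit-learn, applied to an opaque model. The dataset and model here are illustrative assumptions, standing in for a real decision system like the loan or parole models the article describes.

# A minimal sketch (illustrative assumptions throughout): probe a
# "black box" classifier by shuffling one input feature at a time
# and measuring how much held-out accuracy drops.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision problem (loans, parole, hiring).
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ensemble model plays the role of the "black box".
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times; a large accuracy drop means the
# model leans heavily on that feature, even if we can't see how.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")

Techniques like this only reveal *which* inputs a model depends on, not *why*, which is exactly the gap the article argues deep learning widens.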
ai  algorithms  ml  machine-learning  legibility  explainability  deep-learning  nvidia 
19 days ago by jm
Zeynep Tufekci: Machine intelligence makes human morals more important | TED Talk | TED.com
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns — and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."


More relevant now that Nvidia are trialing ML-based self-driving cars in the US...
nvidia  ai  ml  machine-learning  scary  zeynep-tufekci  via:maciej  technology  ted-talks 
4 weeks ago by jm
NVIDIA SHIELD Android TV Pro
'Best Plex Media Server' -- this looks pretty superb for EUR240 or thereabouts
media-servers  plex  video  home  tv  toget  nvidia  shield  android 
july 2016 by jm