
Zeynep Tufekci: Machine intelligence makes human morals more important | TED Talk
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns — and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."

More relevant now that Nvidia is trialing ML-based self-driving cars in the US...
nvidia  ai  ml  machine-learning  scary  zeynep-tufekci  via:maciej  technology  ted-talks 
6 days ago by jm
Reddit comments from a nuclear-power expert
Reddit user "Hiddencamper" is a senior nuclear reactor operator in the US, and regularly posts very knowledgeable comments about reactor operations, safety procedures, and other details. It's fascinating (via Maciej)
via:maciej  nuclear-power  nuclear  atomic  power  energy  safety  procedures  operations  history  chernobyl  scram 
august 2015 by jm
The Titanium Gambit | History | Air & Space Magazine
Amazing story of 1960s detente via Maciej: 'During the Cold War, Boeing execs got a strange call from the State Department: Would you guys mind trading secrets with the Russians?'
via:maciej  titanium  history  cold-war  detente  ussr  usa  boeing  russia  aerospace 
july 2015 by jm
Roko's basilisk - RationalWiki
Wacky transhumanists.
Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted. Eliezer Yudkowsky, founder of LessWrong, considers that the basilisk would not work, but will not explain why, because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.

Silly over-extrapolations of local memes are posted to LessWrong quite a lot; almost all are just downvoted and ignored. But Yudkowsky reacted hugely to this one, then doubled down on his reaction. Thanks to the Streisand effect, discussion of the basilisk and the details of the affair soon spread outside of LessWrong. The entire affair is a worked example of spectacular failure at community management and at controlling purportedly dangerous information.

Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[6]
transhumanism  funny  insane  stupid  singularity  ai  rokos-basilisk  via:maciej  lesswrong  rationalism  superintelligences  streisand-effect  absurd 
march 2013 by jm