
Zeynep Tufekci: Machine intelligence makes human morals more important | TED Talk | TED.com
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns — and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."


More relevant now that Nvidia are trialing ML-based self-driving cars in the US...
nvidia  ai  ml  machine-learning  scary  zeynep-tufekci  via:maciej  technology  ted-talks 
7 days ago by jm
Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white-looking women with trendy haircuts are “brand promoters”.
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faceception  discrimination  computer-says-no 
6 weeks ago by jm
When DNNs go wrong – adversarial examples and what we can learn from them
Excellent paper.
[The] results suggest that classifiers based on modern machine learning techniques, even those that obtain excellent performance on the test set, are not learning the true underlying concepts that determine the correct output label. Instead, these algorithms have built a Potemkin village that works well on naturally occurring data, but is exposed as a fake when one visits points in space that do not have high probability in the data distribution.
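
(To make the Potemkin-village point concrete, here's a toy numpy sketch of the fast gradient sign method the paper discusses -- a linear classifier rather than a DNN, so everything here is illustrative, but the mechanism is the same: nudge every input dimension a small step against the gradient and the label flips.)

```python
# Toy FGSM sketch (illustrative only): a "trained" linear classifier,
# an input it classifies correctly, and a small adversarial perturbation.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(784) / 28.0        # stand-in for trained weights
x = rng.standard_normal(784)               # a "naturally occurring" input
label = 1.0 if w @ x > 0 else -1.0         # whatever the model already says

eps = 0.25                                 # small per-dimension budget
x_adv = x - eps * label * np.sign(w)       # step against the margin's gradient

print("clean margin:      ", label * (w @ x))      # positive: classified right
print("adversarial margin:", label * (w @ x_adv))  # negative: fooled
```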
ai  deep-learning  dnns  neural-networks  adversarial-classification  classification  classifiers  machine-learning  papers 
8 weeks ago by jm
Zeynep Tufekci: "Youtube is a crucial part of the misinformation ecology"
This is so spot on. I hope Google address this issue --
YouTube is a crucial part of the misinformation ecology. Not just a demand issue: its recommender algo is a "go down the rabbit hole" machine.
You watch a Trump rally: you get suggested white supremacist videos, sometimes, auto-playing. Like a gateway drug theory of engagement.
I've seen this work across the political spectrum. YouTube algo has discovered out-flanking and "red-pilling" is.. engaging. So it does.


This thread was in response to this Buzzfeed article on the same topic: https://www.buzzfeed.com/josephbernstein/youtube-has-become-the-content-engine-of-the-internets-dark
youtube  nazis  alt-right  lies  politics  google  misinformation  recommendations  ai  red-pill 
8 weeks ago by jm
Toyota's Gill Pratt: "No one is close to achieving true level 5 [self-driving cars]"
The most important thing to understand is that not all miles are the same. Most miles that we drive are very easy, and we can drive them while daydreaming or thinking about something else or having a conversation. But some miles are really, really hard, and so it’s those difficult miles that we should be looking at: How often do those show up, and can you ensure on a given route that the car will actually be able to handle the whole route without any problem at all? Level 5 autonomy says all miles will be handled by the car in an autonomous mode without any need for human intervention at all, ever.

So if we’re talking to a company that says, “We can do full autonomy in this pre-mapped area and we’ve mapped almost every area,” that’s not Level 5. That’s Level 4. And I wouldn’t even stop there: I would ask, “Is that at all times of the day, is it in all weather, is it in all traffic?” And then what you’ll usually find is a little bit of hedging on that too. The trouble with this Level 4 thing, or the “full autonomy” phrase, is that it covers a very wide spectrum of possible competencies. It covers “my car can run fully autonomously in a dedicated lane that has no other traffic,” which isn’t very different from a train on a set of rails, to “I can drive in Rome in the middle of the worst traffic they ever have there, while it’s raining,” which is quite hard.

Because the “full autonomy” phrase can mean such a wide range of things, you really have to ask the question, “What do you really mean, what are the actual circumstances?” And usually you’ll find that it’s geofenced for area, it may be restricted by how much traffic it can handle, for the weather, the time of day, things like that. So that’s the elaboration of why we’re not even close.
autonomy  driving  self-driving  cars  ai  robots  toyota  weather 
january 2017 by jm
For World’s Newest Scrabble Stars, SHORT Tops SHORTER
Nigeria's Scrabble team are kicking ass with short-word strats.
“ ‘What would the robot do?’ is now the key question in Scrabble,” said Mr. Fatsis. Often, he said, the robot plays five letters: “There are inefficiencies in the game that you can exploit by having a mastery of those intermediate-length words.”
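
(For flavour, here's a toy sketch of what "mastery of those intermediate-length words" might look like computationally: rank five-letter words by how easy their letters are to draw from the tile bag. The word-list path and the scoring are my own assumptions, not anything from the article.)

```python
# Toy "what would the robot do" sketch: rank five-letter words by how
# likely you are to be holding their letters, using standard English
# Scrabble tile counts. Word-list path is an assumption.
from collections import Counter

TILES = Counter({'e': 12, 'a': 9, 'i': 9, 'o': 8, 'n': 6, 'r': 6, 't': 6,
                 'l': 4, 's': 4, 'u': 4, 'd': 4, 'g': 3, 'b': 2, 'c': 2,
                 'm': 2, 'p': 2, 'f': 2, 'h': 2, 'v': 2, 'w': 2, 'y': 2,
                 'k': 1, 'j': 1, 'x': 1, 'q': 1, 'z': 1})

def drawability(word):
    need = Counter(word)
    if any(TILES[c] < n for c, n in need.items()):
        return 0                      # needs more copies than the bag holds
    score = 1
    for c, n in need.items():
        score *= TILES[c] ** n        # crude proxy: common letters score high
    return score

words = {w.strip().lower() for w in open('/usr/share/dict/words')
         if len(w.strip()) == 5 and w.strip().isalpha()}
print(sorted(words, key=drawability, reverse=True)[:10])
```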
games  scrabble  nigeria  ai  word-play  strats 
may 2016 by jm
DeepMind founder Demis Hassabis on how AI will shape the future | The Verge
Good interview with Demis Hassabis on DeepMind, AlphaGo and AI:
I’d like to see AI-assisted science where you have effectively AI research assistants that do a lot of the drudgery work and surface interesting articles, find structure in vast amounts of data, and then surface that to the human experts and scientists who can make quicker breakthroughs. I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no-one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle.
ai  deepmind  google  alphago  demis-hassabis  cern  future  machine-learning 
march 2016 by jm
The NSA’s SKYNET program may be killing thousands of innocent people
Death by Random Forest: this project is a horrible misapplication of machine learning. Truly appalling, when a false positive means death:

The NSA evaluates the SKYNET program using a subset of 100,000 randomly selected people (identified by their MSIDN/MSI pairs of their mobile phones), and a known group of seven terrorists. The NSA then trained the learning algorithm by feeding it six of the terrorists and tasking SKYNET to find the seventh. This data provides the percentages for false positives in the slide above.

"First, there are very few 'known terrorists' to use to train and test the model," Ball said. "If they are using the same records to train the model as they are using to test the model, their assessment of the fit is completely bullshit. The usual practice is to hold some of the data out of the training process so that the test includes records the model has never seen before. Without this step, their classification fit assessment is ridiculously optimistic."

The reason is that the 100,000 citizens were selected at random, while the seven terrorists are from a known cluster. Under the random selection of a tiny subset of less than 0.1 percent of the total population, the density of the social graph of the citizens is massively reduced, while the "terrorist" cluster remains strongly interconnected. Scientifically-sound statistical analysis would have required the NSA to mix the terrorists into the population set before random selection of a subset—but this is not practical due to their tiny number.

This may sound like a mere academic problem, but, Ball said, is in fact highly damaging to the quality of the results, and thus ultimately to the accuracy of the classification and assassination of people as "terrorists." A quality evaluation is especially important in this case, as the random forest method is known to overfit its training sets, producing results that are overly optimistic. The NSA's analysis thus does not provide a good indicator of the quality of the method.
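
(A minimal scikit-learn sketch of the train/test flaw Ball describes, on synthetic data: give a random forest labels that are pure noise, and the fit looks superb as long as you score it on its own training set.)

```python
# Synthetic demo: random labels, so there is genuinely nothing to learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 20))            # stand-in for call-metadata features
y = rng.integers(0, 2, 2000)          # labels uncorrelated with anything

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("train accuracy:", rf.score(X_tr, y_tr))  # ~1.0: "ridiculously optimistic"
print("test accuracy: ", rf.score(X_te, y_te))  # ~0.5: the honest coin-flip
```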
terrorism  surveillance  nsa  security  ai  machine-learning  random-forests  horror  false-positives  classification  statistics 
february 2016 by jm
Schneier on Automatic Face Recognition and Surveillance
When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It's sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and -- most of all -- fast and accurate face recognition software.

Don't expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It's just for those who can either demand or pay for access to the required technologies -- most importantly, the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we're moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren't going away, and we can't uninvent these capabilities. But we can ensure that they're used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.
privacy  regulation  surveillance  bruce-schneier  faces  face-recognition  machine-learning  ai  cctv  photos 
october 2015 by jm
jwz on Inceptionism
"Shoggoth ovipositors":
So then they reach inside to one of the layers and spin the knob randomly to fuck it up. Lower layers are edges and curves. Higher layers are faces, eyes and shoggoth ovipositors. [....] But the best part is not when they just glitch an image -- which is a fun kind of embossing at one end, and the "extra eyes" filter at the other -- but is when they take a net trained on some particular set of objects and feed it static, then zoom in, and feed the output back in repeatedly. That's when you converge upon the platonic ideal of those objects, which -- it turns out -- tend to be Giger nightmare landscapes. Who knew. (I knew.)


This stuff is still boggling my mind. All those doggy faces! That is one dog-obsessed ANN.
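
(The feedback loop jwz describes is roughly what Google later released as DeepDream. A hedged PyTorch sketch, assuming torch/torchvision are installed; the layer index, step size and zoom factor are arbitrary choices of mine, not Google's.)

```python
# Gradient-ascent "inceptionism" sketch: amplify whatever a mid-level
# layer of a pretrained VGG16 responds to, then zoom in and repeat.
import torch
import torch.nn.functional as F
from torchvision import models

net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def dream(img, layer=20, steps=20, lr=0.05):
    img = img.clone().requires_grad_(True)
    for _ in range(steps):
        act = img
        for i, module in enumerate(net):
            act = module(act)
            if i == layer:
                break
        act.norm().backward()              # maximise this layer's activation
        with torch.no_grad():
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
    return img.detach()

img = torch.rand(1, 3, 224, 224)            # "feed it static"
for _ in range(5):                          # "feed the output back in repeatedly"
    img = dream(img)
    img = F.interpolate(img[:, :, 16:208, 16:208], size=(224, 224),
                        mode="bilinear", align_corners=False)
```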
neural-networks  ai  jwz  funny  shoggoths  image-recognition  hr-giger  art  inceptionism 
june 2015 by jm
Volvo says horrible 'self-parking car accident' happened because driver didn't have 'pedestrian detection'
Grim meathook future, courtesy of Volvo:
“The Volvo XC60 comes with City Safety as a standard feature however this does not include the Pedestrian detection functionality [...] The pedestrian detection feature [...] costs approximately $3,000.”


However, there's another lesson here, in crappy car UX and the risks thereof:
But even if it did have the feature, Larsson says the driver would have interfered with it by the way they were driving and “accelerating heavily towards the people in the video.” “The pedestrian detection would likely have been inactivated due to the driver inactivating it by intentionally and actively accelerating,” said Larsson. “Hence, the auto braking function is overrided by the driver and deactivated.” Meanwhile, the people in the video seem to ignore their instincts and trust that the car assumed to be endowed with artificial intelligence knows not to hurt them. It is a sign of our incredible faith in the power of technology, but also, it’s a reminder that companies making AI-assisted vehicles need to make safety features standard and communicate clearly when they aren’t.
self-driving-cars  cars  ai  pedestrian  computer-vision  volvo  fail  accidents  grim-meathook-future 
may 2015 by jm
Automating Tinder with Eigenfaces
While my friends were getting sucked into "swiping" all day on their phones with Tinder, I eventually got fed up and designed a piece of software that automates everything on Tinder.


This is awesome. (via waxy)
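
(The core of it is old-school: PCA "eigenfaces" plus k-nearest-neighbour over your own swipe history. A minimal scikit-learn sketch of that pipeline, with random arrays standing in for the real aligned face crops.)

```python
# Eigenfaces + k-NN sketch: project flattened face images into PCA space,
# then predict like/dislike from the nearest labelled neighbours.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))     # stand-in for aligned greyscale crops
likes = rng.integers(0, 2, 200)        # stand-in for your swipe history

pca = PCA(n_components=40).fit(faces)  # the "eigenfaces" basis
knn = KNeighborsClassifier(n_neighbors=5).fit(pca.transform(faces), likes)

new_face = rng.random((1, 64 * 64))
print("swipe right" if knn.predict(pca.transform(new_face))[0] else "swipe left")
```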
via:waxy  tinder  eigenfaces  machine-learning  k-nearest-neighbour  algorithms  automation  ai 
february 2015 by jm
IBM's creepy AI cyberstalking plans
'let's say that you tweet that you've gotten a job offer to move to San Francisco. Using IBM's linguistic analysis technologies, your bank would analyze your Twitter feed and not only tailor services it could offer you ahead of the move--for example, helping you move your account to another branch, or offering you a loan for a new house -- but also judge your psychological profile based upon the tone of your messages about the move, giving advice to your bank's representatives about the best way to contact you.'


Ugh. Here's hoping they've patented this shit so we don't actually have to suffer through it. Creeeepy. (via Adam Shostack)
datamining  ai  ibm  stupid-ideas  creepy  stalking  twitter  via:adamshostack 
february 2014 by jm
Meet the Robot Telemarketer Who Denies She's A Robot
Florida's spammers strike again - pushing the boundaries of intrusive direct sales and marketing
florida  ai  spam  direct-marketing  bots  sales  health-insurance 
december 2013 by jm
The New York Review of Bots
'Welcome to the New York Review of Bots, a professional journal of automated-agent studies. We aspire to the highest standards of rigorous analysis, but will often just post things we liked that a computer made.'
robots  bots  tumblr  ai  word-frequency  markov-chain  random  twitter 
october 2013 by jm
Forecast Blog
Forecast.io are doing such a great job of applying modern machine-learning to traditional weather data. "Quicksilver" is their neural-net-adjusted global temperature geodata, and here's how it's built
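
(I don't know the internals beyond what the post describes, but the basic shape -- a small neural net nudging raw gridded forecasts toward ground-truth station readings -- sketches out something like this; the features, shapes and synthetic data are all my assumptions.)

```python
# Bias-correction sketch: learn to adjust a raw model temperature using
# location features and ground-truth observations. Data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# features: [lat, lon, elevation, raw_model_temp]
X = rng.random((5000, 4))
y = X[:, 3] + 0.1 * rng.standard_normal(5000)  # stand-in for station readings

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                   random_state=0).fit(X, y)
print(net.predict(X[:5]))                      # adjusted temps for 5 grid cells
```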
quicksilver  forecast  forecast.io  neural-networks  ai  machine-learning  algorithms  weather  geodata  earth  temperature 
august 2013 by jm
Abusing hash kernels for wildly unprincipled machine learning
what, is this the first time our spam filtering approach of hashing a giant feature space is hitting mainstream machine learning? that can't be right!
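
(For anyone who hasn't met it: the hashing trick in a nutshell, exactly the move SpamAssassin-style filters have been making for years. A minimal sketch; the hash choice and vector size are arbitrary.)

```python
# Hash-kernel sketch: map an unbounded token space into a fixed-width
# feature vector, accepting collisions in exchange for O(1) memory.
import hashlib
import numpy as np

N_FEATURES = 2 ** 18

def hash_features(tokens, n=N_FEATURES):
    vec = np.zeros(n)
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % n] += 1.0                # no dictionary, no vocabulary pass
    return vec

x = hash_features("buy cheap meds now".split())
print(x.sum(), x.nonzero()[0])           # 4 tokens, (up to) 4 buckets hit
```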
ai  machine-learning  python  data  hashing  features  feature-selection  anti-spam  spamassassin 
april 2013 by jm
Roko's basilisk - RationalWiki
Wacky transhumanists.
Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted. Eliezer Yudkowsky, founder of LessWrong, considers the basilisk would not work, but will not explain why because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.

Silly over-extrapolations of local memes are posted to LessWrong quite a lot; almost all are just downvoted and ignored. But this one, Yudkowsky reacted to hugely, then doubled-down on his reaction. Thanks to the Streisand effect, discussion of the basilisk and the details of the affair soon spread outside of LessWrong. The entire affair is a worked example of spectacular failure at community management and at controlling purportedly dangerous information.

Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[6]
transhumanism  funny  insane  stupid  singularity  ai  rokos-basilisk  via:maciej  lesswrong  rationalism  superintelligences  striesand-effect  absurd 
march 2013 by jm
HN on "What it takes to build great machine learning products"
TBH, I think this discussion thread is more useful than the article itself. It's still remarkably difficult to successfully apply ML techniques to real-world problems :(
machine-learning  hacker-news  discussion  commentary  ai  algorithms 
april 2012 by jm
Charlie's Diary: The myth of the starship
Charlie Stross' thoughts on the true viability of interstellar travel. This was about the most thought-provoking bit of 'Accelerando' for me alright
beans  ships  travel  interstellar  space  ai  downloading  from delicious
november 2009 by jm
iPhone Sudoku Grab: How does it all work?
lovely run-through of the computer-vision algorithms this iPhone app uses (via Waxy)
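
(From memory the pipeline starts roughly like this -- adaptive threshold, then treat the biggest quadrilateral contour as the board -- but read the post for the real details; this OpenCV sketch and the filename are my own assumptions.)

```python
# Board-finding sketch: threshold the photo, take the largest contour,
# approximate it down to (hopefully) the four corners of the grid.
import cv2

img = cv2.imread("sudoku.jpg", cv2.IMREAD_GRAYSCALE)
thresh = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
board = max(contours, key=cv2.contourArea)
peri = cv2.arcLength(board, True)
corners = cv2.approxPolyDP(board, 0.02 * peri, True)  # ~4 points if it worked
print(corners.reshape(-1, 2))
```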
via:waxy  ai  image  programming  algorithms  graphics  iphone  ocr  computervision  opencv  sudoku 
august 2009 by jm
Thinkism
great Singularity contemplation from Kevin Kelly: 'to be useful, artificial intelligences have to be embodied in the world, and that world will often set their pace of innovations. Thinkism is not enough. Without conducting experiments, building prototypes, having failures, and engaging in reality, an intelligence can have thoughts but not results. It cannot think its way to solving the world's problems. There won't be instant discoveries the minute, hour, day or year a smarter-than-human AI appears. The rate of discovery will hopefully be significantly accelerated. Even better, a super AI will ask questions no human would ask. But, to take one example, it will require many generations of experiments on living organisms, not even to mention humans, before such a difficult achievement as immortality is gained.'
ai  singularity  ray-kurzweil  kevin-kelly  science  progress  technology  future  philosophy  intelligence  knowledge  thinkism 
july 2009 by jm
