jm + computer-says-no   2

Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hands on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white-looking women with trendy haircuts are “brand promoters”.
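
A minimal sketch of the “encoding bias” point above: if conviction labels already reflect judges’ prejudice against unusual-looking faces, any model fit to those labels learns the prejudice, not anything about crime. Everything here — the data, the single “atypicality” feature, the weights — is invented purely for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical single feature: how "unusual" a face looks (0 = typical).
atypicality = rng.normal(0.0, 1.0, n)

# Biased labels: in this toy world, judges convict unusual-looking
# people more often, so conviction correlates with the face,
# not with behaviour.
p_convicted = 1.0 / (1.0 + np.exp(-1.5 * atypicality))
convicted = rng.random(n) < p_convicted

# Fit "criminality from faces" -- the model dutifully learns the bias.
model = LogisticRegression().fit(atypicality.reshape(-1, 1), convicted)
print(model.coef_)  # strongly positive: unusual face => "criminal"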
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faceception  discrimination  computer-says-no 
11 days ago by jm
Facebook scuppers Admiral Insurance plan to base premiums on your posts
Well, this is amazingly awful:
The Guardian claims to have further details of the kind of tell-tale signs that Admiral's algorithmic analysis would have looked out for in Facebook posts. Good traits include "writing in short concrete sentences, using lists, and arranging to meet friends at a set time and place, rather than just 'tonight'." On the other hand, "evidence that the Facebook user might be overconfident—such as the use of exclamation marks and the frequent use of 'always' or 'never' rather than 'maybe'—will count against them."

The future is shitty.
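
For what it's worth, a toy sketch of the kind of keyword-counting scoring this excerpt describes — the features and weights are my guesses from the quote above; Admiral never published the actual model:

import re

def risk_score(posts: list[str]) -> float:
    """Lower is 'safer', per the traits described in the excerpt."""
    score = 0.0
    for post in posts:
        words = post.lower().split()
        # "Good" traits: short concrete sentences, concrete plans
        # like "at 8pm" rather than just "tonight".
        if len(words) <= 10:
            score -= 1.0
        if re.search(r"\bat \d", post.lower()):
            score -= 1.0
        # "Bad" traits: overconfidence markers -- exclamation marks,
        # absolute words like "always"/"never".
        score += post.count("!") * 0.5
        score += sum(w in ("always", "never") for w in words)
    return score

print(risk_score(["Meet you at 8pm by the station."]))  # negative (low risk)
print(risk_score(["I ALWAYS drive safely!!!"]))         # positive (penalised)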
insurance  facebook  scoring  computer-says-no  algorithms  text-analysis  awful  future 
november 2016 by jm
