jm + computer-says-no   3

Automated unemployment insurance fraud detection system had a staggering 93% error rate in production
Expect to see a lot more cases of automated discrimination like this in the future. There is no way an auto-adjudication system would be allowed to operate with this staggering level of brokenness if it were dealing with the well-off:

State officials have said that between Oct. 1, 2013, when the MiDAS [automated unemployment insurance fraud detection] system came on line, and Aug. 7, 2015, when the state halted the auto-adjudication of fraud determinations and began to require some human review of MiDAS findings, the system had a 93% error rate and made false fraud findings affecting more than 20,000 unemployment insurance claims. Those falsely accused of fraud were subjected to quadruple penalties and aggressive collection techniques, including wage garnishment and seizure of income tax refunds. Some were forced into bankruptcy.

The agency is now reviewing about 28,000 additional fraud determinations that were made during the relevant period, but which involved some human review. An unknown number of those fraud findings were also false.
fraud  broken  fail  michigan  detroit  social-welfare  us-politics  computer-says-no  automation  discrimination  fraud-detection 
29 days ago by jm
Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faceception  discrimination  computer-says-no 
6 weeks ago by jm
Facebook scuppers Admiral Insurance plan to base premiums on your posts
Well, this is amazingly awful:
The Guardian claims to have further details of the kind of tell-tale signs that Admiral's algorithmic analysis would have looked out for in Facebook posts. Good traits include "writing in short concrete sentences, using lists, and arranging to meet friends at a set time and place, rather than just 'tonight'." On the other hand, "evidence that the Facebook user might be overconfident—such as the use of exclamation marks and the frequent use of 'always' or 'never' rather than 'maybe'—will count against them."
The future is shitty.
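
For what it's worth, a minimal sketch of the kind of text-feature scoring the quote describes might look like the Python below. The function name (post_score), the word-count threshold, the regexes and the integer weights are all my own hypothetical stand-ins for illustration, not Admiral's actual model:

    # Hypothetical sketch of the text-feature scoring described above --
    # not Admiral's model, just the signals named in the Guardian quote.
    import re

    def post_score(post: str) -> int:
        """Positive = the 'good' signals; negative = the 'overconfident' ones."""
        score = 0
        sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
        # Short, concrete sentences count in the user's favour.
        if sentences and sum(len(s.split()) for s in sentences) / len(sentences) < 12:
            score += 1
        # A set meeting time ("8pm") rather than just "tonight".
        if re.search(r"\b\d{1,2}(:\d{2})?\s*(am|pm)\b", post, re.I):
            score += 1
        # Exclamation marks and absolutist words count against.
        score -= post.count("!")
        score -= len(re.findall(r"\b(always|never)\b", post, re.I))
        return score

    print(post_score("Meeting Anna at the cafe at 8pm."))    # positive
    print(post_score("This NEVER works!!! I always lose!"))  # negative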
insurance  facebook  scoring  computer-says-no  algorithms  text-analysis  awful  future 
november 2016 by jm