jm + computer-says-no + algorithms   3

How your selfie could affect your life insurance
Noping so hard. Imagine the levels of algorithmic discrimination inherent in this shit.
"Your face is something you wear all your life, and it tells a very unique story about you," says Karl Ricanek Jr., co-founder and chief data scientist at Lapetus Solutions Inc. in Wilmington, N.C.

Several life insurance companies are testing Lapetus technology that uses facial analytics and other data to estimate life expectancy, he says. (Lapetus would not disclose the names of companies testing its product.) Insurers use life expectancy estimates to make policy approval and pricing decisions. Lapetus says its product, Chronos, would enable a customer to buy life insurance online in as little as 10 minutes without taking a life insurance medical exam.
discrimination  computer-says-no  algorithms  selfies  face  lapetus  photos  life-insurance  life-expectancy 
may 2017 by jm
Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faceception  discrimination  computer-says-no 
march 2017 by jm
Facebook scuppers Admiral Insurance plan to base premiums on your posts
Well, this is amazingly awful:
The Guardian claims to have further details of the kind of tell-tale signs that Admiral's algorithmic analysis would have looked out for in Facebook posts. Good traits include "writing in short concrete sentences, using lists, and arranging to meet friends at a set time and place, rather than just 'tonight'." On the other hand, "evidence that the Facebook user might be overconfident—such as the use of exclamation marks and the frequent use of 'always' or 'never' rather than 'maybe'—will count against them."
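The scoring the Guardian describes is essentially keyword-and-punctuation counting. As a purely hypothetical sketch (Admiral's actual model and weights were never published; the word lists and weighting below are invented for illustration), such a heuristic might look like:

```python
import re

# Hypothetical word lists -- the Guardian excerpt only mentions
# "always"/"never" as overconfident and "maybe" as hedged.
OVERCONFIDENT_WORDS = {"always", "never"}
HEDGED_WORDS = {"maybe", "perhaps", "possibly"}

def overconfidence_score(post: str) -> int:
    """Crude score for one Facebook post: exclamation marks and
    absolute words count against the poster, hedging words count
    in their favour. Higher = more 'overconfident'."""
    words = re.findall(r"[a-z']+", post.lower())
    score = post.count("!")
    score += sum(1 for w in words if w in OVERCONFIDENT_WORDS)
    score -= sum(1 for w in words if w in HEDGED_WORDS)
    return score

print(overconfidence_score("I always drive safely!!"))   # 3
print(overconfidence_score("Maybe we'll meet at 8pm"))   # -1
```

The point of the sketch is how little signal is involved: a few surface features of writing style, dressed up as a risk model.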

The future is shitty.
insurance  facebook  scoring  computer-says-no  algorithms  text-analysis  awful  future 
november 2016 by jm