jm + discrimination (7)

Google’s Response to Employee’s Anti-Diversity Manifesto Ignores Workplace Discrimination Law – Medium
A workplace-discrimination lawyer writes:
Stray remarks are not enough. But a widespread workplace discussion of whether women engineers are biologically capable of performing at the same level as their male counterparts could suffice to create a hostile work environment. As another example, envision the racial hostility of a workplace where employees, as Google put it, “feel safe” to espouse their “alternative view” that their African-American colleagues are not well-represented in management positions because they are not genetically predisposed for leadership roles. In short, a workplace where people “feel safe sharing opinions” based on gender (or racial, ethnic or religious) stereotypes may become so offensive that it legally amounts to actionable discrimination.
employment  sexism  workplace  discrimination  racism  misogyny  women  beliefs 
10 weeks ago by jm
How your selfie could affect your life insurance
Noping so hard. Imagine the levels of algorithmic discrimination inherent in this shit.
"Your face is something you wear all your life, and it tells a very unique story about you," says Karl Ricanek Jr., co-founder and chief data scientist at Lapetus Solutions Inc. in Wilmington, N.C.

Several life insurance companies are testing Lapetus technology that uses facial analytics and other data to estimate life expectancy, he says. (Lapetus would not disclose the names of companies testing its product.) Insurers use life expectancy estimates to make policy approval and pricing decisions. Lapetus says its product, Chronos, would enable a customer to buy life insurance online in as little as 10 minutes without taking a life insurance medical exam.
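
To make the risk concrete: below is a minimal, hypothetical sketch (Python) of how a selfie-based underwriting pipeline could turn face-derived guesses into prices. Every feature name, weight and pricing rule here is invented for illustration; Lapetus has not published how Chronos works. The structural problem stands regardless: any face-derived signal that correlates with race, sex, disability or class becomes a pricing proxy for it.

# Hypothetical sketch only: feature names, weights and the pricing rule
# are invented; this is not how Chronos is documented to work.

def estimate_life_expectancy(features: dict) -> float:
    # Each face-derived "feature" is a guess, and any guess that
    # correlates with a protected class silently prices it in.
    base = 82.0
    weights = {
        "perceived_age_vs_actual": -0.8,   # years "older-looking" than stated
        "perceived_bmi_excess": -0.5,      # inferred from a photo, not measured
        "smoker_likelihood": -6.0,         # guessed from skin texture
    }
    return base + sum(weights[k] * v for k, v in features.items())

def annual_premium(life_expectancy: float, current_age: int) -> float:
    # Toy pricing rule: the shorter the estimated remaining
    # life, the higher the premium.
    remaining = max(life_expectancy - current_age, 1.0)
    return 50_000 / remaining

applicant = {"perceived_age_vs_actual": 3.0,
             "perceived_bmi_excess": 1.2,
             "smoker_likelihood": 0.4}
le = estimate_life_expectancy(applicant)
print(f"estimated life expectancy: {le:.1f} years; "
      f"annual premium: ${annual_premium(le, 40):,.0f}")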
discrimination  computer-says-no  algorithms  selfies  face  lapetus  photos  life-insurance  life-expectancy 
may 2017 by jm
Automated unemployment insurance fraud detection system had a staggering 93% error rate in production
Expect to see a lot more cases of automated discrimination like this in the future. There is no way an auto-adjudication system would be allowed to have this staggering level of brokenness if it were dealing with the well-off:

State officials have said that between Oct. 1, 2013, when the MiDAS [automated unemployment insurance fraud detection] system came on line, and Aug. 7, 2015, when the state halted the auto-adjudication of fraud determinations and began to require some human review of MiDAS findings, the system had a 93% error rate and made false fraud findings affecting more than 20,000 unemployment insurance claims. Those falsely accused of fraud were subjected to quadruple penalties and aggressive collection techniques, including wage garnishment and seizure of income tax refunds. Some were forced into bankruptcy.

The agency is now reviewing about 28,000 additional fraud determinations that were made during the relevant period, but which involved some human review. An unknown number of those fraud findings were also false.
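
Worth doing the arithmetic on those figures. A quick sketch (Python); only the 93% error rate, the "more than 20,000" false findings and the quadruple penalty come from the article, and the $5,000 assessment below is a hypothetical amount:

# Only the 93% error rate, the >20,000 false findings and the 4x
# penalty are from the article; the rest is arithmetic.
error_rate = 0.93
false_findings = 20_000                       # lower bound per the article
total_findings = false_findings / error_rate  # implied auto-adjudications
print(f"implied auto-adjudicated fraud findings: ~{total_findings:,.0f}")

# Quadruple penalties: a hypothetical $5,000 false fraud assessment
# becomes a $20,000 debt before garnishment and tax-refund seizure start.
assessed = 5_000
print(f"owed on a ${assessed:,} false finding: ${assessed * 4:,}")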
fraud  broken  fail  michigan  detroit  social-welfare  us-politics  computer-says-no  automation  discrimination  fraud-detection 
march 2017 by jm
Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if [they’ve] been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand[s] on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white-looking women with trendy haircuts are “brand promoters”.
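
Crawford’s encoding-bias point is easy to demonstrate. A minimal sketch with synthetic data and the bias injected deliberately: even the simplest possible "learner" (bucketed label averages) faithfully reproduces the prejudice in its training labels, then reports it back as objective-looking output.

# Synthetic data; the bias below is injected deliberately to show
# how a model trained on biased judgements inherits them.
import random
random.seed(0)

def biased_label(face_unusualness: float) -> int:
    # Simulated historical judgements: "dissimilar" faces were more
    # likely to be called untrustworthy, independent of actual guilt.
    return int(random.random() < 0.2 + 0.6 * face_unusualness)

# "Train" by memorising the observed label rate per feature bucket.
buckets: dict = {}
for _ in range(10_000):
    x = round(random.random(), 1)
    buckets.setdefault(x, []).append(biased_label(x))

for x in sorted(buckets):
    rate = sum(buckets[x]) / len(buckets[x])
    print(f"face unusualness {x:.1f} -> predicted 'criminal' rate {rate:.2f}")

Swap any real model in for the bucket averages and the output is the same bias, now laundered through “AI”.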
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faceception  discrimination  computer-says-no 
march 2017 by jm
Banks biased against black fraud victims
We raised the issue of discrimination in 2011 with one of the banks and with the Commission for Racial Equality, but as no-one was keeping records, nothing could be proved, until today. How can this discrimination happen? Well, UK rules give banks a lot of discretion to decide whether to refund a victim, and the first responders often don’t know the full story. If your HSBC card was compromised by a skimmer on a Tesco ATM, there’s no guarantee that Tesco will have told anyone (unlike in America, where the law forces Tesco to tell you). And the fraud pattern might be something entirely new. So bank staff end up making judgement calls like “Is this customer telling the truth?” and “How much is their business worth to us?” This in turn sets the stage for biases and prejudices to kick in, however subconsciously. Add management pressure to cut costs, sometimes even bonuses for cutting them, and here we are.
discrimination  racism  fraud  uk  banking  skimming  security  fca 
january 2017 by jm
When It Comes to Age Bias, Tech Companies Don’t Even Bother to Lie
HubSpot’s CEO and co-founder, Brian Halligan, explained to the New York Times that this age imbalance was not something he wanted to remedy, but in fact something he had actively cultivated. HubSpot was “trying to build a culture specifically to attract and retain Gen Y’ers,” because, “in the tech world, gray hair and experience are really overrated,” Halligan said. 

I gasped when I read that. Could anyone really believe this? Even if you did believe this, what CEO would be foolish enough to say it out loud? It was akin to claiming that you prefer to hire Christians, or heterosexuals, or white people. I assumed an uproar would follow. As it turned out, nobody at HubSpot saw this as a problem. Halligan didn’t apologize for his comments or try to walk them back. The lesson I learned is that when it comes to race and gender bias, the people running Silicon Valley at least pay lip service to wanting to do better — but with age discrimination they don’t even bother to lie. 
hiring  startups  tech  ageism  age  hubspot  gen-y  discrimination 
april 2016 by jm
