jm + bias

'Mathwashing,' Facebook and the zeitgeist of data worship
Fred Benenson: Mathwashing can be thought of as using math terms (algorithm, model, etc.) to paper over a more subjective reality. For example, a lot of people believed Facebook was using an unbiased algorithm to determine its trending topics, even though Facebook had previously admitted that humans were involved in the process.
maths  math  mathwashing  data  big-data  algorithms  machine-learning  bias  facebook  fred-benenson 
5 weeks ago by jm
Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with Faception, for instance, a firm that promises to profile people's personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are "terrorists", while white-looking women with trendy haircuts are "brand promoters".
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faceception  discrimination  computer-says-no 
10 weeks ago by jm
Parable of the Polygons - a playable post on the shape of society
Our cute segregation sim is based on the work of Nobel Prize-winning game theorist Thomas Schelling. Specifically, his 1971 paper, Dynamic Models of Segregation. We built on top of this, and showed how a small demand for diversity can desegregate a neighborhood. In other words, we gave his model a happy ending.
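The underlying model is simple enough to sketch in a few lines. Here's my own minimal Python rendering of Schelling-style dynamics, not the site's actual code; the grid size, the one-third "sameness" threshold and the random-relocation rule are all assumptions:

```python
import random

# Minimal Schelling-style segregation sketch (illustrative, not the site's code).
# A grid holds two agent types plus empty cells; an agent is "unhappy" if fewer
# than THRESHOLD of its occupied neighbours share its type, and unhappy agents
# relocate to a random empty cell. Clusters emerge from this mild preference.
SIZE, THRESHOLD, EMPTY = 20, 1/3, None

grid = [[random.choice(['x', 'o', EMPTY]) for _ in range(SIZE)]
        for _ in range(SIZE)]

def neighbours(r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                yield grid[r + dr][c + dc]

def unhappy(r, c):
    kind = grid[r][c]
    occupied = [n for n in neighbours(r, c) if n is not EMPTY]
    same = sum(1 for n in occupied if n == kind)
    return occupied and same / len(occupied) < THRESHOLD

for step in range(50):  # run a few sweeps and watch clusters form
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not EMPTY and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is EMPTY]
    if not movers:
        break
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], EMPTY
        empties.append((r, c))
```

The playable post's "happy ending" is essentially one extra rule on top of this: agents also move when their neighbourhood is entirely their own type, and that small demand for diversity is enough to undo the segregation.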
games  society  visualization  diversity  racism  bias  thomas-schelling  segregation 
february 2017 by jm
How a Machine Learns Prejudice - Scientific American
Agreed, this is a big issue.
If artificial intelligence takes over our lives, it probably won’t involve humans battling an army of robots that relentlessly apply Spock-like logic as they physically enslave us. Instead, the machine-learning algorithms that already let AI programs recommend a movie you’d like or recognize your friend’s face in a photo will likely be the same ones that one day deny you a loan, lead the police to your neighborhood or tell your doctor you need to go on a diet. And since humans create these algorithms, they're just as prone to biases that could lead to bad decisions—and worse outcomes.
These biases create some immediate concerns about our increasing reliance on artificially intelligent technology, as any AI system designed by humans to be absolutely "neutral" could still reinforce humans’ prejudicial thinking instead of seeing through it.
prejudice  bias  machine-learning  ml  data  training  race  racism  google  facebook 
january 2017 by jm
Founder of Google X has no concept of how machine learning as a policing tool risks reinforcing implicit bias
This is shocking:
At the end of the panel on artificial intelligence, a young black woman asked [Sebastian Thrun, CEO of the education startup Udacity, who is best known for founding Google X] whether bias in machine learning “could perpetuate structural inequality at a velocity much greater than perhaps humans can.” She offered the example of criminal justice, where “you have a machine learning tool that can identify criminals, and criminals may disproportionately be black because of other issues that have nothing to do with the intrinsic nature of these people, so the machine learns that black people are criminals, and that’s not necessarily the outcome that I think we want.”
In his reply, Thrun made it sound like her concern was one about political correctness, not unconscious bias. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. Sometimes they’re not politically correct,” Thrun said. “When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this.”


"the truth"! Jesus. We are fucked
google  googlex  bias  racism  implicit-bias  machine-learning  ml  sebastian-thrun  udacity  inequality  policing  crime 
october 2016 by jm
Remarks at the SASE Panel On The Moral Economy of Tech
Excellent talk. I love this analogy for ML applied to real-world data which affects people:
Treating the world as software promotes fantasies of control. And the best kind of control is control without responsibility. Our unique position as authors of software used by millions gives us power, but we don't accept that this should make us accountable. We're programmers—who else is going to write the software that runs the world? To put it plainly, we are surprised that people seem to get mad at us for trying to help. Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.


Particularly apposite today given Y Combinator's revelation that they use an AI bot to help 'sift admission applications', and don't know what criteria it's using: https://twitter.com/aprjoy/status/783032128653107200
culture  ethics  privacy  technology  surveillance  ml  machine-learning  bias  algorithms  software  control 
october 2016 by jm
“Racist algorithms” and learned helplessness
Whenever I’ve had to talk about bias in algorithms, I’ve tried to be careful to emphasize that it’s not that we shouldn’t use algorithms in search, recommendation and decision making. It’s that we often just don’t know how they’re making their decisions to present answers, make recommendations or arrive at conclusions, and it’s this lack of transparency that’s worrisome. Remember, algorithms aren’t just code.

What’s also worrisome is the amplifier effect. Even if “all an algorithm is doing” is reflecting and transmitting biases inherent in society, it’s also amplifying and perpetuating them on a much larger scale than your friendly neighborhood racist. And that’s the bigger issue. [...] even if the algorithm isn’t creating bias, it’s creating a feedback loop that has powerful perception effects.
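The amplifier effect is easy to demonstrate. A toy sketch (my own, with made-up numbers) of the runaway loop in something like predictive policing: two areas with identical true rates, a slightly skewed historical record, and a dispatcher that just "follows the data":

```python
import random

# Hypothetical feedback-loop sketch (illustrative numbers, not from the post).
# Two areas have the SAME true incident rate, but area 0 starts with a few
# more recorded incidents. A naive dispatcher sends each day's patrol to
# whichever area has more records -- and since incidents are only recorded
# where police actually are, the small initial gap feeds back on itself.
TRUE_RATE = 0.3                 # identical underlying rate in both areas
recorded = [12, 10]             # slightly biased historical record

for day in range(1, 1001):
    target = 0 if recorded[0] >= recorded[1] else 1   # "the data says so"
    if random.random() < TRUE_RATE:                   # only seen if patrolled
        recorded[target] += 1
    if day % 250 == 0:
        print(f"day {day}: recorded incidents = {recorded}")
```

The gap never closes because area 1's incidents are simply never observed; the model's "truth" is an artifact of where it chose to look.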
feedback  bias  racism  algorithms  software  systems  society 
april 2016 by jm
"A reason to hang him": how mass surveillance, secret courts, confirmation bias and the FBI can ruin your life - Boing Boing
This is bananas. Confirmation bias running amok.
Brandon Mayfield was a US Army veteran and an attorney in Portland, OR. After the 2004 Madrid train bombing, his fingerprint was partially matched to one belonging to one of the suspected bombers, but the match was a poor one. But by this point, the FBI was already convinced they had their man, so they rationalized away the non-matching elements of the print, and set in motion a train of events that led to Mayfield being jailed without charge; his home and office burgled by the FBI; his attorney-client privilege violated; his life upended.
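The false-positive arithmetic here is worth spelling out. A back-of-the-envelope sketch (every number is an assumption for illustration; none come from the actual Mayfield case):

```python
# Base-rate sketch: even a very accurate matcher, run against a big database,
# is expected to produce innocent "hits", so a partial match alone is weak
# evidence. All figures below are invented for illustration.
false_positive_rate = 1e-6      # matcher wrongly flags 1 in a million prints
database_size = 50_000_000      # prints searched (e.g. a national database)

expected_false_hits = false_positive_rate * database_size
print(f"expected innocent matches: {expected_false_hits:.0f}")    # ~50

# P(actual source | a match), assuming the one true source is in the
# database and is always matched:
p_source_given_match = 1 / (1 + expected_false_hits)
print(f"P(actual source | a match): {p_source_given_match:.3f}")  # ~0.020
```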
confirmation-bias  bias  law  brandon-mayfield  terrorism  fingerprints  false-positives  fbi  scary 
february 2014 by jm
Reinforcing gender stereotypes: how our schools narrow children's choices | Athene Donald | Science | theguardian.com
Our children should be free to choose to study what really excites them, not subtly steered away from certain subjects because teachers believe in and propagate the stereotypes. Last year the IOP published a report, "It's Different for Girls", which demonstrated that essentially half of state coeducational schools did not see a single girl progress to A-level physics. By contrast, the likelihood of girls progressing from single-sex schools was two and a half times greater.


Amen to this.
sexism  schools  teaching  uk  physics  girls  children  bias  stereotypes 
december 2013 by jm
My email to Irish Times Editor, sent 25th June
Daragh O'Brien noting 3 stories on 3 consecutive days voicing dangerously skewed misinformation about data protection and privacy law in Ireland:
There is a worrying pattern in these stories. The first two decry the Data Protection legislation (current and future) as being dangerous to children and damaging to the genealogy trade. The third sets up an industry “self-regulation” straw man and heralds it as progress (when it is decidedly not, serving only to further confuse consumers about their rights).

If I was a cynical person I would find it hard not to draw the conclusion that the Irish Times, the “paper of record”, has been stooged by organisations who are resistant to the defence of and validation of fundamental rights to privacy as enshrined in the Data Protection Acts and EU Treaties, and in the embryonic Data Protection Regulation. That these stories emerge hot on the heels of the pendulum swing towards privacy concerns that the NSA/Prism revelations have triggered is, I must assume, a coincidence. It cannot be the case that the Irish Times blindly publishes press releases without conducting cursory fact checking on the stories contained therein?

Three stories over three days is insufficient data to plot a definitive trend, but the emphasis is disconcerting. Is it the Irish Times’ editorial position that Data Protection legislation and the protection of fundamental rights is a bad thing and that industry self-regulation that operates in ignorance of legislation is the appropriate model for the future? It surely cannot be that press releases are regurgitated as balanced fact and news by the Irish Times without fact checking and verification? If I was to predict a “Data Protection killed my Puppy” type headline for tomorrow’s edition or another later this week would I be proved correct?
daragh-obrien  irish-times  iab  bias  advertising  newspapers  press-releases  journalism  data-protection  privacy  ireland 
june 2013 by jm
The Why
How the Irish media are partly to blame for the catastrophic property bubble, from a paper entitled _The Role Of The Media In Propping Up Ireland’s Housing Bubble_, by Dr Julien Mercille, in the _Social Europe Journal_:
“The overall argument is that the Irish media are part and parcel of the political and corporate establishment, and as such the news they convey tend to reflect those sectors’ interests and views. In particular, the Celtic Tiger years involved the financialisation of the economy and a large property bubble, all of it wrapped in an implicit neoliberal ideology. The media, embedded within this particular political economy and itself a constitutive element of it, thus mostly presented stories sustaining it. In particular, news organisations acquired direct stakes in an inflated real estate market by purchasing property websites and receiving vital advertising revenue from the real estate sector. Moreover, a number of their board members were current or former high officials in the finance industry and government, including banks deeply involved in the bubble’s expansion."
economics  irish-times  ireland  newspapers  media  elite  insiders  bubble  property-bubble  property  celtic-tiger  papers  news  bias 
april 2013 by jm