jm + racism   28

How my research on DNA ancestry tests became "fake news"
I was not surprised to see our research twisted by fake news and satire websites. Conspiracy theories are meant to be just as entertaining as they are convincing. They also provide a way out of confronting reality and reckoning with facts that don’t confirm preexisting worldviews. For white nationalists and racists, if test results showed traces of African American or Jewish ancestry, either the tests did not work, or the results were planted by some ideologically motivated scientists, or the tests were part of a global war against whites. With conspiracy theories, debunking is rarely useful because the individual is often searching for an interpretation that confirms their prior beliefs.

As such, DNA conspiracy theories allow white supremacists to plan new escape routes from the traps they laid for themselves long ago. With DNA testing, the one-drop rule—a belief made law in the 1900s that one drop of African blood makes one Black—becomes transmuted genealogically into the one-percent rule, according to which, to remain racially white, an individual’s results must show no sign of African or Jewish origin. Through the genealogical lens, American white nationalists consider “one hundred percent European” a good result, which in turn substantiates their “birthright” to the United States as a marker of heredity and conquest.
racism  science  fake-news  conspiracy  genealogy  dna  dna-testing 
4 days ago by jm
Cory Doctorow: Zuck’s Empire of Oily Rags
the sophisticated targeting systems available through Facebook, Google, Twitter, and other Big Tech ad platforms made it easy to find the racist, xenophobic, fearful, angry people who wanted to believe that foreigners were destroying their country while being bankrolled by George Soros.

Remember that elections are generally knife-edge affairs, even for politicians who’ve held their seats for decades with slim margins: 60% of the vote is an excellent win. Remember, too, that the winner in most races is “none of the above,” with huge numbers of voters sitting out the election. If even a small number of these non-voters can be motivated to show up at the polls, safe seats can be made contestable. In a tight race, having a cheap way to reach all the latent Klansmen in a district and quietly inform them that Donald J. Trump is their man is a game-changer.

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. Cambridge Analytica uses Facebook to find racist jerks and tell them to vote for Trump and then they claim that they’ve discovered a mystical way to get otherwise sensible people to vote for maniacs.
facebook  politics  surveillance  cory-doctorow  google  twitter  advertising  elections  cambridge-analytica  racism  nazis 
14 days ago by jm
Facial recognition software is not ready for use by law enforcement | TechCrunch
This is a pretty amazing op-ed from the CEO of a facial recognition software development company:

Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie. And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether. There’s really no “nice” way to acknowledge these things.

I’ve been pretty clear about the potential dangers associated with current racial biases in face recognition, and open in my opposition to the use of the technology in law enforcement. As the black chief executive of a software company developing facial recognition services, I have a personal connection to the technology, both culturally and socially.

Having the privilege of a comprehensive understanding of how the software works gives me a unique perspective that has shaped my positions about its uses. As a result, I (and my company) have come to believe that the use of commercial facial recognition in law enforcement or in government surveillance of any kind is wrong — and that it opens the door for gross misconduct by the morally corrupt.
techcrunch  facial-recognition  computer-vision  machine-learning  racism  algorithms  america 
21 days ago by jm
Paradox of tolerance
The paradox of tolerance was described by Karl Popper in 1945. The paradox states that if a society is tolerant without limit, their ability to be tolerant will eventually be seized or destroyed by the intolerant. Popper came to the seemingly paradoxical conclusion that in order to maintain a tolerant society, the society must be intolerant of intolerance.
psychology  diversity  paradoxes  karl-popper  tolerance  intolerance  racism 
4 weeks ago by jm
What Gamergate should have taught us about the 'alt-right'
Spot on, from a year ago:

Prominent critics of the Trump administration need to learn from Gamergate. They need to be prepared for abuse, for falsified concerns, invented grassroots campaigns designed specifically to break, belittle, or disgrace. Words and concepts will be twisted, repackaged and shared across forums, stripping them of meaning. Gamergate painted critics as censors, the far-right movement claims critics are the real racists.

Perhaps the true lesson of Gamergate was that the media is culturally unequipped to deal with the forces actively driving these online movements. The situation was horrifying enough two years ago; it is many times more dangerous now.
politics  fascism  gamergate  history  alt-right  milo  fake-news  propaganda  nazis  racism  misogyny 
december 2017 by jm
IBM urged to avoid working on 'extreme vetting' of U.S. immigrants
ICE wants to use machine learning technology and social media monitoring to determine whether an individual is a “positively contributing member of society,” according to documents published on federal contracting websites. More than 50 civil society groups and more than 50 technical experts sent separate letters on Thursday to the Department of Homeland Security saying the vetting program as described was “tailor-made for discrimination” and contending artificial intelligence was unable to provide the information ICE desired.
civil-rights  politics  usa  trump  ice  ibm  civil-liberties  immigration  discrimination  racism  social-media 
november 2017 by jm
The 10 Top Recommendations for the AI Field in 2017 from the AI Now Institute
I am 100% behind this. There's so much potential for hidden bias and unethical discrimination in careless AI/ML deployment.
While AI holds significant promise, we’re seeing significant challenges in the rapid push to integrate these systems into high stakes domains. In criminal justice, a team at Propublica, and multiple academics since, have investigated how an algorithm used by courts and law enforcement to predict recidivism in criminal defendants may be introducing significant bias against African Americans. In a healthcare setting, a study at the University of Pittsburgh Medical Center observed that an AI system used to triage pneumonia patients was missing a major risk factor for severe complications. In the education field, teachers in Texas successfully sued their school district for evaluating them based on a ‘black box’ algorithm, which was exposed to be deeply flawed.

This handful of examples is just the start — there’s much more we do not yet know. Part of the challenge is that the industry currently lacks standardized methods for testing and auditing AI systems to ensure they are safe and not amplifying bias. Yet early-stage AI systems are being introduced simultaneously across multiple areas, including healthcare, finance, law, education, and the workplace. These systems are increasingly being used to predict everything from our taste in music, to our likelihood of experiencing mental illness, to our fitness for a job or a loan.
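The auditing gap described above is concrete: one of the checks the ProPublica team ran on the recidivism tool was comparing error rates across racial groups. A minimal sketch of that kind of audit, using hypothetical records rather than the actual COMPAS data:

```python
def error_rates(records):
    """False-positive and false-negative rates per group.
    Each record: (group, predicted_high_risk, actually_reoffended)."""
    stats = {}
    for group, predicted, actual in records:
        s = stats.setdefault(group, {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if actual:
            s["pos"] += 1          # actually reoffended
            if not predicted:
                s["fn"] += 1       # missed by the tool
        else:
            s["neg"] += 1          # did not reoffend
            if predicted:
                s["fp"] += 1       # wrongly flagged high-risk
    return {g: {"fpr": s["fp"] / s["neg"] if s["neg"] else 0.0,
                "fnr": s["fn"] / s["pos"] if s["pos"] else 0.0}
            for g, s in stats.items()}
```

If `fpr` differs sharply between groups, as ProPublica found it did, the tool raises more false alarms about one group even at similar overall accuracy.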
ai  algorithms  machine-learning  ai-now  ethics  bias  racism  discrimination 
november 2017 by jm
The Immortal Myths About Online Abuse – Humane Tech – Medium
After building online communities for two decades, we’ve learned how to fight abuse. It’s a solvable problem. We just have to stop repeating the same myths as excuses not to fix things.


Here are the 8 myths Anil Dash picks out:

1. False: You can’t fix abusive behavior online.

2. False: Fighting abuse hurts free speech!

3. False: Software can detect abuse using simple rules.

4. False: Most people say “abuse” when they just mean criticism.

5. False: We just need everybody to use their “real” name.

6. False: Just charge a dollar to comment and that’ll fix things.

7. False: You can call the cops! If it’s not illegal, it’s not harmful.

8. False: Abuse can be fixed without dedicated resources.
abuse  comments  community  harassment  racism  reddit  anil-dash  free-speech 
september 2017 by jm
"You Can't Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech"

In 2015, Reddit closed several subreddits—foremost among them r/fatpeoplehate and r/CoonTown—due to violations of Reddit’s anti-harassment policy. However, the effectiveness of banning as a moderation approach remains unclear: banning might diminish hateful behavior, or it may relocate such behavior to different parts of the site.

We study the ban of r/fatpeoplehate and r/CoonTown in terms of its effect on both participating users and affected subreddits. Working from over 100M Reddit posts and comments, we generate hate speech lexicons to examine variations in hate speech usage via causal inference methods. We find that the ban worked for Reddit. More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage—by at least 80%. Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown “migrants,” those subreddits saw no significant changes in hate speech usage. In other words, other subreddits did not inherit the problem. We conclude by reflecting on the apparent success of the ban, discussing implications for online moderation, Reddit and internet communities more broadly.


(Via Anil Dash)
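The paper's core measurement, lexicon-based hate-speech rates per user before and after the ban, can be sketched roughly like this (a toy illustration; the study's actual lexicons and causal-inference machinery are far more involved):

```python
def lexicon_rate(posts, lexicon):
    """Fraction of tokens across `posts` that appear in a hate-speech
    lexicon: a crude proxy, as in lexicon-based studies."""
    tokens = [t for post in posts for t in post.lower().split()]
    if not tokens:
        return 0.0
    return sum(t in lexicon for t in tokens) / len(tokens)

def per_user_change(pre, post, lexicon):
    """Relative change in each user's lexicon rate after an intervention.
    `pre` and `post` map user -> list of their posts in that period."""
    changes = {}
    for user in pre:
        before = lexicon_rate(pre[user], lexicon)
        after = lexicon_rate(post.get(user, []), lexicon)
        changes[user] = (after - before) / before if before else 0.0
    return changes
```

A user who vanishes from `post` shows up as a -100% change, matching the paper's observation that many accounts simply discontinued using the site.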
abuse  reddit  research  hate-speech  community  moderation  racism  internet 
september 2017 by jm
After Charlottesville, I Asked My Dad About Selma
Dad told me that he didn’t think I was going to have to go through what he went through, but now he can see that he was wrong. “This fight is a never-ending fight,” he said. “There’s no end to it. I think after the ‘60s, the whole black revolution, Martin Luther King, H. Rap Brown, Stokely Carmichael and all the rest of the people, after that happened, people went to sleep,” he said. “They thought, ‘this is over.’”
selma  charlottesville  racism  nazis  america  race  history  civil-rights  1960s 
august 2017 by jm
Google’s Response to Employee’s Anti-Diversity Manifesto Ignores Workplace Discrimination Law – Medium
A workplace-discrimination lawyer writes:
Stray remarks are not enough. But a widespread workplace discussion of whether women engineers are biologically capable of performing at the same level as their male counterparts could suffice to create a hostile work environment. As another example, envision the racial hostility of a workplace where employees, as Google put it, “feel safe” to espouse their “alternative view” that their African-American colleagues are not well-represented in management positions because they are not genetically predisposed for leadership roles. In short, a workplace where people “feel safe sharing opinions” based on gender (or racial, ethnic or religious) stereotypes may become so offensive that it legally amounts to actionable discrimination.
employment  sexism  workplace  discrimination  racism  misogyny  women  beliefs 
august 2017 by jm
Everybody lies: how Google search reveals our darkest secrets | Technology | The Guardian
What can we learn about ourselves from the things we ask online? US data scientist Seth Stephens‑Davidowitz analysed anonymous Google search results, uncovering disturbing truths about [America's] desires, beliefs and prejudices


Fascinating. I find it equally interesting how flawed the existing methodologies for polling and surveying are, compared to Google's data, according to this.

science  big-data  google  lying  surveys  polling  secrets  data-science  america  racism  searching 
july 2017 by jm
Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faceception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faceception  discrimination  computer-says-no 
march 2017 by jm
Parable of the Polygons - a playable post on the shape of society
Our cute segregation sim is based off the work of Nobel Prize-winning game theorist, Thomas Schelling. Specifically, his 1971 paper, Dynamic Models of Segregation. We built on top of this, and showed how a small demand for diversity can desegregate a neighborhood. In other words, we gave his model a happy ending.
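Schelling's model itself is simple enough to sketch in a few lines: agents on a grid move whenever too few of their neighbours share their type. This is a bare-bones version (the playable post adds the diversity-seeking rule on top):

```python
import random

def schelling_step(grid, threshold, empty=0):
    """One round of Schelling's segregation model on a wrapping square
    grid: each unhappy agent (fewer than `threshold` of its occupied
    neighbours share its type) moves to a random empty cell.
    Returns the number of agents that moved."""
    n = len(grid)
    moved = 0
    for y in range(n):
        for x in range(n):
            agent = grid[y][x]
            if agent == empty:
                continue
            same = other = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == dx == 0:
                        continue
                    ny, nx = (y + dy) % n, (x + dx) % n
                    if grid[ny][nx] == empty:
                        continue
                    if grid[ny][nx] == agent:
                        same += 1
                    else:
                        other += 1
            total = same + other
            if total and same / total < threshold:
                empties = [(j, i) for j in range(n) for i in range(n)
                           if grid[j][i] == empty]
                if empties:
                    j, i = random.choice(empties)
                    grid[j][i], grid[y][x] = agent, empty
                    moved += 1
    return moved
```

Even with a mild threshold like 0.3, meaning agents are content as a local minority, repeated rounds drive the grid toward heavily segregated clusters, which is Schelling's punchline.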
games  society  visualization  diversity  racism  bias  thomas-schelling  segregation 
february 2017 by jm
Banks biased against black fraud victims
We raised the issue of discrimination in 2011 with one of the banks and with the Commission for Racial Equality, but as no-one was keeping records, nothing could be proved, until today. How can this discrimination happen? Well, UK rules give banks a lot of discretion to decide whether to refund a victim, and the first responders often don’t know the full story. If your HSBC card was compromised by a skimmer on a Tesco ATM, there’s no guarantee that Tesco will have told anyone (unlike in America, where the law forces Tesco to tell you). And the fraud pattern might be something entirely new. So bank staff end up making judgement calls like “Is this customer telling the truth?” and “How much is their business worth to us?” This in turn sets the stage for biases and prejudices to kick in, however subconsciously. Add management pressure to cut costs, sometimes even bonuses for cutting them, and here we are.
discrimination  racism  fraud  uk  banking  skimming  security  fca 
january 2017 by jm
How a Machine Learns Prejudice - Scientific American
Agreed, this is a big issue.
If artificial intelligence takes over our lives, it probably won’t involve humans battling an army of robots that relentlessly apply Spock-like logic as they physically enslave us. Instead, the machine-learning algorithms that already let AI programs recommend a movie you’d like or recognize your friend’s face in a photo will likely be the same ones that one day deny you a loan, lead the police to your neighborhood or tell your doctor you need to go on a diet. And since humans create these algorithms, they're just as prone to biases that could lead to bad decisions—and worse outcomes.
These biases create some immediate concerns about our increasing reliance on artificially intelligent technology, as any AI system designed by humans to be absolutely "neutral" could still reinforce humans’ prejudicial thinking instead of seeing through it.
prejudice  bias  machine-learning  ml  data  training  race  racism  google  facebook 
january 2017 by jm
Founder of Google X has no concept of how machine learning as policing tool risks reinforcing implicit bias
This is shocking:
At the end of the panel on artificial intelligence, a young black woman asked [Sebastian Thrun, CEO of the education startup Udacity, who is best known for founding Google X] whether bias in machine learning “could perpetuate structural inequality at a velocity much greater than perhaps humans can.” She offered the example of criminal justice, where “you have a machine learning tool that can identify criminals, and criminals may disproportionately be black because of other issues that have nothing to do with the intrinsic nature of these people, so the machine learns that black people are criminals, and that’s not necessarily the outcome that I think we want.”
In his reply, Thrun made it sound like her concern was one about political correctness, not unconscious bias. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. Sometimes they’re not politically correct,” Thrun said. “When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this.”


"the truth"! Jesus. We are fucked
google  googlex  bias  racism  implicit-bias  machine-learning  ml  sebastian-thrun  udacity  inequality  policing  crime 
october 2016 by jm
How Internet Trolls Won the 2016 Presidential Election
Because this was a novel iteration of online anti-Semitic culture, to the normie media it was worthy of deeply concerned coverage that likely gave a bunch of anti-Semites, trolls, and anti-Semitic trolls exactly the attention and visibility they craved. All without any of them having to prove they were actually involved, meaningfully, in anti-Semitic politics. That’s just a lot of power to give to a group of anonymous online idiots without at least knowing how many of them are 15-year-old dweebs rather than, you know, actual Nazis. [...]

In the long run, as journalistic coverage of the internet is increasingly done by people with at least a baseline understanding of web culture, that coverage will improve. For now, though, things are grim: It’s hard not to feel like journalists and politicos are effectively being led around on a leash by a group of anonymous online idiots, many of whom don’t really believe in anything.
internet  journalism  politics  4chan  8chan  channers  trolls  nazis  racism  pepe-the-frog  trump 
september 2016 by jm
NPR Website To Get Rid Of Comments
Sadly, this makes sense and I'd have to agree.
Mike Durio, of Phoenix, seemed to sum it up in an email to my office back in April. "Have you considered doing away with the comments sections, or tighter moderation?" he wrote. "The comments have devolved into the Punch-and-Judy-Fest of moronic, un-illuminating observations and petty insults I've seen on pretty much every other Internet site that allows comments." He added, "This is not in keeping with NPR's take-a-step-back, take-a-deep-breath reporting," and noted, "Now, thread hijacking and personal insults are becoming the stock in trade. Frequent posters use the forums to duke it out with one another."

A user named Mary, from Raleigh, N.C., wrote to implore: "Remove the comments section from your articles. The rude, hateful, racist, judgmental comments far outweigh those who may want to engage in some intelligent sideline conversation about the actual subject of the article. I am appalled at the amount of 'free hate' that is found on a website that represents honest and unbiased reporting such as NPR. What are you really gaining from all of these rabid comments other than proof that a sad slice of humanity preys on the weak while spreading their hate?"
abuse  comments  npr  racism  web  discussion 
august 2016 by jm
LinkedIn called me a white supremacist
Wow. Massive, massive algorithm fail.
On the morning of May 12, LinkedIn, the networking site devoted to making professionals “more productive and successful,” emailed scores of my contacts and told them I’m a professional racist. It was one of those updates that LinkedIn regularly sends its users, algorithmically assembled missives about their connections’ appearances in the media. This one had the innocent-sounding subject, “News About William Johnson,” but once my connections clicked in, they saw a small photo of my grinning face, right above the headline “Trump put white nationalist on list of delegates.” [.....] It turns out that when LinkedIn sends these update emails, people actually read them. So I was getting upset. Not only am I not a Nazi, I’m a Jewish socialist with family members who were imprisoned in concentration camps during World War II. Why was LinkedIn trolling me?
ethics  fail  algorithm  linkedin  big-data  racism  libel 
may 2016 by jm
Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. - ProPublica
holy crap, this is dystopian:
The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wisconsin. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal.
But Judge James Babler had seen Zilly’s scores. Northpointe’s software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. “When I look at the risk assessment,” Babler said in court, “it is about as bad as it could be.”
Then Babler overturned the plea deal that had been agreed on by the prosecution and defense and imposed two years in state prison and three years of supervision.
dystopia  law  policing  risk  risk-assessment  northpointe  racism  fortune-telling  crime 
may 2016 by jm
“Racist algorithms” and learned helplessness
Whenever I’ve had to talk about bias in algorithms, I’ve tried be  careful to emphasize that it’s not that we shouldn’t use algorithms in search, recommendation and decision making. It’s that we often just don’t know how they’re making their decisions to present answers, make recommendations or arrive at conclusions, and it’s this lack of transparency that’s worrisome. Remember, algorithms aren’t just code.

What’s also worrisome is the amplifier effect. Even if “all an algorithm is doing” is reflecting and transmitting biases inherent in society, it’s also amplifying and perpetuating them on a much larger scale than your friendly neighborhood racist. And that’s the bigger issue. [...] even if the algorithm isn’t creating bias, it’s creating a feedback loop that has powerful perception effects.
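That amplifier effect is easy to see in a toy model (entirely hypothetical numbers): suppose a system's next-round exposure of some content tracks its current share of clicks, plus a small popularity boost. Any initial skew, however it got there, compounds:

```python
def feedback_loop(initial_share, rounds, boost=0.1):
    """Toy amplifier: each round, exposure nudges the dominant side's
    share further from parity by `boost` times its current deviation."""
    share = initial_share
    history = [share]
    for _ in range(rounds):
        share = share + boost * (share - 0.5)  # exposure follows popularity
        share = min(max(share, 0.0), 1.0)      # shares stay within [0, 1]
        history.append(share)
    return history
```

A 55/45 split drifts toward near-total dominance, while a perfectly balanced 50/50 input never moves. The system "only reflects" its input, yet the outcome is far more lopsided than what it started from, which is exactly the perception effect the post describes.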
feedback  bias  racism  algorithms  software  systems  society 
april 2016 by jm
East of Palo Alto’s Eden
What if Silicon Valley had emerged from a racially integrated community?

Would the technology industry be different? 

Would we?

And what can the technology industry do now to avoid repeating the mistakes of the past?


Amazing article -- this is the best thing I've ever read on TechCrunch: the political history of race in Silicon Valley and East Palo Alto.
racism  politics  history  race  silicon-valley  palo-alto  technology  us-politics  via:burritojustice 
january 2015 by jm
The Double Identity of an "Anti-Semitic" Commenter
Hasbara out of control. This is utterly nuts.
His intricate campaign, which he has admitted to Common Dreams, included posting comments by a screen name, "JewishProgressive," whose purpose was to draw attention to and denounce the anti-Semitic comments that he had written under many other screen names. The deception was many-layered. At one point he had one of his characters charge that the anti-Semitic comments and the criticism of the anti-Semitic comments must be written by "internet trolls who have been known to impersonate anti-Semites in order to then double-back and accuse others of supporting anti-Semitism"--exactly what he was doing.
hasbara  israel  trolls  propaganda  web  racism  comments  anonymity  commondreams 
august 2014 by jm
No, Nate, brogrammers may not be macho, but that’s not all there is to it
Great essay on sexism in tech, "brogrammer" culture, "clubhouse chemistry", outsiders, weird nerds and exclusion:
Every group, including the excluded and disadvantaged, create cultural capital and behave in ways that simultaneously create a sense of belonging for them in their existing social circle while also potentially denying them entry into another one, often at the expense of economic capital. It’s easy to see that wearing baggy, sagging pants to a job interview, or having large and visible tattoos in a corporate setting, might limit someone’s access. These are some of the markers of belonging used in social groups that are often denied opportunities. By embracing these markers, members of the group create real barriers to acceptance outside their circle even as they deepen their peer relationships. The group chooses to adopt values that are rejected by the society that’s rejecting them. And that’s what happens to “weird nerd” men as well—they create ways of being that allow for internal bonding against a largely exclusionary backdrop.


(via Bryan O'Sullivan)
nerds  outsiders  exclusion  society  nate-silver  brogrammers  sexism  racism  tech  culture  silicon-valley  essays  via:bos31337 
march 2014 by jm
Roma, Racism And Tabloid Policing: Interview With Gary Younge : rabble
[This case] shows the link between the popular and the state. This is tabloid journalism followed by tabloid policing.
It’s also completely ignorant. I wrote my article on the Roma after covering the community for a week. I thought, “that’s interesting – there’s a range of phenotypes, ways of looking, that include Roma.” I mentioned two blonde kids by chance.
I mentioned that Roma are more likely to speak the language of the country they’re in than Romani, more likely to have the religion of the country they’re in. But they have the basic aspect that is true for all identities – they know each other and other people know them.
It’s not like I’m an expert on the Roma. I was covering them for a week and after the second day I knew Roma children had blonde hair and blue eyes.
These people who took that kid away knew nothing. And on that basis they abducted a child.
roma  racism  ireland  gary-younge  tabloid  journalist  children  hse  gardai 
october 2013 by jm
