jm + racism

The Immortal Myths About Online Abuse – Humane Tech – Medium
After building online communities for two decades, we’ve learned how to fight abuse. It’s a solvable problem. We just have to stop repeating the same myths as excuses not to fix things.


Here are the 8 myths Anil Dash picks out:

1. False: You can’t fix abusive behavior online.

2. False: Fighting abuse hurts free speech!

3. False: Software can detect abuse using simple rules.

4. False: Most people say “abuse” when they just mean criticism.

5. False: We just need everybody to use their “real” name.

6. False: Just charge a dollar to comment and that’ll fix things.

7. False: You can call the cops! If it’s not illegal, it’s not harmful.

8. False: Abuse can be fixed without dedicated resources.
abuse  comments  community  harassment  racism  reddit  anil-dash  free-speech 
5 weeks ago by jm
"You Can't Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech"

In 2015, Reddit closed several subreddits—foremost among them r/fatpeoplehate and r/CoonTown—due to violations of Reddit’s anti-harassment policy. However, the effectiveness of banning as a moderation approach remains unclear: banning might diminish hateful behavior, or it may relocate such behavior to different parts of the site.

We study the ban of r/fatpeoplehate and r/CoonTown in terms of its effect on both participating users and affected subreddits. Working from over 100M Reddit posts and comments, we generate hate speech lexicons to examine variations in hate speech usage via causal inference methods. We find that the ban worked for Reddit. More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage—by at least 80%. Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown “migrants,” those subreddits saw no significant changes in hate speech usage. In other words, other subreddits did not inherit the problem. We conclude by reflecting on the apparent success of the ban, discussing implications for online moderation, Reddit and internet communities more broadly.


(Via Anil Dash)
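
A minimal sketch of the measurement idea (my own illustration under assumed names and data layout, not the paper's code): score each user's comments against a hate-speech lexicon, then compare how the banned-subreddit users' rates change around the ban against a matched control group's change, difference-in-differences style.

```python
import re

# Placeholder lexicon: the study derives its lexicons from the banned
# subreddits' own text, so these entries are purely illustrative.
HATE_LEXICON = {"term1", "term2"}
TOKEN = re.compile(r"[a-z']+")

def hate_rate(comments):
    """Fraction of tokens across a user's comments matching the lexicon."""
    hits = total = 0
    for text in comments:
        tokens = TOKEN.findall(text.lower())
        total += len(tokens)
        hits += sum(t in HATE_LEXICON for t in tokens)
    return hits / total if total else 0.0

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Treated users' change in hate rate minus the control group's change
    over the same window; negative means usage fell after the ban."""
    return ((hate_rate(treated_post) - hate_rate(treated_pre))
            - (hate_rate(control_post) - hate_rate(control_pre)))
```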
abuse  reddit  research  hate-speech  community  moderation  racism  internet 
5 weeks ago by jm
After Charlottesville, I Asked My Dad About Selma
Dad told me that he didn’t think I was going to have to go through what he went through, but now he can see that he was wrong. “This fight is a never-ending fight,” he said. “There’s no end to it. I think after the ‘60s, the whole black revolution, Martin Luther King, H. Rap Brown, Stokely Carmichael and all the rest of the people, after that happened, people went to sleep,” he said. “They thought, ‘this is over.’”
selma  charlottesville  racism  nazis  america  race  history  civil-rights  1960s 
9 weeks ago by jm
Google’s Response to Employee’s Anti-Diversity Manifesto Ignores Workplace Discrimination Law – Medium
A workplace-discrimination lawyer writes:
Stray remarks are not enough. But a widespread workplace discussion of whether women engineers are biologically capable of performing at the same level as their male counterparts could suffice to create a hostile work environment. As another example, envision the racial hostility of a workplace where employees, as Google put it, “feel safe” to espouse their “alternative view” that their African-American colleagues are not well-represented in management positions because they are not genetically predisposed for leadership roles. In short, a workplace where people “feel safe sharing opinions” based on gender (or racial, ethnic or religious) stereotypes may become so offensive that it legally amounts to actionable discrimination.
employment  sexism  workplace  discrimination  racism  misogyny  women  beliefs 
10 weeks ago by jm
Everybody lies: how Google search reveals our darkest secrets | Technology | The Guardian
What can we learn about ourselves from the things we ask online? US data scientist Seth Stephens-Davidowitz analysed anonymous Google search results, uncovering disturbing truths about [America's] desires, beliefs and prejudices


Fascinating. I find it equally interesting how flawed the existing methodologies for polling and surveying are, compared to Google's data, according to this piece.
science  big-data  google  lying  surveys  polling  secrets  data-science  america  racism  searching 
july 2017 by jm
Artificial intelligence is ripe for abuse, tech researcher warns: 'a fascist's dream' | Technology | The Guardian
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

In the Chinese research it turned out that the faces of criminals were more unusual than those of law-abiding citizens. “People who had dissimilar faces were more likely to be seen as untrustworthy by police and judges. That’s encoding bias,” Crawford said. “This would be a terrifying system for an autocrat to get his hand on.” [...]

With AI this type of discrimination can be masked in a black box of algorithms, as appears to be the case with a company called Faception, for instance, a firm that promises to profile people’s personalities based on their faces. In its own marketing material, the company suggests that Middle Eastern-looking people with beards are “terrorists”, while white looking women with trendy haircuts are “brand promoters”.
bias  ai  racism  politics  big-data  technology  fascism  crime  algorithms  faception  discrimination  computer-says-no 
march 2017 by jm
Parable of the Polygons - a playable post on the shape of society
Our cute segregation sim is based off the work of Nobel Prize-winning game theorist, Thomas Schelling. Specifically, his 1971 paper, Dynamic Models of Segregation. We built on top of this, and showed how a small demand for diversity can desegregate a neighborhood. In other words, we gave his model a happy ending.
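
To make the mechanics concrete, here is a toy sketch of Schelling's dynamic (my own illustration, not the site's code; grid size and thresholds are assumptions): agents move to an empty cell when fewer than a third of their neighbours share their type, and the post's "happy ending" corresponds to one extra rule, moving when no neighbour differs.

```python
import random

SIZE, EMPTY_FRAC, INTOLERANCE = 20, 0.1, 1 / 3  # assumed toy parameters

def make_grid():
    cells = ["A", "B", None]  # two agent types plus empty cells
    weights = [(1 - EMPTY_FRAC) / 2, (1 - EMPTY_FRAC) / 2, EMPTY_FRAC]
    return [[random.choices(cells, weights)[0] for _ in range(SIZE)]
            for _ in range(SIZE)]

def unhappy(grid, r, c, want_diversity=False):
    me = grid[r][c]
    if me is None:
        return False
    neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)]
    occupied = sum(n is not None for n in neighbours)
    if not occupied:
        return False
    similar = sum(n == me for n in neighbours) / occupied
    # Schelling's rule: move if too few neighbours are like you.
    # The post's "happy ending": also move if *no* neighbour is different.
    return similar < INTOLERANCE or (want_diversity and similar == 1.0)

def step(grid, want_diversity=False):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is None]
    for r in range(SIZE):
        for c in range(SIZE):
            if empties and unhappy(grid, r, c, want_diversity):
                er, ec = empties.pop(random.randrange(len(empties)))
                grid[er][ec], grid[r][c] = grid[r][c], None
                empties.append((r, c))

grid = make_grid()
for _ in range(50):                      # even mild intolerance segregates;
    step(grid, want_diversity=True)      # the diversity rule re-mixes the grid
```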
games  society  visualization  diversity  racism  bias  thomas-schelling  segregation 
february 2017 by jm
Banks biased against black fraud victims
We raised the issue of discrimination in 2011 with one of the banks and with the Commission for Racial Equality, but as no-one was keeping records, nothing could be proved, until today. How can this discrimination happen? Well, UK rules give banks a lot of discretion to decide whether to refund a victim, and the first responders often don’t know the full story. If your HSBC card was compromised by a skimmer on a Tesco ATM, there’s no guarantee that Tesco will have told anyone (unlike in America, where the law forces Tesco to tell you). And the fraud pattern might be something entirely new. So bank staff end up making judgement calls like “Is this customer telling the truth?” and “How much is their business worth to us?” This in turn sets the stage for biases and prejudices to kick in, however subconsciously. Add management pressure to cut costs, sometimes even bonuses for cutting them, and here we are.
discrimination  racism  fraud  uk  banking  skimming  security  fca 
january 2017 by jm
How a Machine Learns Prejudice - Scientific American
Agreed, this is a big issue.
If artificial intelligence takes over our lives, it probably won’t involve humans battling an army of robots that relentlessly apply Spock-like logic as they physically enslave us. Instead, the machine-learning algorithms that already let AI programs recommend a movie you’d like or recognize your friend’s face in a photo will likely be the same ones that one day deny you a loan, lead the police to your neighborhood or tell your doctor you need to go on a diet. And since humans create these algorithms, they're just as prone to biases that could lead to bad decisions—and worse outcomes.
These biases create some immediate concerns about our increasing reliance on artificially intelligent technology, as any AI system designed by humans to be absolutely "neutral" could still reinforce humans’ prejudicial thinking instead of seeing through it.
prejudice  bias  machine-learning  ml  data  training  race  racism  google  facebook 
january 2017 by jm
Founder of Google X has no concept of how machine learning as policing tool risks reinforcing implicit bias
This is shocking:
At the end of the panel on artificial intelligence, a young black woman asked [Sebastian Thrun, CEO of the education startup Udacity, who is best known for founding Google X] whether bias in machine learning “could perpetuate structural inequality at a velocity much greater than perhaps humans can.” She offered the example of criminal justice, where “you have a machine learning tool that can identify criminals, and criminals may disproportionately be black because of other issues that have nothing to do with the intrinsic nature of these people, so the machine learns that black people are criminals, and that’s not necessarily the outcome that I think we want.”
In his reply, Thrun made it sound like her concern was one about political correctness, not unconscious bias. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. Sometimes they’re not politically correct,” Thrun said. “When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this.”


"the truth"! Jesus. We are fucked
google  googlex  bias  racism  implicit-bias  machine-learning  ml  sebastian-thrun  udacity  inequality  policing  crime 
october 2016 by jm
How Internet Trolls Won the 2016 Presidential Election
Because this was a novel iteration of online anti-Semitic culture, to the normie media it was worthy of deeply concerned coverage that likely gave a bunch of anti-Semites, trolls, and anti-Semitic trolls exactly the attention and visibility they craved. All without any of them having to prove they were actually involved, meaningfully, in anti-Semitic politics. That’s just a lot of power to give to a group of anonymous online idiots without at least knowing how many of them are 15-year-old dweebs rather than, you know, actual Nazis. [...]

In the long run, as journalistic coverage of the internet is increasingly done by people with at least a baseline understanding of web culture, that coverage will improve. For now, though, things are grim: It’s hard not to feel like journalists and politicos are effectively being led around on a leash by a group of anonymous online idiots, many of whom don’t really believe in anything.
internet  journalism  politics  4chan  8chan  channers  trolls  nazis  racism  pepe-the-frog  trump 
september 2016 by jm
NPR Website To Get Rid Of Comments
Sadly, this makes sense and I'd have to agree.
Mike Durio, of Phoenix, seemed to sum it up in an email to my office back in April. "Have you considered doing away with the comments sections, or tighter moderation?" he wrote. "The comments have devolved into the Punch-and-Judy-Fest of moronic, un-illuminating observations and petty insults I've seen on pretty much every other Internet site that allows comments." He added, "This is not in keeping with NPR's take-a-step-back, take-a-deep-breath reporting," and noted, "Now, thread hijacking and personal insults are becoming the stock in trade. Frequent posters use the forums to duke it out with one another."

A user named Mary, from Raleigh, N.C., wrote to implore: "Remove the comments section from your articles. The rude, hateful, racist, judgmental comments far outweigh those who may want to engage in some intelligent sideline conversation about the actual subject of the article. I am appalled at the amount of 'free hate' that is found on a website that represents honest and unbiased reporting such as NPR. What are you really gaining from all of these rabid comments other than proof that a sad slice of humanity preys on the weak while spreading their hate?"
abuse  comments  npr  racism  web  discussion 
august 2016 by jm
LinkedIn called me a white supremacist
Wow. Massive, massive algorithm fail.
On the morning of May 12, LinkedIn, the networking site devoted to making professionals “more productive and successful,” emailed scores of my contacts and told them I’m a professional racist. It was one of those updates that LinkedIn regularly sends its users, algorithmically assembled missives about their connections’ appearances in the media. This one had the innocent-sounding subject, “News About William Johnson,” but once my connections clicked in, they saw a small photo of my grinning face, right above the headline “Trump put white nationalist on list of delegates.” [.....] It turns out that when LinkedIn sends these update emails, people actually read them. So I was getting upset. Not only am I not a Nazi, I’m a Jewish socialist with family members who were imprisoned in concentration camps during World War II. Why was LinkedIn trolling me?
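
A guess at the failure mode, as a purely illustrative sketch (nothing here is LinkedIn's actual pipeline): if news mentions are matched to contacts on the name string alone, distinct people who share a common name inevitably get conflated.

```python
# Purely illustrative: match news mentions to contacts on the name alone.
contacts = [{"name": "William Johnson", "note": "Jewish socialist, not a Nazi"}]
news_mentions = [
    ("Trump put white nationalist on list of delegates", "William Johnson"),
]

for headline, person_named in news_mentions:
    for contact in contacts:
        # The only signal is the name string: no employer, location, or
        # other disambiguating feature, so namesakes collide.
        if contact["name"] == person_named:
            print(f'Sending "News About {contact["name"]}": {headline}')
```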
ethics  fail  algorithm  linkedin  big-data  racism  libel 
may 2016 by jm
Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. - ProPublica
holy crap, this is dystopian:
The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wisconsin. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal.
But Judge James Babler had seen Zilly’s scores. Northpointe’s software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. “When I look at the risk assessment,” Babler said in court, “it is about as bad as it could be.”
Then Babler overturned the plea deal that had been agreed on by the prosecution and defense and imposed two years in state prison and three years of supervision.
dystopia  law  policing  risk  risk-assessment  northpointe  racism  fortune-telling  crime 
may 2016 by jm
“Racist algorithms” and learned helplessness
Whenever I’ve had to talk about bias in algorithms, I’ve tried to be careful to emphasize that it’s not that we shouldn’t use algorithms in search, recommendation and decision making. It’s that we often just don’t know how they’re making their decisions to present answers, make recommendations or arrive at conclusions, and it’s this lack of transparency that’s worrisome. Remember, algorithms aren’t just code.

What’s also worrisome is the amplifier effect. Even if “all an algorithm is doing” is reflecting and transmitting biases inherent in society, it’s also amplifying and perpetuating them on a much larger scale than your friendly neighborhood racist. And that’s the bigger issue. [...] even if the algorithm isn’t creating bias, it’s creating a feedback loop that has powerful perception effects.
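
The amplifier point lends itself to a toy simulation (my own construction, not the post's): two districts with identical true rates, a model that over-allocates attention to whichever district its historical records rank higher, and observations that scale with attention rather than reality. A small initial skew in the records compounds every round.

```python
def feedback_loop(a=51.0, b=49.0, rounds=15, patrols=100.0):
    """Districts A and B have identical true rates; A starts with a tiny
    skew in the historical records. Each round the model over-allocates
    attention to the apparent leader, and what gets observed scales with
    attention, then feeds straight back in as new data."""
    for t in range(rounds):
        share = a / (a + b)                             # predict from history
        alloc = share**2 / (share**2 + (1 - share)**2)  # over-weight the leader
        a += patrols * alloc                            # found where you look
        b += patrols * (1 - alloc)
        print(f"round {t:2d}: model's share for district A = {share:.3f}")

feedback_loop()  # A's recorded share drifts steadily upward from 0.51,
                 # even though the underlying rates never differed
```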
feedback  bias  racism  algorithms  software  systems  society 
april 2016 by jm
East of Palo Alto’s Eden
What if Silicon Valley had emerged from a racially integrated community?

Would the technology industry be different? 

Would we?

And what can the technology industry do now to avoid repeating the mistakes of the past?


Amazing article -- this is the best thing I've ever read on TechCrunch: the political history of race in Silicon Valley and East Palo Alto.
racism  politics  history  race  silicon-valley  palo-alto  technology  us-politics  via:burritojustice 
january 2015 by jm
The Double Identity of an "Anti-Semitic" Commenter
Hasbara out of control. This is utterly nuts.
His intricate campaign, which he has admitted to Common Dreams, included posting comments under a screen name, "JewishProgressive," whose purpose was to draw attention to and denounce the anti-Semitic comments that he had written under many other screen names. The deception was many-layered. At one point he had one of his characters charge that the anti-Semitic comments and the criticism of the anti-Semitic comments must be written by "internet trolls who have been known to impersonate anti-Semites in order to then double-back and accuse others of supporting anti-Semitism"--exactly what he was doing.
hasbara  israel  trolls  propaganda  web  racism  comments  anonymity  commondreams 
august 2014 by jm
No, Nate, brogrammers may not be macho, but that’s not all there is to it
Great essay on sexism in tech, "brogrammer" culture, "clubhouse chemistry", outsiders, weird nerds and exclusion:
Every group, including the excluded and disadvantaged, create cultural capital and behave in ways that simultaneously create a sense of belonging for them in their existing social circle while also potentially denying them entry into another one, often at the expense of economic capital. It’s easy to see that wearing baggy, sagging pants to a job interview, or having large and visible tattoos in a corporate setting, might limit someone’s access. These are some of the markers of belonging used in social groups that are often denied opportunities. By embracing these markers, members of the group create real barriers to acceptance outside their circle even as they deepen their peer relationships. The group chooses to adopt values that are rejected by the society that’s rejecting them. And that’s what happens to “weird nerd” men as well—they create ways of being that allow for internal bonding against a largely exclusionary backdrop.


(via Bryan O'Sullivan)
nerds  outsiders  exclusion  society  nate-silver  brogrammers  sexism  racism  tech  culture  silicon-valley  essays  via:bos31337 
march 2014 by jm
Roma, Racism And Tabloid Policing: Interview With Gary Younge : rabble
[This case] shows the link between the popular and the state. This is tabloid journalism followed by tabloid policing.
It’s also completely ignorant. I wrote my article on the Roma after covering the community for a week. I thought, “that’s interesting – there’s a range of phenotypes, ways of looking, that include Roma.” I mentioned two blonde kids by chance.
I mentioned that Roma are more likely to speak the language of the country they’re in than Romani, more likely to have the religion of the country they’re in. But they have the basic aspect that is true for all identities – they know each other and other people know them.
It’s not like I’m an expert on the Roma. I was covering them for a week and after the second day I knew Roma children had blonde hair and blue eyes.
These people who took that kid away knew nothing. And on that basis they abducted a child.
roma  racism  ireland  gary-younge  tabloid  journalist  children  hse  gardai 
october 2013 by jm
