inspiral + moderation

Facebook begins testing ways to flag fake news
Facebook will try out new ways to report and flag fake news this week, setting up a partnership with fact-checking organisations to try to address the “worst of the worst” hoaxes spread by spammers. 

The world’s largest social network is testing several ways to limit the rapid proliferation of fake news stories. The problem was highlighted by posts that went viral during the US presidential election campaign, such as a false report that the Pope had endorsed Donald Trump and the “Pizzagate” story that claimed Democrats were involved in a paedophile ring. 
Facebook  webjournalism  fakenews  strategy  moderation  Poynter  partnership  FinancialTimes  2016 
december 2016 by inspiral
Google studying ways to deal with offensive search suggestions & results
Facing criticism over "fake news," inappropriate search suggestions and more, the search company is looking for long-term and comprehensive solutions.
Google  search  moderation  algorithms  innovation  autocomplete  fakenews  strategy  SearchEngineLand  2016 
december 2016 by inspiral
Twitter Users Can Finally Fight Trolls With Tools to Mute Keywords, Phrases and Conversations | Adweek
Today, the company says it's rolling out a way for users not just to block other accounts, but also to "mute" keywords, phrases and entire conversations at the notification level. In a blog post, Twitter said the feature will be rolled out in the "coming days" to fight the "growing trend" of users taking advantage of its open platform.
And while the "mute" feature isn't entirely new, the move signals that Twitter may finally be taking these concerns more seriously, coming less than a week after an election cycle that saw journalists, celebrities and everyday people harassed to the point of leaving the platform altogether. (This summer, actress and comedian Leslie Jones quit Twitter after being attacked with racist and sexist remarks.)
Twitter  moderation  socialmedia  trolling  bullying  hatespeech  AdWeek  2016 
november 2016 by inspiral
Twitter Rolls Out New And Long-Awaited Anti-Harassment Tools - BuzzFeed News
For now, the product update appears to be centered on the notification experience, which has been a minefield for victims of serial harassment on the platform. While a mute feature has long been called for by those targeted by Twitter’s brutish underbelly, it’s also largely cosmetic — it hides abuse instead of fixing it. Although expanded mute tools will attempt to shield users from a deluge of unwanted interactions, the feature will do little to stop the underlying harassment itself.
As such, Twitter also announced it will add a new “hateful conduct” reporting option (when users report an “abusive or harmful” tweet, they’ll now see an option for “directing hate against a race, religion, gender, or orientation”). Similarly, the company is adding new “extensive” internal training for its support teams that deal with hateful harassment. According to the company, its Safety team support staff will undergo “special sessions on cultural and historical contextualization of hateful conduct” as well as refresher programs that will track how hate speech and abuse evolve on the platform (a necessary step, as many trolls have begun to create their own hateful code language with which to bypass traditional censors and filters).
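The parenthetical point about coded language is worth unpacking: exact-match keyword filters are trivially evaded by character substitution, which is why filtering systems typically normalise text before matching. A toy sketch, using an invented substitution table and blocked term rather than any platform's real rules:

```python
# Map common look-alike characters to the letters they stand in for.
# This table and the blocked term below are examples only.
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s",
})

def normalise(text: str) -> str:
    """Lowercase the text and collapse look-alike characters to plain letters."""
    return text.lower().translate(SUBSTITUTIONS)

def contains_blocked(text: str, blocked: set) -> bool:
    """Check the normalised text for any blocked term (simple substring match)."""
    norm = normalise(text)
    return any(term in norm for term in blocked)

blocked = {"troll"}
print(contains_blocked("what a TR0LL", blocked))  # True: "0" normalised to "o"
print(contains_blocked("what a troll", blocked))  # True
print(contains_blocked("nice thread", blocked))   # False
```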
Twitter  harassment  moderation  launch  review  racism  sexism  homophobia  Buzzfeed  2016 
november 2016 by inspiral
YouTube Creator Blog: New tools to shape conversations in your comments section
Your relationship with your community is what makes YouTube unique. Whether your fans are Nerdfighters or Mirfandas, they’ve created a close-knit bond with you and your content. We realize that comments play a key role in growing this connection and we’re dedicated to making your conversations with your community easier and more personal. We've been listening to your feedback and we’re excited to roll out new comment features, including:
Pinned comments: promote a specific comment by pinning it to the top of your feed. This lets you highlight great engagement from your fans or share information with your audience.
Creator hearts: show some love by giving a heart to your favorite comments. This is a new and easy way to acknowledge comments from your community.
Creator usernames: when you comment on your channel, your username will appear under the text with a pop of color around it so your viewers can easily tell that the comment is coming from you. If you are a verified creator, you will still have a verification checkmark appear beside your name.
community  moderation  redesign  Youtube  2016 
november 2016 by inspiral
Instagram Is Finally Letting Users Hide Inappropriate Comments | Adweek
Brands, celebrities and plenty of average people have all received their unfair share of abuse by anonymous (and non-anonymous) trolls on Instagram. Finally, the Facebook-owned company is taking a more effective approach to filtering unwanted remarks in a way that amounts to more than deleting or reporting comments.
Today, Instagram is rolling out a feature that lets the mobile photo- and video-sharing app's more than 500 million users moderate comments by filtering keywords that will hide comments that use them. The feature lets users create their own list of words or choose default options suggested by Instagram.
In a blog post published today, Instagram co-founder and CEO Kevin Systrom said the company is taking steps to foster a "positive place to express yourself."
"The beauty of the Instagram community is the diversity of its members. All different types of people—from diverse backgrounds, races, genders, sexual orientations, abilities and more—call Instagram home, but sometimes the comments on their posts can be unkind," Systrom wrote. "To empower each individual, we need to promote a culture where everyone feels safe to be themselves without criticism or harassment. It's not only my personal wish to do this, I believe it's also our responsibility as a company."
Instagram  socialmedia  moderation  trolling  launch  AdWeek  2016 
september 2016 by inspiral
Instagram gives marketers a new tool to muzzle haters - Digiday
Brand media accounts are frequent targets of customer complaints, offensive slurs and hashtag trolls. Now Instagram is hoping to contain that by giving users — including brands — greater control over the moderation of their comments.

As the Washington Post reported, the platform has started allowing “accounts with high volume comment threads” to filter their comment streams, and even turn off comments entirely. Instagram already lets accounts with a high volume of comments use a basic profanity filter and block out words or phrases commonly reported as offensive. But the new feature gives accounts more control, letting them identify and block the terms that affect them on an individual basis.

This would be particularly useful for brands that champion divisive issues such as body positivity, inclusivity and diversity, said Gary Nix, senior social strategist with iCrossing. “Those issues create both good and bad conversations, since people’s tempers tend to flare up more,” he said. “It can be effective in curtailing the negative side.”
Instagram  socialmedia  trolling  spam  utility  moderation  launch  Digiday  2016 
august 2016 by inspiral
Instagram seeks solution for the internet's troll problem
Campaign understands that Instagram is testing ways to manage abuse and spam, and it is trialling these methods on accounts with a high volume of comments – namely celebrities.

Instagram wouldn’t comment on whether it is testing anti-abuse tools with Taylor Swift, or whether this would give her the ability to remove all negative comments at will. 

But a spokeswoman said: "We’re always looking for ways to help people have a positive experience with comments on Instagram.

"We're currently focused on providing tools to improve accounts with the most high-volume comment threads, and we will use our learnings to continue to improve the comment experience on Instagram."
Instagram  socialmedia  trolling  spam  launch  moderation  utility  Campaign  2016 
august 2016 by inspiral
Instagram nixes naughty comments | TechCrunch
Comment reels can become cesspools, especially on celebrity social media posts that get replies by the thousands. But now Instagram is letting its new business pages take out the trash with a new Comment Moderation option. It “Blocks comments with words or phrases often reported as offensive from appearing on your posts.”

Instagram confirms to me that the feature rolled out yesterday and can be found in the settings of accounts that have turned on the Business Page option.
Instagram  comments  moderation  launch  community  socialmedia  Techcrunch  2016 
july 2016 by inspiral
How Reddit took on its own users – and won | Technology | The Guardian
Since 2006, the site insisted anything that wasn’t illegal should be tolerated. Under Ellen Pao’s brief leadership, all that changed.
Reddit  moderation  racism  sexism  management  EllenPao  review  Guardian  2015 
december 2015 by inspiral
The Future of Anonymity on the Internet Is Facebook Rooms | WIRED
Released last week, the new Facebook app is a place where you can chat with other like-minded people about most anything, from the World Series to 18th century playwrights, and because you needn’t use your real name when joining one of its chat rooms, you have a freedom to express yourself that you wouldn’t have on, say, the main Facebook app.

But at the same time, Mark Zuckerberg and company have committed to policing these rooms at the lowest level. If anything offensive appears in the app—hate speech, threats, spam, or graphic content—room moderators or Facebook itself can take it down. For Danielle Citron, a law professor at the University of Maryland and the author of Hate Crimes in Cyberspace, that is crucial.
FacebookRooms  Facebook  launch  anonymity  moderation  socialmedia  Wired  2014 
november 2014 by inspiral
The Fappening, Ebola-chan, revenge porn: Why isn’t 4chan’s founder accountable for 4chan’s crimes?
Responsibility for drawing this line lies only with Poole himself. Tech gadfly Anil Dash once wrote, “[I]f your website is full of assholes, it’s your fault.” Dash excoriated many of 4chan’s anonymous policies and those who share Poole’s hands-off attitude: “[T]ake some goddamn responsibility for what you unleash on the world.” Whether or not you agree with Poole’s views on freedom of speech (I myself am in fact sympathetic, if not in total agreement), Dash is right that Poole bears the ultimate responsibility for the standards—or lack thereof—set in place on 4chan. For all the bile directed at “4chan” and “4chan users,” very little of it has been directed at the single person with the ability to change the site’s standards and enforce them, should he so desire. It’s one thing to share a site with awful people; it’s another to make money off of them.
4Chan  ChristopherPoole  critique  management  moderation  Slate  2014 
september 2014 by inspiral
