Google Bombs Are Our New Normal | Backchannel
Google had a problem. Beginning in 2003, a group of users had figured out how to game the site’s search results. This phenomenon was known as a “Google bomb”— a trick played by toying with Google’s algorithm. If enough sites linked to a page using a particular phrase as the link text, that page could rise in the rankings for that phrase. The pranks were often elaborate, like when a search for “miserable failure” turned up links to information about then-president George W. Bush. It seemed like the query represented Google’s editorial viewpoint; instead, it was a prank.

By early 2007, Google had all but vanquished the problem with the usual triage. A phalanx of technology and product people would huddle with the PR team to uncover the technical issue causing the bad outcome. They would work on a fix, or a workaround, and issue an apologetic explanation. The engineers might tackle a long-term adjustment to the algorithm addressing the root cause. Then it was back to business as usual.

These problems—often caused by hackers or pranksters, and occasionally triggered by people with truly bad intentions—weren’t everyday situations. They were edge cases.

But now, we have a new normal. Manipulating search results today seems more like an invasion than a joke. As the October 1 massacre in Las Vegas unfolded, Google displayed “news” results from rumor mills like 4Chan, and Facebook promulgated rumors and conspiracy theories, sullying the service on which, according to Pew Research, 45 percent of American adults get their news. Meanwhile, the rapid-fire nature of Twitter led users to pass along false information about missing people in the aftermath.
All of these cases signify the central place a number of digital services have staked out in our lives. We trust our devices: We trust them to surface the correct sources in our information feeds, we trust them to deliver our news, and we trust them to surface the opinions of our friends. So the biggest and most influential platforms falling prey to manipulations upsets that trust—and the order of things.

It’s hard to square the global power, reach, and ubiquity of these massive platforms with their youth: Google just turned 19. Facebook is 13. Twitter is 11 and a half. (None, in other words, out of their teens!) Until recently, widespread digital malfeasance was relatively rare on these young platforms. But in a world that increasingly seems dystopian, we now expect security breaches, hacks, and purposeful fakery—all of it more or less constantly across the online services and tools we use. Whether the aim is financial, political, or even just hacking for hacking’s sake, the fact that so many of us live and work online means we are, collectively, an attractive and very large target.

If the companies providing the services we rely on want to keep or regain our trust, this new normal warrants a good deal more of their attention. When a problem occurs, the explanations, as I’ve written, have to reach us quickly and be forthright. And for the technological fixes, a short-lived war room and an apologetic statement no longer do the trick.

Now that we seem to be in a never-ending arms race with miscreants ranging from lone rangers to state-run disinformation machines, we’re going to need more than an army of brilliant engineers patching holes and building workarounds. Companies need to build an ongoing approach—something like a Federation, through which the massive platforms and services we rely on routinely communicate and coordinate, despite the fact that they are also competitors.
These massive global platforms are always attractive targets for sophisticated hackers and state-sponsored bad actors. That’s why, I’ve been told, it’s not unusual for security engineers from rival businesses to stay in touch when they see unusual behavior or patterns; they share the information. This is one area where a federation approach is working, however informally.

Now, we need companies to extend it and stay on top of misdeeds and indicators of odd patterns from the get-go. If Twitter sees spikes in new account signups from, say, Macedonia or in Tagalog, it should note that to the federation, so others can review their systems. If Google sees an unusual spike in search queries for an uncommon phrase, it’s worth reporting, so perhaps Facebook can look for recent posts that use similar language. And so on.
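What counts as a reportable "spike" could be as simple as an anomaly threshold over a rolling baseline. The sketch below is purely illustrative — no platform has described its actual detection pipeline, and the function name, the k-sigma rule, and the sample counts are all my own assumptions:

```python
# Illustrative sketch: flag a count as a spike when it exceeds the
# trailing mean by more than k standard deviations. Not any platform's
# real pipeline; just the simplest version of "notice an unusual jump."
from statistics import mean, stdev

def is_spike(history, current, k=3.0):
    """Return True if `current` sits more than k sigma above the
    mean of `history` (a list of recent per-period counts)."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current > mu  # flat baseline: any increase stands out
    return current > mu + k * sigma

# Hypothetical daily new-account signups from one region:
signups = [120, 131, 118, 125, 122, 129, 124]
print(is_spike(signups, 640))  # sharp jump, worth flagging to partners
print(is_spike(signups, 130))  # within normal variation
```

In a real federation, the interesting part wouldn't be the math but the plumbing: a shared, trusted channel where a flagged signal from one platform triggers a review on the others.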
One reason why the federation plan is necessary: No single company, no matter how massive and wealthy, can hire its way out of a steady gusher of bad information or false and manipulative ads. Mark Zuckerberg’s announcements in May and early October that Facebook will hire a total of 4,000 people to “monitor content”—flag, escalate, and remove problematic items—don’t seem very savvy for a tech company priding itself on its ability to scale. These attempts at a solution reminded me of the US military’s repeated decisions over the years to send many thousands of troops in order to “win” in Vietnam. (We know how that turned out.)

Another issue with throwing people at this kind of problem: The 4,000 content flaggers (or whatever number Facebook might ultimately hire) are likely to be slotted as fairly low-level employees, because the nature of such work is repetitive and thankless. So it’s not unusual for people in these roles to experience burnout and even PTSD from the awfulness of what they are seeing. As I say, this doesn’t seem like a useful approach.
Recent calls for these companies to take greater responsibility and add human intervention make a lot of sense, and I’d suggest these are also elements of a Federation strategy. I hope for a time when, for example, experienced editors—who know how to assess content for accuracy, research, and presentation—are an integral part of product and engineering teams across all of these platforms, all the time.

I don’t know if these sorts of human and technical adjustments are enough to stem the tide of all the digital mishegas out there. But I do know that throwing more bodies at the miasma of disinformation doesn’t work. Neither does saying “we’re just a platform.”

The era of the edge case—the exception, the outlier—is over. Welcome to our time, where trouble is forever brewing.
Google  DasGeileNeueInternet  db 
6 days ago by walt74
Streams: a new general purpose data structure in Redis. - <antirez> (added October 09, 2017 at 10:13AM)
data-structures  db  redis 
7 days ago by xenocid
#ElectionWatch: Final Hours Fake News Hype in Germany
“suggests that it is a fake account, rather than a genuine user.”

Sahrer is a well-known German troll, neither fake nor right-wing.


Ahead of Germany’s parliamentary election on Sunday, online supporters of the far-right Alternative für Deutschland (AfD) party began warning their voter base of possible election fraud and calling for observers. On Saturday, the eve of the election, their efforts increased, driven by anonymous troll accounts and boosted by a Russian-language botnet.
@DFRLab investigated some of the claims made and the bots which amplified them.
The probable fake
On the morning of September 22, an apparently left-wing Twitter user named @von_Sahringen claimed to have been made an election helper, and that as a result, “AfD ballots will be made invalid” — a clear indication of fraud.

The post from @von_Sahringen, archived on September 23, 2017. (Source: Twitter)
The post triggered a swift response from a number of users, including Germany’s official election bureau, which pointed out that election fraud is a punishable offense.

@Wahlleiter_Bund is the official account of Germany’s election bureau. (Source: Twitter)
Four hours later, the same user tweeted that they had been visited by the police and their status as an election helper revoked.

“The police have just been here and said I can’t be an election helper any more.” Archived on September 23, 2017. (Source: Twitter)
By then, the post had triggered a Twitter storm from AfD supporters, using the hashtag #Wahlbetrug (“election fraud”).

“Left-wing pickle @von_Sahringen is already looking forward to being allowed to join the #ElectionFraud.” Archived on September 23, 2017. (Source: Twitter)
The above tweet from user @Hartes_Geld, for example, was shared over 300 times.
However, a quick reverse image search of the @von_Sahringen account suggests that it is a fake account, rather than a genuine user.
db  Trolls  BTW2017  Election 
7 days ago by walt74
How Russia recruited wannabe YouTube stars to convince Black Lives Matter activists to vote for Trump
Two purported Black Lives Matter supporters who endorsed President Donald Trump have been outed as propagandists employed by the Kremlin.

The Daily Beast reports that two wannabe social media stars, who go by the names of Williams and Kalvin Johnson, have recently seen their Facebook and Twitter accounts suspended because they were found to have been part of a Russian propaganda operation to boost support for Trump’s candidacy among black Americans.

In their videos, the two men regularly attack former Democratic nominee Hillary Clinton as an “old racist b*tch,” while praising Trump for being a pragmatic businessman who couldn’t have succeeded if he’d actually been a racist.
Russia  BlackLivesMatter  DonaldTrump  Facebook  db  Propaganda  Election 
8 days ago by walt74

