rvenkat + zeynep.tufekci   15

YouTube, the Great Radicalizer - The New York Times
--With all due respect to Zeynep, implicit in her argument is a denial of agency among a society's denizens. I've noticed that she gives too much credit to the idea of a fair-minded, thoughtful, reasonable, ethical good citizen, and to the idea that most of the blame lies with private and public institutions. I remain very suspicious of this framing, and it continues to irk me.

The problem is a propensity to radicalize that is reinforced by recommendation algorithms. I suspect any non-zero propensity to polarize would be sufficient to radicalize certain individuals, no matter how carefully we design such algorithms. The question is whether one can demonstrate that such algorithms polarize even the _normal_ denizens. Yes, unregulated artificial intelligence gone wild is a problem, but so is extant natural stupidity.
polarization  radicalization  algorithms  ethics  networked_public_sphere  platform_studies  GAFA  zeynep.tufekci 
march 2018 by rvenkat
The Power of Algorithms and Algorithmic Power: Conceptualizing Machine Intelligence and Social Structure | Berkeley Institute for Data Science
It’s been just five years since a journal article about neural networks — a form of computer learning algorithm that uses large datasets to learn to classify input — broke through to the popular press. The article described how Google researchers had connected 16,000 computers and set this network loose on millions of images from YouTube — without supervision. The system invented the concept of a cat, and how to identify it. Since then, there has been an explosion in decision-making software that functions in a similar fashion: churning through large datasets to learn to identify and classify, without being given specific instructions on how to do so, and perhaps more importantly, without the human programmers having an understanding of how it actually functions. The era of machine intelligence has fully arrived, and it is accelerating. Much of the engineering world and scientific press has focused on whether such intelligence is like human intelligence, or if it ever will be. In this talk, I will instead explore what having such types of intelligence in the hands of power — governments, corporations, institutions — means. These systems bring about novel capabilities to the powerful at scale, threaten to displace many human tasks (because they can perform those tasks well enough), create new forms of privacy invasion through their inferential capabilities, introduce new error patterns we have neither statistical tools nor cultural or political institutions to deal with, incentivize massive surveillance because they only work well with massive datasets, and more. I will explore some of the technical aspects of these technologies and connect them directly to core questions of sociology, culture and politics. This event is co-sponsored by CITRIS and BIDS.

-- No video links. The abstract reads like a work- or book-in-progress.
zeynep.tufekci  algorithms  agency  authoritarianism  surveillance  inequality  sociology_of_technology 
march 2018 by rvenkat
The Machines Are Coming - The New York Times
-- Interesting argument: technology enables the transfer of power and control back to employers. Is this how economic historians view the aftermath of the industrial revolution?
zeynep.tufekci  automation  robots  artificial_intelligence  labor  cybernetics  sociology_of_technology  NYTimes 
march 2017 by rvenkat
Does a Protest’s Size Matter? - The New York Times
-- An interesting viewpoint on the impact of falling coordination costs on the size of protests, the value of political signaling, the spread of ideas, and so on. This argument might have implications for a state's ability to control dissenters among its denizens.
democracy  collective_intention  social_movements  social_networks  collective_cognition  us_politics  zeynep.tufekci  dmce  networks  teaching  for_friends 
january 2017 by rvenkat
Twitter and Tear Gas | Yale University Press
A firsthand account and incisive analysis of modern protest, revealing internet-fueled social movements’ greatest strengths and frequent challenges

To understand a thwarted Turkish coup, an anti–Wall Street encampment, and a packed Tahrir Square, we must first comprehend the power and the weaknesses of using new technologies to mobilize large numbers of people. An incisive observer, writer, and participant in today’s social movements, Zeynep Tufekci explains in this accessible and compelling book the nuanced trajectories of modern protests—how they form, how they operate differently from past protests, and why they have difficulty persisting in their long-term quests for change.

Tufekci speaks from direct experience, combining on-the-ground interviews with insightful analysis. She describes how the internet helped the Zapatista uprisings in Mexico, the necessity of remote Twitter users to organize medical supplies during Arab Spring, the refusal to use bullhorns in the Occupy Movement that started in New York, and the empowering effect of tear gas in Istanbul’s Gezi Park. These details from life inside social movements complete a moving investigation of authority, technology, and culture—and offer essential insights into the future of governance.
zeynep.tufekci  book  democracy  freedom_of_speech  cultural_cognition  sociology  social_media  social_movements  collective_intention  collective_cognition  social_networks 
november 2016 by rvenkat
[1403.7400] Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls
Large-scale databases of human activity in social media have captured scientific and policy attention, producing a flood of research and discussion. This paper considers methodological and conceptual challenges for this emergent field, with special attention to the validity and representativeness of social media big data analyses. Persistent issues include the over-emphasis of a single platform, Twitter, sampling biases arising from selection by hashtags, and vague and unrepresentative sampling frames. The socio-cultural complexity of user behavior aimed at algorithmic invisibility (such as subtweeting, mock-retweeting, use of "screen captures" for text, etc.) further complicates interpretation of big data social media. Other challenges include accounting for field effects, i.e. broadly consequential events that do not diffuse only through the network under study but affect the whole society. The application of network methods from other fields to the study of human social activity may not always be appropriate. The paper concludes with a call to action on practical steps to improve our analytic capacity in this promising, rapidly-growing field.
social_media  big_data  methods  critique  zeynep.tufekci 
may 2016 by rvenkat