robertogreco + algorithms   99

Language Is Migrant - South Magazine Issue #8 [documenta 14 #3] - documenta 14
"Language is migrant. Words move from language to language, from culture to culture, from mouth to mouth. Our bodies are migrants; cells and bacteria are migrants too. Even galaxies migrate.

What is then this talk against migrants? It can only be talk against ourselves, against life itself.

Twenty years ago, I opened up the word “migrant,” seeing in it a dangerous mix of Latin and Germanic roots. I imagined “migrant” was probably composed of mei, Latin for “to change or move,” and gra, “heart” from the Germanic kerd. Thus, “migrant” became “changed heart,”
a heart in pain,
changing the heart of the earth.

The word “immigrant” says, “grant me life.”

“Grant” means “to allow, to have,” and is related to an ancient Proto-Indo-European root: dhe, the mother of “deed” and “law.” So too, sacerdos, performer of sacred rites.

What is the rite performed by millions of people displaced and seeking safe haven around the world? Letting us see our own indifference, our complicity in the ongoing wars?

Is their pain powerful enough to allow us to change our hearts? To see our part in it?

I “wounder,” said Margarita, my immigrant friend, mixing up wondering and wounding, a perfect embodiment of our true condition!

Vicente Huidobro said, “Open your mouth to receive the host of the wounded word.”

The wound is an eye. Can we look into its eyes?
my specialty is not feeling, just
looking, so I say:
(the word is a hard look.)
—Rosario Castellanos

I don’t see with my eyes: words
are my eyes.
—Octavio Paz

In 1980, I was in exile in Bogotá, where I was working on my “Palabrarmas” project, a way of opening words to see what they have to say. My early life as a poet was guided by a line from Novalis: “Poetry is the original religion of mankind.” Living in the violent city of Bogotá, I wanted to see if anybody shared this view, so I set out with a camera and a team of volunteers to interview people in the street. I asked everybody I met, “What is Poetry to you?” and I got great answers from beggars, prostitutes, and policemen alike. But the best was, “Que prosiga,” “That it may go on”—how can I translate the subjunctive, the most beautiful tiempo verbal (time inside the verb) of the Spanish language? “Subjunctive” means “next to” but under the power of the unknown. It is a future potential subjected to unforeseen conditions, and that matches exactly the quantum definition of emergent properties.

If you google the subjunctive you will find it described as a “mood,” as if a verbal tense could feel: “The subjunctive mood is the verb form used to express a wish, a suggestion, a command, or a condition that is contrary to fact.” Or “the ‘present’ subjunctive is the bare form of a verb (that is, a verb with no ending).”

I loved that! A never-ending image of a naked verb! The man who passed by as a shadow in my film saying “Que prosiga” was on camera only for a second, yet he expressed in two words the utter precision of Indigenous oral culture.

People watching the film today can’t believe it was not scripted, because in thirty-six years we seem to have forgotten the art of complex conversation. In the film people in the street improvise responses on the spot, displaying an awareness of language that seems to be missing today. I wounder, how did it change? And my heart says it must be fear, the ocean of lies we live in, under a continuous stream of doublespeak by the violent powers that rule us. Living under dictatorship, the first thing that disappears is playful speech, the fun and freedom of saying what you really think. Complex public conversation goes extinct, and along with it, the many species we are causing to disappear as we speak.

The word “species” comes from the Latin speciēs, “a seeing.” Maybe we are losing species and languages, our joy, because we don’t wish to see what we are doing.

Not seeing the seeing in words, we numb our senses.

I hear a “low continuous humming sound” of “unmanned aerial vehicles,” the drones we send out into the world carrying our killing thoughts.

Drones are the ultimate expression of our disconnect with words, our ability to speak without feeling the effect or consequences of our words.

“Words are acts,” said Paz.

Our words are becoming drones, flying robots. Are we becoming desensitized by not feeling them as acts? I am thinking not just of the victims but also of the perpetrators, the drone operators. Tonje Hessen Schei, director of the film Drone, speaks of how children are being trained to kill by video games: “War is made to look fun, killing is made to look cool. ... I think this ‘militainment’ has a huge cost,” not just for the young soldiers who operate them but for society as a whole. Her trailer opens with these words by a former aide to Colin Powell in the Bush/Cheney administration:
OUR POTENTIAL COLLECTIVE FUTURE. WATCH IT AND WEEP FOR US. OR WATCH IT AND DETERMINE TO CHANGE THAT FUTURE
—Lawrence Wilkerson, Colonel U.S. Army (retired)


In Astro Noise, the exhibition by Laura Poitras at the Whitney Museum of American Art, the language of surveillance migrates into poetry and art. We lie in a collective bed watching the night sky crisscrossed by drones. The search for matching patterns, the algorithms used to liquidate humanity with drones, is turned around to reveal the workings of the system. And, we are being surveyed as we survey the show! A new kind of visual poetry connecting our bodies to the real fight for the soul of this Earth emerges, and we come out woundering: Are we going to dehumanize ourselves to the point where Earth itself will dream our end?

The fight is on everywhere, and this may be the only beauty of our times. The Quechua speakers of Peru say, “beauty is the struggle.”

Maybe darkness will become the source of light. (Life regenerates in the dark.)

I see the poet/translator as the person who goes into the dark, seeking the “other” in him/herself, what we don’t wish to see, as if this act could reveal what the world keeps hidden.

Eduardo Kohn, in his book How Forests Think: Toward an Anthropology Beyond the Human, notes the creation of a new verb by the Quichua speakers of Ecuador: riparana means “darse cuenta,” “to realize or to be aware.” The verb is a Quichuan transfiguration of the Spanish reparar, “to observe, sense, and repair.” As if awareness itself, the simple act of observing, had the power to heal.

I see the invention of such verbs as true poetry, as a possible path or a way out of the destruction we are causing.

When I am asked about the role of the poet in our times, I only question: Are we a “listening post,” composing an impossible “survival guide,” as Paul Chan has said? Or are we going silent in the face of our own destruction?

Subcomandante Marcos, the Zapatista guerrilla, transcribes the words of El Viejo Antonio, an Indian sage: “The gods went looking for silence to reorient themselves, but found it nowhere.” That nowhere is our place now, that’s why we need to translate language into itself so that IT sees our awareness.

Language is the translator. Could it translate us to a place within where we cease to tolerate injustice and the destruction of life?

Life is language. “When we speak, life speaks,” says the Kaushitaki Upanishad.

Awareness creates itself looking at itself.

It is transient and eternal at the same time.

Todo migra. Let’s migrate to the “wounderment” of our lives, to poetry itself."
ceciliavicuña  language  languages  words  migration  immigration  life  subcomandantemarcos  elviejoantonio  lawrencewilkerson  octaviopaz  exile  rosariocastellanos  poetry  spanish  español  subjunctive  oral  orality  conversation  complexity  seeing  species  joy  tonjehessenschei  war  colinpowell  laurapoitras  art  visual  translation  eduoardokohn  quechua  quichua  healing  repair  verbs  invention  listening  kaushitakiupanishad  awareness  noticing  wondering  vicentehuidobro  wounds  woundering  migrants  unknown  future  potential  unpredictability  emergent  drones  morethanhuman  multispecies  paulchan  destruction  displacement  refugees  extinction  others  tolerance  injustice  justice  transience  ephemerality  ephemeral  canon  eternal  surveillance  patterns  algorithms  earth  sustainability  environment  indifference  complicity  dictatorship  documenta14  2017  classideas 
4 weeks ago by robertogreco
Scratching the Surface — 104. Cab Broskoski and Chris Sherron
"Cab Broskoski and Chris Sherron are two of the founders of Are.na, a knowledge sharing platform that combines the creative back-and-forth of social media with the focus of a productivity tool. Before working on Arena, Cab was a digital artist and Chris a graphic designer and in this episode, they talk about their desire for a new type of bookmarking tool and building a platform for collaborative, interdisciplinary research as well as larger questions around open source tools, research as artistic practice, and subverting the norms of social media."

[direct link to audio:
https://soundcloud.com/scratchingthesurfacefm/104-cab-broskoski-and-chris-sherron ]
jarrettfuller  are.na  cabbroskoski  chrissherron  coreyarcangel  del.icio.us  bookmarkling  pinterest  cv  tagging  flickr  michaelcina  youworkforthem  davidbohm  williamgibson  digital  damonzucconi  stanleykubrick  stephaniesnt  julianbozeman  public  performance  collections  collecting  research  2000s  interview  information  internet  web  sharing  conversation  art  design  socialmedia  socialnetworking  socialnetworks  online  onlinetoolkit  inspiration  moodboards  graphicdesign  graphics  images  web2.0  webdesign  webdev  ui  ux  scratchingthesurface  education  teaching  edtech  technology  multidisciplinary  generalists  creative  creativitysingapore  creativegeneralists  learning  howwelearn  attention  interdisciplinary  crossdisciplinary  crosspollination  algorithms  canon  knowledge  transdisciplinary  tools  archives  slow  slowweb  slowinternet  instagram  facebook 
january 2019 by robertogreco
The Stories We Were Told about Education Technology (2018)
"It’s been quite a year for education news, not that you’d know that by listening to much of the ed-tech industry (press). Subsidized by the Chan Zuckerberg Initiative, some publications have repeatedly run overtly and covertly sponsored articles that hawk the future of learning as “personalized,” as focused on “the whole child.” Some of these attempt to stretch a contemporary high-tech vision of social emotional surveillance so it can map onto a strange vision of progressive education, overlooking no doubt how the history of progressive education has so often been intertwined with race science and eugenics.

Meanwhile this year, immigrant, refugee children at the United States border were separated from their parents and kept in cages, deprived of legal counsel, deprived of access to education, deprived in some cases of water.

“Whole child” and cages – it’s hardly the only jarring juxtaposition I could point to.

2018 was another year of #MeToo, when revelations about sexual assault and sexual harassment shook almost every section of society – the media and the tech industries, unsurprisingly, but the education sector as well – higher ed, K–12, and non-profits alike, as well as school sports, all saw major and devastating reports about cultures and patterns of sexual violence. These behaviors were, once again, part of the hearings and debates about a Supreme Court Justice nominee – a sickening deja vu not only for those of us that remember Anita Hill’s testimony decades ago but for those of us who have experienced something similar at the hands of powerful people. And on and on and on.

And yet the education/technology industry (press) kept up with its rosy repetition that social equality is surely its priority, a product feature even – that VR, for example, a technology it has for so long promised is “on the horizon,” is poised to help everyone, particularly teachers and students, become more empathetic. Meanwhile, the founder of Oculus Rift is now selling surveillance technology for a virtual border wall between the US and Mexico.

2018 was a year in which public school teachers all over the US rose up in protest over pay, working conditions, and funding, striking in red states like West Virginia, Kentucky, and Oklahoma despite an anti-union ruling by the Supreme Court.

And yet the education/technology industry (press) was wowed by teacher influencers and teacher PD on Instagram, touting the promise of more income via a side-hustle like tutoring rather than by structural or institutional agitation. Don’t worry, teachers. Robots won’t replace you, the press repeatedly said. Unsaid: robots will just de-professionalize, outsource, or privatize the work. Or, as the AI makers like to say, robots will make us all work harder (and no doubt, with no unions, cheaper).

2018 was a year of ongoing and increased hate speech and bullying – racism and anti-Semitism – on campuses and online.

And yet the education/technology industry (press) still maintained that blockchain would surely revolutionize the transcript and help ensure that no one lies about who they are or what they know. Blockchain would enhance “smart spending” and teach financial literacy, the ed-tech industry (press) insisted, never once mentioning the deep entanglements between anti-Semitism and the alt-right and blockchain (specifically Bitcoin) backers.

2018 was a year in which hate and misinformation, magnified and spread by technology giants, continued to plague the world. Their algorithmic recommendation engines peddled conspiracy theories (to kids, to teens, to adults). “YouTube, the Great Radicalizer” as sociologist Zeynep Tufekci put it in a NYT op-ed.

And yet the education/technology industry (press) still talked about YouTube as the future of education, cheerfully highlighting (that is, spreading) its viral bullshit. Folks still retyped the press releases Google issued and retyped the press releases Facebook issued, lauding these companies’ (and their founders’) efforts to reshape the curriculum and reshape the classroom.

This is the ninth year that I’ve reviewed the stories we’re being told about education technology. Typically, this has been a ten (or more) part series. But I just can’t do it any more. Some people think it’s hilarious that I’m ed-tech’s Cassandra, but it’s not funny at all. It’s depressing, and it’s painful. And no one fucking listens.

If I look back at what I’ve written in previous years, I feel like I’ve already covered everything I could say about 2018. Hell, I’ve already written about the whole notion of the “zombie idea” in ed-tech – that bad ideas never seem to go away, they just get rebranded and repackaged. I’ve written about misinformation and ed-tech (and ed-tech as misinformation). I’ve written about the innovation gospel that makes people pitch dangerously bad ideas like “Uber for education” or “Alexa for babysitting.” I’ve written about the tech industry’s attempts to reshape the school system as its personal job training provider. I’ve written about the promise to “rethink the transcript” and to “revolutionize credentialing.” I’ve written about outsourcing and online education. I’ve written about coding bootcamps as the “new” for-profit higher ed, with all the exploitation that entails. I’ve written about the dangers of data collection and data analysis, about the loss of privacy and the lack of security.

And yet here we are, with Mark Zuckerberg – education philanthropist and investor – blinking before Congress, promising that AI will fix everything, while the biased algorithms keep churning out bias, while the education/technology industry (press) continues to be so blinded by “disruption” it doesn’t notice (or care) what’s happened to desegregation, and with so many data breaches and privacy gaffes that they barely make headlines anymore.

Folks. I’m done.

I’m also writing a book, and frankly that’s where my time and energy is going.

There is some delicious irony, I suppose, in the fact that there isn’t much that’s interesting or “innovative” to talk about in ed-tech, particularly since industry folks want to sell us on the story that tech is moving faster than it’s ever moved before, so fast in fact that the ol’ factory model school system simply cannot keep up.

I’ve always considered these year-in-review articles to be mini-histories of sorts – history of the very, very recent past. Now, instead, I plan to spend my time taking a longer, deeper look at the history of education technology, with particular attention for the next few months, as the title of my book suggests, to teaching machines – to the promises that machines will augment, automate, standardize, and individualize instruction. My focus is on the teaching machines of the mid-twentieth century, but clearly there are echoes – echoes of behaviorism and personalization, namely – still today.

In his 1954 book La Technique (published in English a decade later as The Technological Society), the sociologist Jacques Ellul observes how education had become oriented towards creating technicians, less interested in intellectual development than in personality development – a new “psychopedagogy” that he links to Maria Montessori. “The human brain must be made to conform to the much more advanced brain of the machine,” Ellul writes. “And education will no longer be an unpredictable and exciting adventure in human enlightenment, but an exercise in conformity and apprenticeship to whatever gadgetry is useful in a technical world.” I believe today we call this "social emotional learning" and once again (and so insistently by the ed-tech press and its billionaire backers), Montessori’s name is invoked as the key to preparing students for their place in the technological society.

Despite scant evidence in support of the psychopedagogies of mindsets, mindfulness, wellness, and grit, the ed-tech industry (press) markets these as solutions to racial and gender inequality (among other things), as the psychotechnologies of personalization are now increasingly intertwined not just with surveillance and with behavioral data analytics, but with genomics as well. “Why Progressives Should Embrace the Genetics of Education,” a NYT op-ed piece argued in July, perhaps forgetting that education’s progressives (including Montessori) have been down this path before.

This is the only good grit:

[image of Gritty]

If I were writing a lengthier series on the year in ed-tech, I’d spend much more time talking about the promises made about personalization and social emotional learning. I’ll just note here that the most important “innovator” in this area this year (other than Gritty) was surely the e-cigarette maker Juul, which offered a mindfulness curriculum to schools – offered them the curriculum and $20,000, that is – to talk about vaping. “‘The message: Our thoughts are powerful and can set action in motion,’ the lesson plan states.”

The most important event in ed-tech this year might have occurred on February 14, when a gunman opened fire on his former classmates at Marjory Stoneman Douglas High School in Parkland, Florida, killing 17 students and staff and injuring 17 others. (I chose this particular school shooting because of the student activism it unleashed.)

Oh, I know, I know – school shootings and school security aren’t ed-tech, ed-tech evangelists have long tried to insist, an argument I’ve heard far too often. But this year – the worst year on record for school shootings (according to some calculations) – I think that argument started to shift a bit. Perhaps because there’s clearly a lot of money to be made in selling schools “security” products and services: shooting simulation software, facial recognition technology, metal detectors, cameras, social media surveillance software, panic buttons, clear backpacks, bulletproof backpacks, … [more]
audreywatters  education  technology  edtech  2018  surveillance  privacy  personalization  progressive  schools  quantification  gamification  wholechild  montessori  mariamontessori  eugenics  psychology  siliconvalley  history  venturecapital  highereducation  highered  guns  gunviolence  children  youth  teens  shootings  money  influence  policy  politics  society  economics  capitalism  mindfulness  juul  marketing  gritty  innovation  genetics  psychotechnologies  gender  race  racism  sexism  research  socialemotional  psychopedagogy  pedagogy  teaching  howweteach  learning  howwelearn  teachingmachines  nonprofits  nonprofit  media  journalism  access  donaldtrump  bias  algorithms  facebook  amazon  disruption  data  bigdata  security  jacquesellul  sociology  activism  sel  socialemotionallearning 
december 2018 by robertogreco
The Library is Open: Keynote for the 2018 Pennsylvania Library Association Conference – actualham
"So I am trying to think about ways in. Ways in to places. Ways in to places that don’t eschew the complexity of their histories and how those histories inflect the different ways the places are experienced. I am thinking that helping learners see how places are made and remade, and helping them see that every interpretation they draw up–of their places and the places that refuse to be theirs– remake those places every hour.

This for me, is at the heart of open education.

Open to the past.

Open to the place.

Open at the seams.

Open to the public.

PUBLIC

So there is our final word, “PUBLIC.” You know, it’s not that easy to find out what a public library is. I googled it in preparation for this talk. It’s like a public museum. It might be open to the public, but does that make it public? But you know, it’s not that easy to find out what a public university is. For example, mine. Which is in New Hampshire, the state which is proudly 50th in the nation for public funding of higher education. My college is about 9% state funded. Is that a public institution?

I think we may be starting backwards if we try to think of “public” in terms of funding. We need to think of public in terms of a relationship between the institution and the public (and the public good) and the economics of these relationships can be (will be! should be!) reflective of those relationships, rather than generative of them. What is the relationship of a public library or university– or a public university library– to the public? And could that relationship be the same for any college library regardless of whether the college is public or private?

Publics are places, situated in space and time but never pinned or frozen to either. Publics are the connective tissue between people, and as Noble points out, corporate interest in the web has attempted to co-opt that tissue and privatize our publics. A similar interest in education has attempted to do the same with our learning channels. Libraries exist in a critical proximity to the internet and to learning. But because they are places, that proximity flows through the people who make and remake the library by using (or not using) it. This is not a transcendent or romantic view of libraries. Recent work by folks like Sam Popowich and Fobazi Ettarh reminds us that vocational awe is misguided, because libraries, like humans and the communities they bounce around in, are not inherently good or sacred. But this is not a critique of libraries. Or in other words, these messy seams where things fall apart, this is the strength of libraries because libraries are not everywhere; they are here.

I know this is an awful lot of abstraction wrapped up in some poetry and some deflection. So let me try to find some concrete practice-oriented ideas to leave you with.

You know textbooks cost way, way too much, and lots of that money goes to commercial publishers.

Textbook costs are not incidental to the real cost of college. We can fix this problem by weaning off commercial textbooks and adopting Open Educational Resources. OER also lets us rethink the relationship between learners and learning materials; the open license lets us understand knowledge as something that is continually reshaped as new perspectives are introduced into the field.

We can engage in open pedagogical practices to highlight students as contributors to the world of knowledge, and to shape a knowledge commons that is a healthier ecosystem for learning than a system that commercializes, paywalls, or gates knowledge. And all of this is related to other wrap-around services that students need in order to be successful (childcare, transportation, food, etc), and all of that is related to labor markets, and all of that is related to whether students should be training for or transforming those markets.

As we focus on broadening access to knowledge and access to knowledge creation, we can think about the broader implications for open learning ecosystems.

What kind of academic publishing channels do we need to assure quality and transparent peer review and open access to research by other researchers and by the public at large? What kinds of tools and platforms and expertise do we need to share course materials and research, and who should pay for them and host them and make them available? What kind of centralized standards do we need for interoperability and search and retrieval, and what kind of decentralization must remain in order to allow communities to expand in organic ways?

I’d like to see academic libraries stand up and be proud to be tied to contexts and particulars. I’d like to see them care about the material conditions that shape the communities that surround and infuse them. I’d like them to own the racism and other oppressive systems and structures that infuse their own histories and practices, and model inclusive priorities that center marginalized voices. I’d like them to insist that human need is paramount. Humans need to know, learn, share, revise. I’d like them to focus on sustainability rather than growth; the first is a community-based term, the second is a market-based term. Libraries work for people, and that should make them a public good. A public resource. This is not about how we are funded; it is about how we are founded and refounded.

Helping your faculty move to OER is not about cost-savings. You all know there are much easier ways to save money. They are just really crappy for learning. Moving to OER is about committing to learning environments that respect the realities of place, that engage with the contexts for learning, that challenge barriers that try to co-opt public channels for private gain, and that see learning as a fundamentally infinite process that benefits from human interaction. Sure, technology helps us do some of that better, and technology is central to OER. But technology also sabotages a lot of our human connections: infiltrates them with impersonating bots; manipulates and monetizes them for corporate gain; subverts them for agendas that undercut the network’s transparency; skews the flow toward the privileged and cuts away the margins inhabited by the nondominant voices– the perspectives that urge change, improvement, growth, paradigm shift. So it’s not the technology, just like it’s not the cost-savings, that matters. It’s not the new furniture or the Starbucks that makes your library the place to be. It’s the public that matters. It is a place for that public to be.

Libraries are places. Libraries, especially academic libraries, are public places. They should be open for the public. Help your faculty understand open in all its complexity. Help them understand the people that make your place. Help your place shape itself around the humans who need it."
open  libraries  access  openaccess  2018  oer  publishing  knowledge  textbooks  college  universities  robinderosa  place  past  present  future  web  internet  online  learning  howwelearn  education  highered  highereducation  joemurphy  nextgen  safiyaumojanoble  deomcracyb  inequality  donnalanclos  davidlewis  racism  algorithms  ralphwaldoemerson  thoreau  control  power  equality  accessibility 
october 2018 by robertogreco
5 Star Service: A curated reading list – Data & Society: Points
"This reading list by Data & Society Postdoctoral Scholar Julia Ticona, Researcher Alexandra Mateescu, and Researcher Alex Rosenblat accompanies the new Data & Society report Beyond Disruption: How Tech Shapes Labor Across Domestic Work & Ridehailing.

As labor platforms begin to mediate work in industries with workforces marked by centuries of economic exclusion based in gender, race, and ethnicity, this report examines the ways labor platforms are shifting the rules of the game for different populations of workers.

While ridehail driving and other male-dominated sectors have been at the forefront in conversations about the future of work, the working lives of domestic workers like housecleaners and nannies usually aren’t included. By bringing these three types of platforms and workers together, this report complicates simple narratives about technology’s impact on labor markets and highlights the convergent and divergent challenges workers face when using labor platforms to find and carry out their work.

The report weaves together often disparate communities and kinds of knowledge, and this reading list reflects this eclectic approach. Below you’ll find opinion, research, reports, and critique about gendered service work and inequality; labor platforms and contingent work; algorithmic visibility and vulnerability; and risk and safety in the gig economy.

This list is meant for readers of Beyond Disruption who want to dig more deeply into some of the key areas explored in its pages. It isn’t meant to be exhaustive, but rather give readers a jumping off point for their own investigations. Suggestions or comments? E-mail julia at datasociety dot net."
labor  automation  economics  inequality  gender  work  contingentwork  algorithms  vulnerability  visibility  juliaticona  2018  race  ethnicity  technology  policy 
august 2018 by robertogreco
Missing the feed. — Scatological Sensibilities
[Also here: https://are.na/block/2485326 ]

[Related (via Allen): http://reallifemag.com/just-randomness/ ]

"So why the focus on the feed?

What does the want of an unfiltered linear feed mean? What are people really asking for when they ask for that? What pain are they solving for when they make this request?

A linear chronologically ordered feed is predictable. It's not hiding anything, it's not wrestling control away from you. It isn't manipulating you in a way you can't ascertain. That should be the baseline.

Arguing for algorithmic feeds is fine, but it should never take away a user's sense of control. If something is hidden, it better damn well be because I asked the system to explicitly hide that kind of thing from me. I don't want some hidden algorithm tuned to manipulate me, and I especially don't want it presented to me under a guise of paternalism. That smells like bullshit.

But of course Facebook hasn't done that. They started turning all the 'likes' you had into liked pages, and handed control of those pages over to random people. Suddenly your feed was full of content from brands who had snuck in thru Girardian-style mimetic signaling. And they're using it to manipulate us, as far as we can tell...

And it's creepy.
“So half the Earth's Internet population is using Facebook. They are a site, along with others, that has allowed people to create an online persona with very little technical skill, and people responded by putting huge amounts of personal data online. So the result is that we have behavioral, preference, demographic data for hundreds of millions of people, which is unprecedented in history. And as a computer scientist, what this means is that I've been able to build models that can predict all sorts of hidden attributes for all of you that you don't even know you're sharing information about.”
Your social media “likes” expose more than you think [https://www.ted.com/talks/jennifer_golbeck_the_curly_fry_conundrum_why_social_media_likes_say_more_than_you_might_think/transcript ]

People started complaining they couldn't see all their friends. But the options at the time ran counter to Facebook's intention of being a platform for celebrities, brands and community building thru pages. They are only just now undoing this crappy mechanism design mistake.

Even their ads don't admit the mistake. They talk about friends and friends of friends, and all the crap that started polluting the feed. But the cat's out of the bag and the ecosystem is polluted with people who have built up lives around those pages.

Facebook Here Together (UK) [https://www.youtube.com/watch?v=Q4zd7X98eOs ]

And when we step back and wonder what is going on?
We see something fishy and it smells rotten.
Facebook moves 1.5bn users out of reach of new European privacy law [https://amp.theguardian.com/technology/2018/apr/19/facebook-moves-15bn-users-out-of-reach-of-new-european-privacy-law ]

TL;DR: Is this Loss?

The incentives aren't there, and the arguments for changing this are misunderstood. Which is why I deleted my Facebook even though its the only way I can contact my dad.

I miss my dad.

And now you know my perspective."
feeds  algorithms  twitter  facebook  paternalism  socialmedia  control  trust  2018  nicholasperry 
july 2018 by robertogreco
Poética del lápiz, del papel y de las contradicciones | CCCB LAB
"Reflexiones de un escritor que transita entre el medio analógico y el digital, entre lo material y lo virtual."



"Aprendimos a leer en libros de papel y nuestros recuerdos yacen en fotos ampliadas a partir de un negativo. Actualmente vivimos en un entorno digital repleto de promesas y ventajas, y aun así parece que nuestro cerebro reclama dosis periódicas de tacto, artesanía y materia. El escritor Jorge Carrión reflexiona sobre este tránsito contradictorio entre un medio y otro: desde la firma de un libro garabateado o las lecturas repletas de anotaciones, hasta la necesidad de esbozar ideas con un bolígrafo o dibujar para observar y comprender, pasando por el móvil usado para tomar notas o fotografiar citas.

Hoy, en un avión que, a pesar de ser low cost, atraviesa el océano, leo estos versos en un librito extraordinario: «Escribo a mano con un lápiz Mongol Nº 2 mal afilado, / apoyando hojas de papel sobre mis rodillas. / Ésa es mi poética: escribir con lápiz es mi poética. / […] Lo del lápiz mal afilado es indispensable para mi poética. / Sólo así quedan marcas en las hojas de papel / una vez que las letras se borran y las palabras ya no / se entienden o han pasado de moda o cualquier otra cosa.»

Ayer, minutos antes de que empezara la conferencia que tenía que dar en Buenos Aires, una anciana se me acercó para que le dedicara su ejemplar de Librerías. Lo tenía lleno de párrafos subrayados y de esquinas de página dobladas («cada librería condensa el mundo», yo siempre pensé lo mismo, sí, señor), de tarjetas de visita y de fotografías de librerías («este folleto de Acqua Alta es de cuando estuve en Venecia, un viaje muy lindo»), de recortes de diario («mire, la nota de Clarín que habla del fallecimiento de Natu Poblet, qué tristeza») y hasta de cartas («ésta se la escribí a usted cuando terminé su libro y de pronto me quedé otra vez sola»). No es mi libro, le respondí, usted se lo ha apropiado: es totalmente suyo, le pertenece. De perfil el volumen parecía la maleta de cartón de un emigrante o los estratos geológicos de un acantilado. O un mapa impreso en 3D del rostro de la anciana.

La semana pasada, en mi casa, leí este pasaje luminoso de Una historia de las imágenes, un librazo extraordinario de David Hockney y Martin Gayford publicado por Siruela:

En una fotografía el tiempo es el mismo en cada porción de su superficie. No así en la pintura: ni siquiera es así en una pintura hecha a partir de una foto. Es una diferencia considerable. Por eso no podemos mirar una foto mucho tiempo. Al final no es más que una fracción de segundo, no vemos al sujeto en capas. El retrato que me hizo Lucian Freud requirió ciento veinte horas de posado, y todo ese tiempo lo veo en capas en el cuadro. Por eso tiene un interés infinitamente superior al de una foto.

Hace unos meses, en el AVE que une Barcelona con Madrid, leí un artículo sobre una tendencia incipiente: ya son varios los museos del mundo que prohíben hacer fotografías durante la visita; a cambio te regalan un lápiz y papel, para que dibujes las obras que más te interesen, para que en el proceso de la observación y de la reproducción, necesariamente lento, mires y pienses y digieras tanto con los ojos como con las manos.

Vivimos en entornos absolutamente digitales. Producimos, escribimos, creamos en teclados y pantallas. Pero al principio y al final del proceso creativo casi siempre hay un esquema, unas notas, un dibujo: un lápiz o un bolígrafo o un rotulador que se desliza sobre pósits o sobre hojas de papel. Como si en un extremo y en otro de lo digital siempre hubiera una fase predigital. Y como si nuestro cerebro, en un nuevo mundo que –como explica afiladamente Éric Sadin en La humanidad aumentada– ya se ha duplicado algorítmicamente, nos reclamara dosis periódicas de tacto y artesanía y materia (infusiones de coca para combatir el mal de altura).

Hace dos años y medio, tras mi última mudanza, pasé un rato hojeando el álbum de fotos de mi infancia. Aquellas imágenes envejecidas y palpables no sólo documentan mi vida o la moda o las costumbres de los años setenta y ochenta en España, también hablan de la evolución de la fotografía doméstica y de los procesos de revelado. Tal vez cada foto sea solamente un instante (un instante sin una segunda oportunidad, sin edición, sin filtros, sin anestesia), pero las páginas de cartulina, las anotaciones manuscritas en rotulador negro o en boli Bic azul, los cambios de cámara o las impresiones en brillo o en mate crean un conjunto (un libro) en el que la dimensión material del tiempo se puede reconstruir y tocar, elocuente o balbuciente, nítida o desdibujada, como en un yacimiento arqueológico. O como en un mapa impreso en 3D de mi futuro envejecimiento.

Hoy, ahora, acabo de leer este librito extraordinario, el poemario Apolo Cupisnique, de Mario Montalbetti, que han coeditado en Argentina Añosluz y Paracaídas. Y lo cierro, con versos subrayados, páginas con la esquina doblada, la entrada de un par de museos porteños y un lápiz de Ikea que probablemente también se quede ahí, para siempre secuestrado. Y en el avión low cost empiezo a escribir este texto gracias a mi teléfono móvil, porque no soy (no somos) más que un sinfín de contradicciones. La cita de Montalbetti la copio directamente del libro, pero para la de Hockney tengo que recurrir a la foto que hice de esa doble página la semana pasada. A la izquierda el texto, a la derecha el retrato que le hizo Freud. La foto del retrato. Se pueden ver, en efecto, las capas dinámicas que dejaron en la pintura las ciento veinte horas inmóviles. Con el dedo índice y el pulgar amplío sus ojos y durante un rato –en la noche que se disuelve en jet lag– nuestras miradas se encuentran en la pantalla sin estratos."
jorgecarrión  digital  writing  print  virtual  material  2018  art  poetry  apolocupisnique  mariomontalbetti  añosluz  paracaídas  paper  books  ebooks  éricsadin  algorithms  davidhockney  martingayford  natupoblet 
may 2018 by robertogreco
Novels Are Made of Words: Moby-Dick, Emotion, and Abridgment
"Paul Valéry tells the story: The painter Edgar Degas was backhanded-bragging to his friend Stéphane Mallarmé about the poems that he, Degas, had been trying to write. He knew they weren’t great, he said, “But I’ve got lots of ideas—too many ideas.” “But my dear Degas,” the poet replied, “poems are not made out of ideas. They’re made of words.”

Paintings, for that matter, are not made of pretty ballerinas or landscapes: they’re made of paint.

Which brings us to Syuzhet, Matthew Jockers’s new program that analyzes the words of a novel for their emotional value and graphs the sentimental shape of the book. Dan Piepenbring has explained it all here and here on the Daily, with links to the original postings and the various outcries, some of them in the comments, that have blown up around Jockers.

Many people apparently find Jockers’s research the latest assault of technocratic digitocracy on the citadel of deep humanistic feelings, but that’s not how I see it. What the graphs reveal about potboiler narrative structure versus high-literary arcs, for instance—Dan Brown’s higher average positivity than James Joyce’s, and his more regular cycle of highs and lows to force the reader through the book—is insightful, useful, and great.

In some ways, it’s hard for me to even see what the fuss is about. “It’s not that it’s wrong,” one commenter writes. “It’s just that it’s an extremely poor substitute for reading, enjoying, and discussing literature.” But who said anything about a substitute? Does this commenter not notice that the discussions of the graphs rest on having read the books and seeing how the graphs shed light on them? Another: “Okay, fuck this guy for comparing Dan Brown to James Joyce.” Well, how else can you say Joyce is better and Brown is worse? That’s what’s known as a comparison. Or do you think Joyce can’t take it?

Freak-outs aside, there are substantive rebuttals, too. What seems to be the most rigorous objection is from SUNY professor and fellow digital-humanities scholar Annie Swafford, who points out some failures in the algorithm. “I am extremely happy today” and “There is no happiness left in me,” for example, read as equally positive. And:

Longer sentences may be given greater positivity or negativity than their contents warrant, merely because they have a greater number of positive or negative words. For instance, “I am extremely happy!” would have a lower positivity ranking than “Well, I’m not really happy; today, I spilled my delicious, glorious coffee on my favorite shirt and it will never be clean again.”

But let’s actually compare “Well, I’m not really happy; today, I spilled my delicious, glorious coffee on my favorite shirt and it will never be clean again” to “I’m sad.” The positivity or negativity might be the same, assuming there could be some kind of galvanometer or something attached to the emotional nodes of our brain to measure the “pure” “objective” “quantity” of positivity. But the first of those sentences is more emotional—maybe not more positive, but more expressive, more histrionic. Ranking it higher than “I’m sad” or even “I am very happy” makes a certain kind of sense.

“There is no happiness left in me” and “I am all sadness from now on” are the same seven words to a logician or a hypothetical emotiomometer, but not to a novelist or a reader. Everyone in advertising and political wordsmithing knows that people absorb the content of a statement much more than the valence: to say that something “is not horrific and apocalyptic” is a downer, despite the “not.” Or consider: “Gone for eternity is the delight that once filled my heart to overflowing—the sparkle of sun on the fresh morning dew of new experience, soft envelopments of a lover’s thighs, empyrean intellectual bliss, everything that used to give my life its alpenglow of hope and wonder—never again!” and “I’m depressed.” An algorithm that rates the first piece of writing off-the-charts positive is a more useful quantification of the words than one that would rate the emotional value of the two as the same.

Some years back, Orion Books produced a book called Moby-Dick in Half the Time, in a line of Compact Editions “sympathetically edited” to “retain all the elements of the originals: the plot, the characters, the social, historical and local backgrounds and the author’s language and style.” I have nothing against abridgments—I’ve abridged books myself—but I felt that what makes Melville Melville, in particular, is digression, texture, and weirdness. If you only have time to read half the book, which half the time is more worth spending? What elements of the original do we want to abridge for?

Moby-Dick in Half the Time seemed like it would lose something more essential than would Anna Karenina in Half the Time or Vanity Fair in Half the Time or Orion’s other offerings. I decided to find out. So I compiled every chapter, word, and punctuation mark that Orion’s abridger cut from Melville’s original Moby-Dick; or The Whale, and published the result, with its inevitable title, as a book of its own: a lost work by Herman Melville called ; or The Whale.

Half the Time keeps the plot arc of Ahab’s quest, of course, but ; or The Whale arguably turns out closer to the emotional ups and downs of Melville’s novel—and that tells us something about how Melville writes. His linguistic excess erupts at moments of emotional intensity; those moments of intensity, trimmed as excess from Half the Time, are what make up the other semibook. Chapter sixty-two, for example, consists of a single word, “hapless”—the only word Orion’s abridger cut from the chapter, trimming a 105-word sentence to 104, for some reason. That’s a pretty good sentiment analysis of Melville’s chapter as a whole. Reading ; or The Whale is a bit like watching a DVD skip ahead on fast forward, and it gets at something real about Melville’s masterpiece. About the emotion in the words.

So I would defend the automated approach to novelistic sentiment on different grounds than Piepenbring’s. I take plot as seriously as he does, as opposed to valorizing only the style or ineffable poetry of a novel; I also see Béla Tarr movies or early Nicholson Baker novels as having plots, too, just not eventful ones. Jockers’s program is called Syuzhet because of the Russian Formalist distinction between fabula, what happens in chronological order in a story, and syuzhet, the order of things in the telling (diverging from the fabula in flashbacks, for instance, or when information is withheld from the reader). It’s not easy to say how “plot” arises out of the interplay between the two. But having minimal fabula is not the same as having little or no plot.

In any case, fabula is not what Syuzhet is about. Piepenbring summarizes: “algorithms assign every word in a novel a positive or negative emotional value, and in compiling these values [Jockers is] able to graph the shifts in a story’s narrative. A lot of negative words mean something bad is happening, a lot of positive words mean something good is happening.” This may or may not be true, but novels are not made of things that happen, they are made of words. Again: “When we track ‘positive sentiment,’ we do mean, I think, that things are good for the protagonist or the narrator.” Not necessarily, but we do mean—tautologically—that things are good for the reader in the warm afternoon sunshine of the book’s positive language.

Great writers, along with everything else they are doing, stage a readerly experience and lead their readers through it from first word on first page to last. Mapping out what those paths might look like is as worthy a critical approach as any."
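[An aside on the mechanics discussed above: a minimal sketch, in TypeScript, of the lexicon-based scoring Swafford critiques. The word list and weights here are hypothetical stand-ins, not Syuzhet's actual lexicons, but the scheme (assign each word a fixed value, sum per sentence) reproduces both failure modes she names: negation is invisible, and longer sentences accumulate larger scores.]

```typescript
// Toy lexicon-based sentiment scorer of the kind Swafford critiques.
// Each word carries a fixed weight; a sentence's score is the plain sum.
const lexicon: Record<string, number> = {
  happy: 1, happiness: 1, delicious: 1, glorious: 1, favorite: 1, clean: 1,
  sad: -1, spilled: -1,
  // Negators ("no", "not", "never") have no entry, so they never flip a score.
};

function score(sentence: string): number {
  return sentence
    .toLowerCase()
    .split(/[^a-z']+/) // crude tokenization on non-letter characters
    .reduce((sum, word) => sum + (lexicon[word] ?? 0), 0);
}

console.log(score("I am extremely happy today"));       // 1
console.log(score("There is no happiness left in me")); // 1: negation is invisible
console.log(score(
  "Well, I'm not really happy; today, I spilled my delicious, glorious " +
  "coffee on my favorite shirt and it will never be clean again",
)); // 4: the long unhappy sentence scores as the most positive of the three
```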
paulvaléry  edgardegas  writing  novels  mobydick  mattherjocker  2015  digital  words  language  hermanmelville  reading  howwewrite  automation  emotions  algorithms  narrative  nicholsonbaker  bélatarr  moby-dick 
april 2018 by robertogreco
The Ad-Free, User-Owned Future Of Social Media
"The recent revelation that Facebook allowed British firm Cambridge Analytica to harvest the data of 50 million users has led to a cultural reckoning and spelled serious trouble for the social media giant–and many of its peers. As the dust settles, the question remains: If you’re done with Facebook, what other options are there?

One alternative is Are.na. Designed by creatives for creatives, Are.na is a research platform that happens to have a social element; you can organize all kinds of “blocks” of content into themed channels, gathering ideas and inspiration slowly over time. Other users can connect your “blocks” to their ideas, creating a network of thematic links designed for collaboration and sharing.

But here’s the thing about Are.na: It has no ads, no likes, and no tracking algorithms, making it something of an anti-Facebook. And crucially, its business model is entirely different. Rather than relying on gathering user data and selling engagement to advertisers, Are.na is funded entirely by premium users who pay a monthly fee to use the platform. According to cofounder and CEO Charles Broskoski, that means that the Are.na team is focused on making a product truly designed for its 42,000 users instead of trying to serve both users and advertisers at the same time.

Of course, there’s a reason many internet giants, including Facebook and Google, rely on advertising and user data to generate revenue. Are.na’s alternative is a hard business model to make work. That’s why the platform launched a crowdfunding campaign that allows anyone to invest in Are.na on March 14. In the two weeks since, it has raised more than $100,000–double the team’s initial goal–from 326 individual investors who pitched in amounts ranging from $100 to $5,000.

The campaign kicked off just days before the Cambridge Analytica news broke, and Broskoski attributes at least some of its success to people looking for new models to support online. “It feels like a very opportune moment for alternative approaches to social media,” he says.

While the Are.na team has been overwhelmed by the response, they also say they aren’t terribly surprised by it. Some of Are.na’s most ardent users had already reached out about wanting to invest, so raising equity through the startup’s community felt like the right way to build a sustainable business model. So far, about 70% of the investments have come from Are.na members. Of the platform’s paying members, about 10% are investors.

Part of the reasoning behind opening up Are.na to individual investors is that it shows current users–and any potential new users–exactly what the company’s values are. “We’re trying to be transparent about how our business functions and how that’s good for a person,” Broskoski says. “It shows how we’re motivated. We’re trying to make a product that’s good enough for people who can afford it to pay for it.”

The money will help cover operating expenses, and Broskoski says the startup is on track to entirely cover these costs using the crowdfunded money and revenue from premium users by the end of the year. But the campaign is still going, with more than two months left. If they manage to raise $150,000, Are.na will be able to bring on another developer who can help it continue building out features for users. Right now, the team is focused on designing a version of Are.na for small teams to work together, which they hope to launch in the fall.

Anyone who buys an equity investment in Are.na receives convertible notes–an agreement that you’ve bought debt that will transform into equity when a qualifying financing round happens. In a more traditional startup, that might be through an acquisition, an IPO, or a share buyback. But Broskoski instead wants to issue dividends to the company’s investors as soon as Are.na becomes profitable. It’s not unheard of: Kickstarter pursued a similar model with its early investors.

“We love the idea of our community owning part of Are.na,” Broskoski says. “It matches up perfectly with our values and where we want to be in the future.”

Even in the last few months, the company has grown exponentially. When I last spoke to the team in January, they had 21,000 users. Just three months later, they have 42,000. The initial success of the company’s equity crowdfunding is a clear indicator: They’re onto something.

The timing could not have been better. Even before Cambridge Analytica, people were opting out of social media and looking for ways to digitally detox–citing the negative impact of Facebook and Twitter on users’ emotional lives and productivity. Even if you love using them, it can be difficult to swallow just how heavily these companies’ business models depend on mining your personal data. Though there are other alternative platforms, Are.na is one of the few making headway on a sustainable business model that puts users first.

“It’s more evidence to us that we’re doing something right and we’re reaching a type of person who wants something different on the internet,” Broskoski says. “I don’t necessarily think that Are.na is going to supplant Facebook, but this particular time is a good moment for people to think about what they want their online life to look like.”"
are.na  2018  charlesbroskoski  values  advertising  tracking  algorithms  facebook  cambridgeanalytica 
april 2018 by robertogreco
Are.na / Blog – Alternate Digital Realities
"Writer David Zweig, who interviewed Grosser about the Demetricator for The New Yorker, describes a familiar sentiment when he writes, “I’ve evaluated people I don’t know on the basis of their follower counts, judged the merit of tweets according to how many likes and retweets they garnered, and felt the rush of being liked or retweeted by someone with a large following. These metrics, I know, are largely irrelevant; since when does popularity predict quality? Yet, almost against my will, they exert a pull on me.” Metrics can be a drug. They can also influence who we think deserves to be heard. By removing metrics entirely, Grosser’s extension allows us to focus on the content—to be free to write and post without worrying about what will get likes, and to decide for ourselves if someone is worth listening to. Additionally, it allows us to push back against a system designed not to cultivate a healthy relationship with social media but to prioritize user-engagement in order to sell ads."
digital  online  extensions  metrics  web  socialmedia  internet  omayeliarenyeka  2018  race  racism  activism  davidzeig  bejamingrosser  twitter  google  search  hangdothiduc  reginafloresmir  dexterthomas  whitesupremacy  tolulopeedionwe  patriarchy  daniellesucher  jennyldavis  mosaid  shannoncoulter  taeyoonchoi  rodrigotello  elishacohen  maxfowler  jamesbaldwin  algorithms  danielhowe  helennissenbaum  mushonzer-aviv  browsers  data  tracking  surveillance  ads  facebook  privacy  are.na 
april 2018 by robertogreco
Social Inequality Will Not Be Solved By an App | WIRED
"Central to these “colorblind” ideologies is a focus on the inappropriateness of “seeing race.” In sociological terms, colorblindness precludes the use of racial information and does not allow any classifications or distinctions. Yet, despite the claims of colorblindness, research shows that those who report higher racial colorblind attitudes are more likely to be White and more likely to condone or not be bothered by derogatory racial images viewed in online social networking sites. Silicon Valley executives, as previously noted, revel in their embrace of colorblindness as if it is an asset and not a proven liability. In the midst of reenergizing the effort to connect every American and to stimulate new economic markets and innovations that the internet and global communications infrastructures will afford, the real lives of those who are on the margin are being reengineered with new terms and ideologies that make a discussion about such conditions problematic, if not impossible, and that place the onus of discriminatory actions on the individual rather than situating problems affecting racialized groups in social structures.

Formulations of postracialism presume that racial disparities no longer exist, a context within which the colorblind ideology finds momentum. George Lipsitz, a critical Whiteness scholar and professor at the University of California, Santa Barbara, suggests that the challenge to recognizing racial disparities and the social (and technical) structures that instantiate them is a reflection of the possessive investment in Whiteness—which is the inability to recognize how White hegemonic ideas about race and privilege mask the ability to see real social problems. I often challenge audiences who come to my talks to consider that at the very historical moment when structural barriers to employment were being addressed legislatively in the 1960s, the rise of our reliance on modern technologies emerged, positing that computers could make better decisions than humans. I do not think it a coincidence that when women and people of color are finally given opportunity to participate in limited spheres of decision making in society, computers are simultaneously celebrated as a more optimal choice for making social decisions. The rise of big-data optimism is here, and if ever there were a time when politicians, industry leaders, and academics were enamored with artificial intelligence as a superior approach to sense-making, it is now. This should be a wake-up call for people living in the margins, and people aligned with them, to engage in thinking through the interventions we need."
safiyaumojanoble  technosolutionism  technology  2018  inequality  society  socialinequality  siliconvalley  neoliberalism  capitalism  blacklivesmatter  organizing  politics  policy  oppression  algorithms  race  racism  us  postracialism  colorblindness  discrimination  georgelipsitz 
march 2018 by robertogreco
Podcast, Nick Seaver: “What Do People Do All Day?” - MIT Comparative Media Studies/Writing
"The algorithmic infrastructures of the internet are made by a weird cast of characters: rock stars, gurus, ninjas, wizards, alchemists, park rangers, gardeners, plumbers, and janitors can all be found sitting at computers in otherwise unremarkable offices, typing. These job titles, sometimes official, sometimes informal, are a striking feature of internet industries. They mark jobs as novel or hip, contrasting starkly with the sedentary screenwork of programming. But is that all they do? In this talk, drawing on several years of fieldwork with the developers of algorithmic music recommenders, Seaver describes how these terms help people make sense of new kinds of jobs and their positions within new infrastructures. They draw analogies that fit into existing prestige hierarchies (rockstars and janitors) or relationships to craft and technique (gardeners and alchemists). They aspire to particular imaginations of mastery (gurus and ninjas). Critics of big data have drawn attention to the importance of metaphors in framing public and commercial understandings of data, its biases and origins. The metaphorical borrowings of role terms serve a similar function, highlighting some features at the expense of others and shaping emerging professions in their image. If we want to make sense of new algorithmic industries, we’ll need to understand how they make sense of themselves.

Nick Seaver is assistant professor of anthropology at Tufts University. His current research examines the cultural life of algorithms for understanding and recommending music. He received a masters from CMS in 2010 for research on the history of the player piano."

[direct link to audio: https://soundcloud.com/mit-cmsw/nick-seaver-what-do-people-do-all-day ]

[via: https://twitter.com/allank_o/status/961382666573561856 ]
nickseaver  2016  work  labor  algorithms  bigdata  music  productivity  automation  care  maintenance  programming  computing  hierarchy  economics  data  datascience 
february 2018 by robertogreco
Impakt Festival 2017 - Performance: ANAB JAIN. HQ - YouTube
[Embedded here: http://impakt.nl/festival/reports/impakt-festival-2017/impakt-festival-2017-anab-jain/ ]

"'Everything is Beautiful and Nothing Hurts': @anab_jain's expansive keynote @impaktfestival weaves threads through death, transcience, uncertainty, growthism, technological determinism, precarity, imagination and truths. Thanks to @jonardern for masterful advise on 'modelling reality', and @tobias_revell and @ndkane for the invitation."
[https://www.instagram.com/p/BbctTcRFlFI/ ]
anabjain  2017  superflux  death  aging  transience  time  temporary  abundance  scarcity  future  futurism  prototyping  speculativedesign  predictions  life  living  uncertainty  film  filmmaking  design  speculativefiction  experimentation  counternarratives  designfiction  futuremaking  climatechange  food  homegrowing  smarthomes  iot  internetofthings  capitalism  hope  futures  hopefulness  data  dataviz  datavisualization  visualization  williamplayfair  society  economics  wonder  williamstanleyjevons  explanation  statistics  williambernstein  prosperity  growth  latecapitalism  propertyrights  jamescscott  objectivity  technocrats  democracy  probability  scale  measurement  observation  policy  ai  artificialintelligence  deeplearning  algorithms  technology  control  agency  bias  biases  neoliberalism  communism  present  past  worldview  change  ideas  reality  lucagatti  alextaylor  unknown  possibility  stability  annalowenhaupttsing  imagination  ursulaleguin  truth  storytelling  paradigmshifts  optimism  annegalloway  miyamotomusashi  annatsing 
november 2017 by robertogreco
How online citizenship is unsettling rights and identities | openDemocracy
"Citizenship law and how it is applied are worth watching, as litmus tests for wider democratic freedoms."



"Jus algoritmi is a term coined by John Cheney-Lippold to describe a new form of citizenship which is produced by the surveillance state, whose primary mode of operation, like other state forms before it, is control through identification and categorisation. Jus algoritmi – the right of the algorithm – refers to the increasing use of software to make judgements about an individual’s citizenship status, and thus to decide what rights they have, and what operations upon their person are permitted."



"Moment by moment, the citizenship assigned to us, and thus the rights we may claim and the laws we are subject to, are changing, subject to interrogation and processing. We have become effectively stateless, as the concrete rights we have been accustomed to flicker and shift with a moment’s (in)attention.

But in addition to showing us a new potential vector of oppression, Citizen Ex illustrates, in the same way that the internet itself illustrates political and social relationships, the distribution of identity and culture in our everyday online behaviour. The nation state has never been a sufficient container for identity, but our technology has caught up with our situation, illuminating the many and varied failures of historical models of citizenship to account for the myriad of ways in which people live, behave, and travel over the surface of the planet. This realisation and its representation are both important and potentially emancipatory, if we choose to follow its implications.

We live in a time of both mass migrations, caused by war, climate change, economic need and demographic shift, and of a shift in mass identification, as ever greater numbers of us form social bonds with other individuals and groups outside our physical locations and historical cultures. If we accept that both of these kinds of change are, if not caused by, at least widely facilitated by modern communication technologies – from social media to banking networks and military automation – then it follows that these technologies may also be deployed to produce new forms of interaction and subjectivity which better model the actual state of the world – and one which is more desirable to inhabit."
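
[Bridle doesn’t spell out how Citizen Ex computes this “distribution of identity,” so what follows is only a guess at the kind of computation involved — a minimal Python sketch that turns a browsing log into a fractional “algorithmic citizenship.” The sites, countries, and weighting are all invented for illustration:

from collections import Counter

def algorithmic_citizenship(visits):
    """Turn a log of (host, country) pairs into a fractional 'citizenship'."""
    counts = Counter(country for _, country in visits)
    total = sum(counts.values())
    return {c: round(n / total, 2) for c, n in counts.items()}

# Invented browsing log; a real extension would geolocate each server's IP.
log = [("news.example", "US"), ("mail.example", "US"), ("wiki.example", "NL"),
       ("video.example", "US"), ("shop.example", "DE")]
print(algorithmic_citizenship(log))  # {'US': 0.6, 'NL': 0.2, 'DE': 0.2}

A “citizenship” computed this way changes with every page view, which is exactly the flickering instability the essay describes. ]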



"It remains to be seen whether e-residency will benefit those with most to gain from reengineered citizenship, or, like so many other digital products, merely augment the agency of those who already have first-class rights.

As the example of NSA’s procedures for determining citizenship illustrates, contemporary networked interventions in the sphere of identity are typically top-down, state-led, authoritarian moves to control and discipline individual subjects. Their operational processes are opaque, and they are used against their subjects, reducing their agency. The same is true for most corporate systems, from Facebook to Google to smart gas and water meters and vehicle trackers, which abstract data from the subject for financial gain. The Estonian example shows that digital citizenship regimes can point towards post-national, post-geographic territories, while continuing to reproduce the forms of identity most conducive to contemporary capitalism and nationhood. The challenge is to transform the internet, and thus the world, from a place where identity is constantly surveilled, judged, and operationalised, to a place where we can act freely as citizens of a greater sphere of social relationships: from a space which is entirely a border zone to one which is truly borderless."
jamesbridle  2017  nationalism  politics  citizenship  estonia  digital  physical  democracy  rights  jusalgoritmi  algorithms  nsa  migration  refugees  identity  borders  borderlessness  society  mobility  travel  digitalcitizenship 
october 2017 by robertogreco
You Have a New Memory - Long View on Education
"Last night I nearly cleaned out my social media presence on Instagram as I’ve used it about 6 times in two years. More generally, I want to pull back on any social media that isn’t adding to my life (yeah, Facebook, I’m talking about you). Is there anything worth staying on Instagram for? I know students use it to show off the photographic techniques they learn in their digital photography class. When I scrolled through to see what photos have been posted from the location of our school, I was caught by a very striking image that represents a view out of a classroom.

One of the most striking things about Instagram is how students engage with it (likes) way more than they do our school Twitter stream. I care about where their engagement happens since in the last two days of learning conferences, many students told me that they got their news through Snapchat. But neither Instagram nor Snapchat are where I have the interactions that I value.

This poses a serious challenge for teaching media literacy, but also for teaching the more traditional forms of text. With my Grade 9s, we have been reading and crafting memoirs. How does their construction of ephemeral memoirs on Snapchat and curated collections of memories on Instagram shape both how they write and see themselves?

Even though I understand how Snapchat works, I will never understand what it’s like to feel the draw of streaks or notifications. And with Instagram, I’m well past a point where I’m drawn to construct images that vie for hundreds of likes. I’m simply not shaped by these medias in the same way.

Beyond different medias, students really carry around different devices than I do, even though they may both be called iPhones. Few of them read the news on it or need to sift through work emails. But in both cases, these devices form the pathway to a public presentation of self, which is something that I struggle with on many levels. I’m happy to be out here in public intellectual mode sharing and criticizing ideas, and to reflect on my teaching and share what my students are doing, and to occasionally put out parts of my personal life, but I resent the way that platforms work to combine all of those roles into one public individual.

Just this morning, I received the most bizarre notification from my Apple Photos: “You Have a New Memory”. So, even in the relatively private space between my stored photos and my screen, algorithms give birth to new things I need to be made aware of. Notified. How I go about opting out of social media now seems like an easier challenge than figuring out how I withdraw from the asocial nudges that emerge from my own archives."
2017  benjamindoxtdator  instagram  twitter  facebook  algorithms  memory  memories  photography  presentationofself  apple  iphone  smartphones  technology  teaching  education  edtech  medialiteracy  engagement  snapchat  ephemerality  text  memoirs  notifications  likes  favorites  ephemeral 
october 2017 by robertogreco
Ellen Ullman: Life in Code: "A Personal History of Technology" | Talks at Google - YouTube
"The last twenty years have brought us the rise of the internet, the development of artificial intelligence, the ubiquity of once unimaginably powerful computers, and the thorough transformation of our economy and society. Through it all, Ellen Ullman lived and worked inside that rising culture of technology, and in Life in Code she tells the continuing story of the changes it wrought with a unique, expert perspective.

When Ellen Ullman moved to San Francisco in the early 1970s and went on to become a computer programmer, she was joining a small, idealistic, and almost exclusively male cadre that aspired to genuinely change the world. In 1997 Ullman wrote Close to the Machine, the now classic and still definitive account of life as a coder at the birth of what would be a sweeping technological, cultural, and financial revolution.

Twenty years later, the story Ullman recounts is neither one of unbridled triumph nor a nostalgic denial of progress. It is necessarily the story of digital technology’s loss of innocence as it entered the cultural mainstream, and it is a personal reckoning with all that has changed, and so much that hasn’t. Life in Code is an essential text toward our understanding of the last twenty years—and the next twenty."
ellenullman  bias  algorithms  2017  technology  sexism  racism  age  ageism  society  exclusion  perspective  families  parenting  mothers  programming  coding  humans  humanism  google  larrypage  discrimination  self-drivingcars  machinelearning  ai  artificialintelligence  literacy  reading  howweread  humanities  education  publicschools  schools  publicgood  libertarianism  siliconvalley  generations  future  pessimism  optimism  hardfun  kevinkelly  computing 
october 2017 by robertogreco
Idle Words
"The real story in this mess is not the threat that algorithms pose to Amazon shoppers, but the threat that algorithms pose to journalism. By forcing reporters to optimize every story for clicks, not giving them time to check or contextualize their reporting, and requiring them to race to publish follow-on articles on every topic, the clickbait economics of online media encourage carelessness and drama. This is particularly true for technical topics outside the reporter’s area of expertise.

And reporters have no choice but to chase clicks. Because Google and Facebook have a duopoly on online advertising, the only measure of success in publishing is whether a story goes viral on social media. Authors are evaluated by how individual stories perform online, and face constant pressure to make them more arresting. Highly technical pieces are farmed out to junior freelancers working under strict time limits. Corrections, if they happen at all, are inserted quietly through ‘ninja edits’ after the fact.

There is no real penalty for making mistakes, but there is enormous pressure to frame stories in whatever way maximizes page views. Once those stories get picked up by rival news outlets, they become ineradicable. The sheer weight of copycat coverage creates the impression of legitimacy. As the old adage has it, a lie can get halfway around the world while the truth is pulling its boots on.

Earlier this year, when the Guardian published an equally ignorant (and far more harmful) scare piece about a popular secure messenger app, it took a group of security experts six months of cajoling and pressure to shame the site into amending its coverage. And the Guardian is a prestige publication, with an independent public editor. Not every story can get such editorial scrutiny on appeal, or attract the sympathetic attention of Teen Vogue.

The very machine learning systems that Channel 4’s article purports to expose are eroding online journalism’s ability to do its job.

Moral panics like this one are not just harmful to musket owners and model rocket builders. They distract and discredit journalists, making it harder to perform the essential function of serving as a check on the powerful.

The real story of machine learning is not how it promotes home bomb-making, but that it's being deployed at scale with minimal ethical oversight, in the service of a business model that relies entirely on psychological manipulation and mass surveillance. The capacity to manipulate people at scale is being sold to the highest bidder, and has infected every aspect of civic life, including democratic elections and journalism.

Together with climate change, this algorithmic takeover of the public sphere is the biggest news story of the early 21st century. We desperately need journalists to cover it. But as they grow more dependent on online publishing for their professional survival, their capacity to do this kind of reporting will disappear, if it has not disappeared already."
algorithms  amazon  internet  journalism  climatechange  maciejceglowski  moralpanic  us  clickbait  attention  ethics  machinelearning  maciejcegłowski 
september 2017 by robertogreco
Tim Maughan on Twitter: "Zuckerberg translated: I created a thing that became incredibly powerful and complex, and I now have no control over it https://t.co/nIMEez6IT5"
"Zuckerberg translated: I created a thing that became incredibly powerful and complex, and I now have no control over it [screenshot]

been saying this for ages (as has Curtis and others) - this is now the way the world works.

We build systems so complex we don't understand them, and can't control. Instead we try and manage and reactively fire-fight small parts.

see also: all markets, supply chains, the media, algorithms, economies, day to day politics, policing, advertising..just take your pick.

How do you make sense of a system no single individual can comprehend? You lose agency and blame others. You dream up conspiracy theories.

Or you try to find one single answer or reason - and you argue violently for it - when the reality is it's far too complex for that.

"It was her emails! The media! Racism! It was bernie! No, it was the russians!" It was all those things, plus x more levels of complexity.

This all sounds very 'we're fucked' and defeatist and, well, yeah. Maybe. Or maybe we can try and find ways to wrestle control back.

One thing these systems all have in common: their purpose is primarily to create and hoard capital. Maybe we should pivot away from that?

More relevant quotes re complexity, control, and automation from that Zuckerberg statement (which is here https://www.facebook.com/zuck/posts/10104052907253171 …) [two screenshots]"
timmaughan  elections  2017  2016  markzuckerberg  facebook  systems  complexity  agency  cv  control  systemsthinking  economics  algorithms  media  supplychains  advertising  politics  policing  lawenforcement 
september 2017 by robertogreco
recalibrating your sites – the ANOVA
"Not too long ago, I felt the need to change the stream of personalities and attitudes that were pouring into my head, and it’s been remarkable.

This was really the product of idiosyncratic personal conditions, but it’s ended up being a good intellectual exercise too. I had to rearrange a few things in my digital social life. And concurrently I had realized that my sense of the world was being distorted by the flow of information that was being deposited into my brain via the internet. I hadn’t really lost a sense of what the “other side” thinks politically; I’m still one of those geezers who forces himself to read Reason and the Wall Street Journal op/ed page and, god help me, National Review. But I had definitely lost a sense of the mental lives of people who did not occupy my various weird interests.

What were other people thinking about, at least as far as could be gleaned by what they shared online? What appeared to be a big deal to them and what didn’t? I had lost my sense of social proportion. I couldn’t tell if the things my friends were obsessing about were things that the rest of the world was obsessing about. Talking to IRL friends that don’t post much or at all online helped give me a sense that I was missing something. But I didn’t know what.

No, I had to use the tools available to me to dramatically change the opinions and ideas and attitudes that were flowing into my mental life. And it had become clear that, though I have an RSS feed and I peruse certain websites and publications regularly, though I still read lots of books and physical journals and magazines, the opinions I was receiving were coming overwhelmingly through social media. People shared things and commented on what they shared on Facebook and Twitter, they made clear what ideas were permissible and what weren’t on Facebook and Twitter, they defined the shared mental world on Facebook and Twitter. They created a language that, if you weren’t paying attention, looked like the lingua franca. I’m sure there are people out there who can take all of this in with the proper perspective and not allow it to subtly shape your perception of social attitudes writ large. But I can’t.

It’s all particularly disturbing because a lot of what you see and don’t online is the product of algorithms that are blunt instruments at best.

So I set about disconnecting, temporarily, from certain people, groups, publications, and conversations. I found voices that popped up in my feeds a lot and muted them. I unfollowed groups and pages. I looked out for certain markers of status and social belonging and used them as guides for what to avoid. I was less interested in avoiding certain subjects than I was in avoiding certain perspectives, the social frames that we all use to understand the world. The news cycle was what it was; I could not avoid Trump, as wonderful as that sounds. But I could avoid a certain way of looking at Trump, and at the broader world. In particular I wanted to look past what we once called ideology: I wanted to see the ways in which my internet-mediated intellectual life was dominated by assumptions that did not recognize themselves as assumptions, to understand how the perspective that did not understand itself to be a perspective had distorted my vision of the world. I wanted to better see the water in which my school of fish swims.

Now this can be touchy – mutually connecting with people on social media has become a loaded thing in IRL relationships, for better or worse. Luckily both Facebook and Twitter give you ways to not see someone’s posts without them knowing and without severing the connection. Just make a list of people, pages, and publications that you want to take a diet from, and after a month or two of seeing how different things look, go back to following them. (Alternatively: don’t.) Really do it! The tools are there, and you can always revert back. Just keep a record of what you’re doing.

I was prepared for this to result in a markedly different online experience for me, and for it to somewhat change my perception of what “everyone” thinks, of what people are reading, watching, and listening to, etc. But even so, I’ve been floored by how dramatically different the online world looks with a little manipulation of the feeds. A few subjects dropped out entirely; the Twin Peaks reboot went from being everywhere to being nowhere, for example. But what really changed was the affect through which the world was presenting itself to me.

You would not be surprised by what my lenses appear to have been (and still largely to be): very college educated, very left-leaning, very New York, very media-savvy, very middlebrow, and for lack of a better word, very “cool.” That is, the perspective that I had tried to wean myself off of was made up of people whose online self-presentation is ostentatiously ironic, in-joke heavy, filled with cultural references that are designed to hit just the right level of obscurity, and generally oriented towards impressing people through being performatively not impressed by anything. It was made up of people who are passionately invested in not appearing to be passionately invested in anything. It’s a sensibility that you can trace back to Gawker and Spy magazine and much, much further back than that, if you care to.

Perhaps most dramatic was the changes to what – and who – was perceived as a Big Deal. By cutting out a hundred voices or fewer, things and people that everybody talks about became things and people that nobody talks about. The internet is a technology for creating small ponds for us to all be big fish in. But you change your perspective just slightly, move over just an inch, and suddenly you get a sense of just how few people know about you or could possibly care. It’s oddly comforting, to be reminded that even if you enjoy a little internet notoriety, the average person on the street could not care less who you are or what you do. I recommend it.

Of course, there are profound limits to this. My feeds are still dominantly coming from a few overlapping social cultures. Trimming who I’m following hasn’t meant that I’m suddenly connected to more high school dropouts, orthodox Jews, senior citizens, or people who don’t speak English. I would never pretend that this little exercise has given me a truly broad perspective. The point has just been to see how dramatically a few changes to my digital life could alter my perception of “the conversation.” And it’s done that. More than ever, I worry that our sense of shared political assumptions and the perceived immorality of the status quo is the result of systems that exclude a large mass of people, whose opinions will surely matter in the political wars ahead.

I am now adding some of what I cut back in to my digital life. The point was never really to avoid particular publications or people. I like some of what and who I had cut out very much. The point is to remain alive to how arbitrary and idiosyncratic changes in the constant flow of information can alter our perception of the human race. It’s something I intend to do once a year or so, to jolt myself back into understanding how limiting my perspective really is.

Everyone knows, these days, that we’re living in digitally-enabled bubbles. The trouble is that our instincts are naturally to believe that everyone else is in a bubble, or at least that their bubbles are smaller and with thicker walls. But people like me – college educated, living in an urban enclave, at least socially liberal, tuned in to arts and culture news and criticism, possessed of the vocabulary of media and the academy, “savvy” – you face unique temptations in this regard. No, I don’t think that this kind of bubble is the same as someone who only gets their news from InfoWars and Breitbart. But the fact that so many people like me write the professional internet, the fact that the creators of the idioms and attitudes of our newsmedia and cultural industry almost universally come from a very thin slice of the American populace, is genuinely dangerous.

To regain perspective takes effort, and I encourage you all to expend that effort, particularly if you are an academic or journalist. Your world is small, and our world is big."
freddiedeboer  2017  internet  twitter  facebook  filterbubbles  socialmedia  relationships  algorithms  echochambers  academia  journalism  culture  society  diversity  perspective  listening  web  media  feeds 
august 2017 by robertogreco
The Algorithm That Makes Preschoolers Obsessed With YouTube Kids - The Atlantic
"Surprise eggs and slime are at the center of an online realm that’s changing the way the experts think about human development."



"And here’s where the ouroboros factor comes in: Kids watch the same kinds of videos over and over. Videomakers take notice of what’s most popular, then mimic it, hoping that kids will click on their stuff. When they do, YouTube’s algorithm takes notice, and recommends those videos to kids. Kids keep clicking on them, and keep being offered more of the same. Which means video makers keep making those kinds of videos—hoping kids will click.

This is, in essence, how all algorithms work. It’s how filter bubbles are made. A little bit of computer code tracks what you find engaging—what sorts of videos do you watch most often, and for the longest periods of time?—then sends you more of that kind of stuff. Viewed a certain way, YouTube Kids is offering programming that’s very specifically tailored to what children want to see. Kids are actually selecting it themselves, right down to the second they lose interest and choose to tap on something else. The YouTube app, in other words, is a giant reflection of what kids want. In this way, it opens a special kind of window into a child’s psyche.

But what does it reveal?

“Up until very recently, surprisingly few people were looking at this,” says Heather Kirkorian, an assistant professor of human development in the School of Human Ecology at the University of Wisconsin-Madison. “In the last year or so, we’re actually seeing some research into apps and touchscreens. It’s just starting to come out.”

Kids’ videos are among the most watched content in YouTube history. This video, for example, has been viewed more than 2.3 billion times, according to YouTube’s count:

[video: https://www.youtube.com/watch?v=KYniUCGPGLs ]



"The vague weirdness of these videos aside, it’s actually easy to see why kids like them. “Who doesn’t want to get a surprise? That’s sort of how all of us operate,” says Sandra Calvert, the director of the Children’s Digital Media Center at Georgetown University. In addition to surprises being fun, many of the videos are basically toy commercials. (This video of a person pressing sparkly Play-Doh onto chintzy Disney princess figurines has been viewed 550 million times.) And they let kids tap into a whole internet’s worth of plastic eggs and perceived power. They get to choose what they watch. And kids love being in charge, even in superficial ways.

“It’s sort of like rapid-fire channel surfing,” says Michael Rich, a professor of pediatrics at Harvard Medical School and the director of the Center on Media and Child Health. “In many ways YouTube Kids is better suited to the attention span of a young child—just by virtue of its length—than something like a half-hour or hour broadcast program can be.”

Rich and others compare the app to predecessors like Sesame Street, which introduced short segments within a longer program, in part to keep the attention of the young children watching. For decades, researchers have looked at how kids respond to television. Now they’re examining the way children use mobile apps—how many hours they’re spending, which apps they’re using, and so on."



"“You have to do what the algorithm wants for you,” says Nathalie Clark, the co-creator of a similarly popular channel, Toys Unlimited, and a former ICU nurse who quit her job to make videos full-time. “You can’t really jump back and forth between themes.”

What she means is, once YouTube’s algorithm has determined that a certain channel is a source of videos about slime, or colors, or shapes, or whatever else—and especially once a channel has had a hit video on a given topic—videomakers stray from that classification at their peril. “Honestly, YouTube picks for you,” she says. “Trending right now is Paw Patrol, so we do a lot of Paw Patrol.”

There are other key strategies for making a YouTube Kids video go viral. Make enough of these things and you start to get a sense of what children want to see, she says. “I wish I could tell you more,” she added, “But I don’t want to introduce competition. And, honestly, nobody really understands it. ”

The other thing people don’t yet understand is how growing up in the mobile internet age will change the way children think about storytelling. “There’s a rich set of literature showing kids who are reading more books are more imaginative,” says Calvert, of the Children’s Digital Media Center. “But in the age of interactivity, it’s no longer just consuming what somebody else makes. It’s also making your own thing.”

In other words, the youngest generation of app users is developing new expectations about narrative structure and informational environments. Beyond the thrill a preschooler gets from tapping a screen, or watching The Bing Bong Song video for the umpteenth time, the long-term implications for cellphone-toting toddlers are tangled up with all the other complexities of living in a highly networked on-demand world."
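
[The feedback loop described above — kids click, the algorithm takes notice, more of the same gets recommended — is easy to watch in a toy model. This is a minimal Python sketch of that dynamic, not YouTube’s actual system; the topics, starting weights, and click probabilities are all invented:

import random

topics = ["surprise eggs", "slime", "colors", "paw patrol", "shapes"]

# The recommender's running belief about what this child engages with.
weights = {t: 1.0 for t in topics}

def recommend():
    """Sample a topic in proportion to accumulated engagement."""
    return random.choices(list(weights), list(weights.values()))[0]

def child_clicks(topic):
    """A toy child with a mild built-in preference for surprise eggs."""
    return random.random() < (0.9 if topic == "surprise eggs" else 0.5)

for _ in range(500):
    shown = recommend()
    if child_clicks(shown):
        weights[shown] += 1.0  # each click feeds back into future recommendations

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})

After a few hundred steps the distribution typically collapses toward “surprise eggs”: a filter bubble grown from a small initial preference, which is the ouroboros the article describes. ]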
algorithms  adriennelafrance  youtube  2017  children  edtech  attention  nathalieclark  michaelrich  psychology  youtubekids  rachelbar  behavior  toddlers  repetition  storytelling  narrative  preschoolers 
july 2017 by robertogreco
Frontier notes on metaphors: the digital as landscape and playground - Long View on Education
"I am concerned with the broader class of metaphors that suggest the Internet is an inert and open place for us to roam. Scott McLeod often uses the metaphor of a ‘landscape’: “One of schools’ primary tasks is to help students master the dominant information landscape of their time.”

McLeod’s central metaphor – mastering the information landscape – fits into a larger historical narrative that depicts the Internet as a commons in the sense of “communally-held space, one which it is specifically inappropriate for any single individual or subset of the community (including governments) to own or control.” Adriane Lapointe continues, “The internet is compared to a landscape which can be used in various ways by a wide range of people for whatever purpose they please, so long as their actions do not interfere with the actions of others.”

I suspect that the landscape metaphor resonates with people because it captures how they feel the Internet should work. Sarah T. Roberts argues that we are tempted to imagine the digital as “valueless, politically neutral and as being without material consequences.” However, the digital information landscape is an artifact shaped by capitalism, the US military, and corporate power. It’s a landscape that actively tracks and targets us, buys and sells our information. And it’s mastered only by the corporations, CEOs and venture capitalists.

Be brave? I have no idea what it would mean to teach students how to ‘master’ the digital landscape. The idea of ‘mastering’ recalls the popular frontier and pioneer metaphors that have fallen out of fashion since 1990s as the Internet became ubiquitous, as Jan Rune Holmevik notes. There is of course a longer history of the “frontiers of knowledge” metaphor going back to Francis Bacon and passing through Vannevar Bush, and thinking this way has become, according to Gregory Ulmer, “ubiquitous, a reflex, a habit of mind that shapes much of our thinking about inquiry” – and one that needs to be rethought if we take the postcolonial movement seriously.

While we might worry about being alert online, we aren’t exposed to enough stories about the physical and material implications of the digital. It’s far too easy to think that the online landscape exists only on our screens, never intersecting with the physical landscape in which we live. Yet, the Washington Post reports that in order to pave the way for new data centers, “the Prince William County neighborhood [in Virginia] of mostly elderly African American homeowners is being threatened by plans for a 38-acre computer data center that will be built nearby. The project requires the installation of 100-foot-high towers carrying 230,000-volt power lines through their land. The State Corporation Commission authorized Dominion Virginia Power in late June to seize land through eminent domain to make room for the towers.” In this case, the digital is transforming the physical landscape with hostile indifference to the people that live there.

Our students cannot be digitally literate citizens if they don’t know stories about the material implications about the digital. Cathy O’Neil has developed an apt metaphor for algorithms and data – Weapons of Math Destruction – which have the potential to destroy lives because they feed on systemic biases. In her book, O’Neil explains that while attorneys cannot cite the neighborhood people live in as a reason to deny prisoners parole, it is permissible to package that judgment into an algorithm that generates a prediction of recidivism."
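
[O’Neil’s point — that a judgment attorneys may not cite can be packaged into a score — can be made concrete with a deliberately crude sketch. Nothing below is a real recidivism model; the feature, weights, and numbers are invented, and the Python exists only to show how a proxy variable smuggles the prohibited criterion back in:

# No attorney may cite "neighborhood" to deny parole, but a model is free
# to weight it -- and neighborhood often proxies for race and poverty.
NEIGHBORHOOD_RISK = {"eastside": 0.8, "westside": 0.2}  # invented numbers

def risk_score(prior_offenses, neighborhood):
    return round(0.1 * prior_offenses + NEIGHBORHOOD_RISK[neighborhood], 2)

# Two people with identical records get different "objective" scores:
print(risk_score(1, "eastside"))  # 0.9
print(risk_score(1, "westside"))  # 0.3

Identical records, different scores: the neighborhood coefficient is doing the work the courtroom forbids. ]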



When I talk to students about the implications of their searches being tracked, I have no easy answers for them. How can youth use the net for empowerment when there’s always the possibility that their queries will count against them? Yes, we can use google to ask frank questions about our sexuality, diet, and body – or any of the other ways we worry about being ‘normal’ – but when we do so, we do not wander a non-invasive landscape. And there are few cues that we need to be alert or smart.

Our starting point should not be the guiding metaphors of the digital as a playground where we need to practice safety or a landscape that we can master, but Shoshana Zuboff’s analysis of surveillance capitalism: “The game is selling access to the real-time flow of your daily life –your reality—in order to directly influence and modify your behavior for profit. This is the gateway to a new universe of monetization opportunities: restaurants who want to be your destination. Service vendors who want to fix your brake pads. Shops who will lure you like the fabled Sirens.”



So what do we teach students? I think that Chris Gilliard provides the right pedagogical insight to end on:
Students are often surprised (and even angered) to learn the degree to which they are digitally redlined, surveilled, and profiled on the web and to find out that educational systems are looking to replicate many of those worst practices in the name of “efficiency,” “engagement,” or “improved outcomes.” Students don’t know any other web—or, for that matter, have any notion of a web that would be different from the one we have now. Many teachers have at least heard about a web that didn’t spy on users, a web that was (theoretically at least) about connecting not through platforms but through interfaces where individuals had a significant amount of choice in saying how the web looked and what was shared. A big part of the teaching that I do is to tell students: “It’s not supposed to be like this” or “It doesn’t have to be like this.”
"
benjamindoxtdator  2017  landscapes  playgrounds  georgelakoff  markjohnson  treborscholz  digitalcitizenship  internet  web  online  mckenziewark  privacy  security  labor  playbor  daphnedragona  gamification  uber  work  scottmcleod  adrianelapointe  sarahroberts  janruneholmevik  vannevarbush  gregoryulmer  francisbacon  chrisgilliard  pedagogy  criticalthinking  shoshanazuboff  surveillance  surveillancecapitalism  safiyanoble  google  googleglass  cathyo'neil  algorithms  data  bigdata  redlining  postcolonialism  race  racism  criticaltheory  criticalpedagogy  bias 
july 2017 by robertogreco
15 Sorting Algorithms in 6 Minutes - YouTube
"Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.
Sorts random shuffles of integers, with both speed and the number of items adapted to each algorithm's complexity.

The algorithms are: selection sort, insertion sort, quick sort, merge sort, heap sort, radix sort (LSD), radix sort (MSD), std::sort (intro sort), std::stable_sort (adaptive merge sort), shell sort, bubble sort, cocktail shaker sort, gnome sort, bitonic sort and bogo sort (30 seconds of it).

More information on the "Sound of Sorting" at http://panthema.net/2013/sound-of-sorting/ "

[via: https://boingboing.net/2017/06/28/15-sorting-algorithms-visualiz.html ]
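
[For a concrete point of reference, here is one of the sorts named above — cocktail shaker sort, a bidirectional bubble sort — as a short Python sketch. The video’s own implementations belong to the “Sound of Sorting” program; this is just an illustration:

def cocktail_shaker_sort(a):
    """Bubble sort that sweeps alternately left-to-right and right-to-left."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        for i in range(lo, hi):  # forward pass floats the max to the top
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        hi -= 1
        for i in range(hi, lo, -1):  # backward pass sinks the min to the bottom
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
        lo += 1
    return a

print(cocktail_shaker_sort([5, 1, 4, 2, 8, 0, 2]))  # [0, 1, 2, 2, 4, 5, 8]

The two alternating passes are why, in the visualization, the sorted region grows from both ends of the array at once. ]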
algorithms  programming  sorting  visualization  sound  video  timobingmann  computing  classideas 
june 2017 by robertogreco
The Silicon Valley Billionaires Remaking America’s Schools - The New York Times
"The involvement by some of the wealthiest and most influential titans of the 21st century amounts to a singular experiment in education, with millions of students serving as de facto beta testers for their ideas. Some tech leaders believe that applying an engineering mind-set can improve just about any system, and that their business acumen qualifies them to rethink American education.

“They are experimenting collectively and individually in what kinds of models can produce better results,” said Emmett D. Carson, chief executive of Silicon Valley Community Foundation, which manages donor funds for Mr. Hastings, Mr. Zuckerberg and others. “Given the changes in innovation that are underway with artificial intelligence and automation, we need to try everything we can to find which pathways work.”

But the philanthropic efforts are taking hold so rapidly that there has been little public scrutiny."



"But many parents and educators said in interviews that they were unaware of the Silicon Valley personalities and money influencing their schools. Among them was Rafranz Davis, executive director of professional and digital learning at Lufkin Independent School District, a public school system in Lufkin, Tex., where students regularly use DreamBox Learning, the math program that Mr. Hastings subsidized, and have tried Code.org’s coding lessons.

“We should be asking a lot more questions about who is behind the curtain,” Ms. Davis said."
automation  education  personalization  facebook  summitpublicschools  markzuckerberg  publicschools  edtech  data  charters  culture  2017  marcbenioff  influence  democracy  siliconvalley  hourofcode  netflix  algorithms  larrycuban  rafranzdavis  salesforce  reedhastings  dreamboxlearning  dreambox  jessiewoolley-wilson  surveillance  dianetavenner 
june 2017 by robertogreco
Instagram Created a Monster: A No B.S. Guide to What's Really Going On
"Over the last few years Instagram became THE new way to advertise, and money got in the way, creating a toxic number game. Now getting our work seen without playing this game is becoming harder and harder. What once used to be about content and originality is now reduced to some meaningless algorithm dynamics, and whoever has the time and the cash to trick this system wins the game.

I’m sure many of you have no idea what goes on behind the scenes and I’m sure even fewer of you know that some of us are using Instagram as a business tool to help us make a living.

I’m writing this with a heavy heart, as I know I’m a huge hypocrite. I’ve been playing the game for the last 6 months, and it made me miserable. I tried to play it as ethically as possible, but when you are pushed into a corner and gasping for air, sometimes you have to set ethics aside if you want to survive. But surviving doesn’t mean living, and the artist in me is desperate to feel alive again.

I still care about doing things right. So I think it’s time to stop the bulls**t, come clean, and tell you exactly what’s happening. I owe you that, because if I get to live the life I live today, if I get to do what I love the most — traveling, writing and making art — it’s also thanks to my followers!

So here’s the truth, the whole truth and nothing but the truth: a no bulls**t guide to what’s really going on!"



"Why Numbers Matter: Influencers and Advertising…

How It All Started…

How the Game is Played: Tricks to Get Followers and Engagement…
We Buy Followers, Likes, and Comments (I’m Not Guilty)…
We Follow/Unfollow, Like, and Comment on Random People (Partially Guilty)…
We Use Instagress and Co. (I’m Guilty)…
We Go to Instagram Spots (I’m Guilty)…
We Get Featured by Collective Accounts…
We Are Part of Comment Pods (I’m Guilty)…
The Best Kept Secret: The Instagram Mafia and Explorer Page (I’m Not Guilty)…"
instagram  algorithms  facebook  2017  saramelotti  gamification  advertising  capitalism  latecapitalism  commerce  influence  popularity 
june 2017 by robertogreco
Eyes Without a Face — Real Life
"The American painter and sculptor Ellsworth Kelly — remembered mainly for his contributions to minimalism, Color Field, and Hard-edge painting — was also a prodigious birdwatcher. “I’ve always been a colorist, I think,” he said in 2013. “I started when I was very young, being a birdwatcher, fascinated by the bird colors.” In the introduction to his monograph, published by Phaidon shortly before his death in 2015, he writes, “I remember vividly the first time I saw a Redstart, a small black bird with a few very bright red marks … I believe my early interest in nature taught me how to ‘see.’”

Vladimir Nabokov, the world’s most famous lepidopterist, classified, described, and named multiple butterfly species, reproducing their anatomy and characteristics in thousands of drawings and letters. “Few things have I known in the way of emotion or appetite, ambition or achievement, that could surpass in richness and strength the excitement of entomological exploration,” he wrote. Tom Bradley suggests that Nabokov suffered from the same “referential mania” as the afflicted son in his story “Signs and Symbols,” imagining that “everything happening around him is a veiled reference to his personality and existence” (as evidenced by Nabokov’s own “entomological erudition” and the influence of a most major input: “After reading Gogol,” he once wrote, “one’s eyes become Gogolized. One is apt to see bits of his world in the most unexpected places”).

For me, a kind of referential mania of things unnamed began with fabric swatches culled from Alibaba and fine suiting websites, with their wonderfully zoomed images that give you a sense of a particular material’s grain or flow. The sumptuous decadence of velvets and velours that suggest the gloved armatures of state power, and their botanical analogue, mosses and plant lichens. Industrial materials too: the seductive artifice of Gore-Tex and other thermo-regulating meshes, weather-palimpsested blue tarpaulins and piney green garden netting (winningly known as “shade cloth”). What began as an urge to collect colors and textures, to collect moods, quickly expanded into the delicious world of carnivorous plants and bugs — mantises exhibit a particularly pleasing biomimicry — and deep-sea aphotic creatures, which rewardingly incorporate a further dimension of movement. Walls suggest piled textiles, and plastics the murky translucence of jellyfish, and in every bag of steaming city garbage I now smell a corpse flower.

“The most pleasurable thing in the world, for me,” wrote Kelly, “is to see something and then translate how I see it.” I feel the same way, dosed with a healthy fear of cliché or redundancy. Why would you describe a new executive order as violent when you could compare it to the callous brutality of the peacock shrimp obliterating a crab, or call a dress “blue” when it could be cobalt, indigo, cerulean? Or ivory, alabaster, mayonnaise?

We might call this impulse building visual acuity, or simply learning how to see, the seeing that John Berger describes as preceding even words, and then again as completely renewed after he underwent the “minor miracle” of cataract surgery: “Your eyes begin to re-remember first times,” he wrote in the illustrated Cataract, “…details — the exact gray of the sky in a certain direction, the way a knuckle creases when a hand is relaxed, the slope of a green field on the far side of a house, such details reassume a forgotten significance.” We might also consider it as training our own visual recognition algorithms and taking note of visual or affective relationships between images: building up our datasets. For myself, I forget people’s faces with ease but never seem to forget an image I have seen on the internet.

At some level, this training is no different from Facebook’s algorithm learning based on the images we upload. Unlike Google, which relies on humans solving CAPTCHAs to help train its AI, Facebook’s automatic generation of alt tags pays dividends in speed as well as privacy. Still, the accessibility context in which the tags are deployed limits what the machines currently tell us about what they see: Facebook’s researchers are trying to “understand and mitigate the cost of algorithmic failures,” according to the aforementioned white paper, as when, for example, humans were misidentified as gorillas and blind users were led to then comment inappropriately. “To address these issues,” the paper states, “we designed our system to show only object tags with very high confidence.” “People smiling” is less ambiguous and more anodyne than happy people, or people crying.

So there is a gap between what the algorithm sees (analyzes) and says (populates an image’s alt text with). Even though it might only be authorized to tell us that a picture is taken outside, then, it’s fair to assume that computer vision is training itself to distinguish gesture, or the various colors and textures of the slope of a green field. A tag of “sky” today might be “cloudy with a threat of rain” by next year. But machine vision has the potential to do more than merely to confirm what humans see. It is learning to see something different that doesn’t reproduce human biases and uncover emotional timbres that are machinic. On Facebook’s platforms (including Instagram, Messenger, and WhatsApp) alone, over two billion images are shared every day: the monolith’s referential mania looks more like fact than delusion."
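
[The “very high confidence” design Aima quotes from the white paper is, mechanically, a threshold over a classifier’s (tag, score) outputs. Here is a minimal Python sketch with invented tags, scores, and cutoff — not Facebook’s actual pipeline:

def alt_text(predictions, threshold=0.9):
    """Keep only tags the model is very sure of; ambiguity is silently dropped."""
    confident = [tag for tag, score in predictions if score >= threshold]
    return ("Image may contain: " + ", ".join(confident)) if confident else "Image"

# Invented classifier output for one photo:
preds = [("sky", 0.97), ("outdoor", 0.95), ("people smiling", 0.92),
         ("people crying", 0.41), ("gorilla", 0.33)]
print(alt_text(preds))  # Image may contain: sky, outdoor, people smiling

The threshold is also the gap between seeing and saying that the essay describes: everything the model saw below the cutoff — including its worst mistakes — simply never gets said. ]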
2017  rahelaima  algorithms  facebook  ai  artificialintelligence  machinelearning  tagging  machinevision  at  ellsworthkelly  color  tombradley  google  captchas  matthewplummerfernandez  julesolitski  neuralnetworks  eliezeryudkowsky  seeing 
may 2017 by robertogreco
The Complacent Class (Episode 1/5) - YouTube
[See also: http://learn.mruniversity.com/everyday-economics/tyler-cowen-on-american-culture-and-innovation/ ]

"Restlessness has long been seen as a signature trait of what it means to be American. We've been willing to cross great distances, take big risks, and adapt to change in a way that has produced a dynamic economy. From Ben Franklin to Steve Jobs, innovation has been firmly rooted in American DNA.

What if that's no longer true?

Let’s take a journey back to the 19th century – specifically, the Chicago World’s Fair of 1893. At that massive event, people got to do things like ride a ferris wheel, go on a moving sidewalk, see a dishwasher, see electric light, or even try modern chewing gum for the very first time. More than a third of the entire U.S. population at that time attended. And remember, this was 1893 when travel was much more difficult and costly.

Fairs that shortly followed Chicago included new inventions and novelties like the telephone, x-ray machine, hot dogs, and ice cream cones.

These earlier years of American innovation were filled with rapid improvement in a huge array of industries. Railroads, electricity, telephones, radio, reliable clean water, television, cars, airplanes, vaccines and antibiotics, nuclear power – the list goes on – all came from this era.

After about the 1970s, innovation on this scale slowed down. Computers and communication have been the focus. What we’ve seen more recently has been mostly incremental improvements, with the large exception of smart phones.

This means that we’ve experienced a ton of changes in our virtual world, but surprisingly few in our physical world. For example, travel hasn’t much improved and, in some cases, has even slowed down. The planes we’re primarily using? They were designed half a century ago.

Since the 1960s, our culture has gotten less restless, too. It’s become more bureaucratic. The sixties and seventies ushered in a wave of protests and civil disobedience. But today, people hire protest planners and file for permits. The demands for change are tamer compared to their mid-century counterparts.

This might not sound so bad. We’ve entered a golden age for many of our favorite entertainment options. Americans are generally better off than ever before. But the U.S. economy is less dynamic. We’re stagnating. We’re complacent. What does that mean for our economic and cultural future?"

[The New Era of Segregation (Episode 2/5)
https://www.youtube.com/watch?v=hNlA_Zz1_bM

Do you live in a “bubble?” There’s a good chance that the answer is, at least in part, a resounding “Yes.”

In our algorithm-driven world, digital servants cater to our individual preferences like never before. This has caused many improvements to our daily lives. For example, instead of gathering the kids together for a frustrating Blockbuster trip to pick out a VHS for family movie night, you can simply scroll through kid-friendly titles on Netflix that have been narrowed down based on your family’s previous viewing history. Not so bad.

But this algorithmic matching isn’t limited to entertainment choices. We’re also getting matched to spouses of a similar education level and earning potential. More productive workers are able to get easily matched to more productive firms. On the individual level, this is all very good. Our digital servants are helping us find better matches and improving our lives.

What about at the macro level? All of this matching can also produce more segregation – but on a much broader level than just racial segregation. People with similar income and education levels, and who do similar types of work, are more likely to cluster into their own little bubbles. This matching has consequences, and they’re not all virtual.

Power couples and highly productive workers are concentrating in metropolises like New York City and San Francisco. With many high earners, lots of housing demand, and strict building codes, rents in these types of cities are skyrocketing. People with lower incomes simply can no longer afford the cost of living, so they leave. New people with lower incomes also aren’t coming in, so we end up with a type of self-reinforcing segregation.

If you think back to the 2016 U.S. election, you’ll remember that most political commentators, who tend to reside in trendy large cities, were completely shocked by the rise of Donald Trump. What part did our new segregation play in their inability to understand what was happening in middle America?

In terms of racial segregation, there are worrying trends. The variety and level of racism we’ve seen in the past may be on the decline, but the data show less residential racial mixing among whites and minorities.

Why does this matter? For a dynamic economy, mixing a wide variety of people in everyday life is crucial for the development of ideas and upward mobility. If matching is preventing mixing, we have to start making intentional changes to improve socio-economic integration and bring dynamism back into the American economy."]
safety  control  life  us  innovation  change  invention  risk  risktaking  stasis  travel  transportation  dynamism  stagnation  economics  crisis  restlessness  tylercowen  segregation  protest  communication  disobedience  compliance  civildisobedience  infrastructure  complacency  2017  algorithms  socialmobility  inequality  race  class  filterbubbles  incomeinequality  isolation  cities  urban  urbanism 
march 2017 by robertogreco
Uber’s ghost map and the meaning of greyballing | ROUGH TYPE
"The Uber map is a media production. It presents a little, animated entertainment in which you, the user, play the starring role. You are placed at the very center of things, wherever you happen to be, and you are surrounded by a pantomime of oversized automobiles poised to fulfill your desires, to respond immediately to your beckoning. It’s hard not to feel flattered by the illusion of power that the Uber map grants you. Every time you open the app, you become a miniature superhero on a city street. You send out a bat signal, and the batmobile speeds your way. By comparison, taking a bus or a subway, or just hoofing it, feels almost insulting.

In a similar way, a Google map also sets you in a fictionalized story about a place, whether you use the map for navigation or for searching. You are given a prominent position on the map, usually, again, at its very center, and around you a city personalized to your desires takes shape. Certain business establishments and landmarks are highlighted, while other ones are not. Certain blocks are highlighted as “areas of interest“; others are not. Sometimes the highlights are paid for, as advertising; other times they reflect Google’s assessment of you and your preferences. You’re not allowed to know precisely why your map looks the way it does. The script is written in secret.

It’s not only maps. The news and message feeds presented to you by Facebook, or Apple or Google or Twitter, are also stories about the world, fictional representations manufactured both to appeal to your desires and biases and to provide a compelling context for advertising. Mark Zuckerberg may wring his hands over “fake news,” but fake news is to the usual Facebook feed what the Greyball map is to the usual Uber map: an extreme example of the norm.

When I talk about “you,” I don’t really mean you. The “you” around which the map or the news feed or any other digitized representation of the world coalesces is itself a representation. As John Cheney-Lippold explains in his forthcoming book We Are Data, companies like Facebook and Google create digital versions of their users derived through an algorithmic analysis of the data they collect about their users. The companies rely on these necessarily fictionalized representations for both technical reasons (human beings can’t be computed; to be rendered computable, you have to be turned into a digital representation) and commercial reasons (a digital representation of a person can be bought and sold). The “you” on the Uber map or in the Facebook feed is a fake — a character in a story — but it’s a useful and a flattering fake, so you accept it as an accurate portrayal of yourself: an “I” for an I.

Greyballing is not an aberration of the virtual world. Greyballing is the essence of virtuality."

[via: https://tinyletter.com/audreywatters/letters/hewn-no-204 ]
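[A minimal sketch of the mechanism Carr is describing, in Python. Uber has never published its Greyball code, so every name and detail here is hypothetical:

    import random

    def fake_cars(seed, n=8):
        # plausible-looking ghost cars scattered near a fixed downtown point;
        # they animate on the map but never actually arrive
        rng = random.Random(seed)
        return [{"lat": 37.77 + rng.uniform(-0.01, 0.01),
                 "lng": -122.42 + rng.uniform(-0.01, 0.01)}
                for _ in range(n)]

    def render_map(user, real_cars):
        # a user tagged by opaque criteria gets a fabricated view;
        # everyone else gets the ordinary, merely flattering one
        if user.get("greyballed"):
            return fake_cars(seed=user["id"])
        return real_cars

What the sketch shows is how small the difference is: a single branch, driven by opaque tagging criteria, separates the "normal" map from the ghost map, which is Carr's point about fake news and the usual feed.]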
mapping  maps  technology  self  simulacra  nicholascarr  via:audreywatters  greyballing  uber  ideology  fictions  data  algorithms  representation  news  facebooks  fakenews  cartography  business  capitalism  place  google 
march 2017 by robertogreco
Ed-Tech in a Time of Trump
"The thing is, I’d still be giving the much the same talk, just with a different title. “A Time of Trump” could be “A Time of Neoliberalism” or “A Time of Libertarianism” or “A Time of Algorithmic Discrimination” or “A Time of Economic Precarity.” All of this is – from President Trump to the so-called “new economy” – has been fueled to some extent by digital technologies; and that fuel, despite what I think many who work in and around education technology have long believed – have long hoped – is not necessarily (heck, even remotely) progressive."



"As Donna Haraway argues in her famous “Cyborg Manifesto,” “Feminist cyborg stories have the task of recoding communication and intelligence to subvert command and control.” I want those of us working in and with education technologies to ask if that is the task we’ve actually undertaken. Are our technologies or our stories about technologies feminist? If so, when? If so, how? Do our technologies or our stories work in the interest of justice and equity? Or, rather, have we adopted technologies for teaching and learning that are much more aligned with that military mission of command and control? The mission of the military. The mission of the church. The mission of the university.

I do think that some might hear Haraway’s framing – a call to “recode communication and intelligence” – and insist that that’s exactly what education technologies do and they do so in a progressive reshaping of traditional education institutions and practices. Education technologies facilitate communication, expanding learning networks beyond the classroom. And they boost intelligence – namely, how knowledge is created and shared.
Perhaps they do.

But do our ed-tech practices ever actually recode or subvert command and control? Do (or how do) our digital communication practices differ from those designed by the military? And most importantly, I’d say, does (or how does) our notion of intelligence?"



"This is a punch card, a paper-based method of proto-programming, one of the earliest ways in which machines could be automated. It’s a relic, a piece of “old tech,” if you will, but it’s also a political symbol. Think draft cards. Think the slogan “Do not fold, spindle or mutilate.” Think Mario Savio on the steps of Sproul Hall at UC Berkeley in 1964, insisting angrily that students not be viewed as raw materials in the university machine."



"We need to identify and we need to confront the ideas and the practices that are the lingering legacies of Nazism and fascism. We need to identify and we need to confront them in our technologies. Yes, in our education technologies. Remember: our technologies are ideas; they are practices. Now is the time for an ed-tech antifa, and I cannot believe I have to say that out loud to you.

And so you hear a lot of folks in recent months say “read Hannah Arendt.” And I don’t disagree. Read Arendt. Read The Origins of Totalitarianism. Read her reporting from the Nuremberg Trials.
But also read James Baldwin. Also realize that this politics and practice of surveillance and genocide isn’t just something we can pin on Nazi Germany. It’s actually deeply embedded in the American experience. It is part of this country as a technology."



"Who are the “undesirables” of ed-tech software and education institutions? Those students who are identified as “cheats,” perhaps. When we turn the cameras on, for example with proctoring software, those students whose faces and gestures are viewed – visually, biometrically, algorithmically – as “suspicious.” Those students who are identified as “out of place.” Not in the right major. Not in the right class. Not in the right school. Not in the right country. Those students who are identified – through surveillance and through algorithms – as “at risk.” At risk of failure. At risk of dropping out. At risk of not repaying their student loans. At risk of becoming “radicalized.” At risk of radicalizing others. What about those educators at risk of radicalizing others. Let’s be honest with ourselves, ed-tech in a time of Trump will undermine educators as well as students; it will undermine academic freedom. It’s already happening. Trump’s tweets this morning about Berkeley.

What do schools do with the capabilities of ed-tech as surveillance technology now in the time of a Trump? The proctoring software and learning analytics software and “student success” platforms all market themselves to schools claiming that they can truly “see” what students are up to, that they can predict what students will become. (“How will this student affect our averages?”) These technologies claim they can identify a “problem” student, and the implication, I think, is that then someone at the institution “fixes” her or him. Helps the student graduate. Convinces the student to leave.

But these technologies do not see students. And sadly, we do not see students. This is cultural. This is institutional. We do not see who is struggling. And let’s ask why we think, as the New York Times argued today, we need big data to make sure students graduate. Universities have not developed or maintained practices of compassion. Practices are technologies; technologies are practices. We’ve chosen computers instead of care. (When I say “we” here I mean institutions, not individuals within institutions. But I mean some individuals too.) Education has chosen “command, control, intelligence.” Education gathers data about students. It quantifies students. It has adopted a racialized and gendered surveillance system – one that is committed to disciplining minds and bodies – through our education technologies, through our education practices.

All along the way, or perhaps somewhere along the way, we have confused surveillance for care.

And that’s my takeaway for folks here today: when you work for a company or an institution that collects or trades data, you’re making it easy to surveil people and the stakes are high. They’re always high for the most vulnerable. By collecting so much data, you’re making it easy to discipline people. You’re making it easy to control people. You’re putting people at risk. You’re putting students at risk.

You can delete the data. You can limit its collection. You can restrict who sees it. You can inform students. You can encourage students to resist. Students have always resisted school surveillance.

But I hope that you also think about the culture of school. What sort of institutions will we have in a time of Trump? Ones that value open inquiry and academic freedom? I swear to you this: more data will not protect you. Not in this world of “alternate facts,” to be sure. Our relationships to one another, however, just might. We must rebuild institutions that value humans’ minds and lives and integrity and safety. And that means, in its current incarnation at least, in this current climate, ed-tech has very very little to offer us."
education  technology  audreywatters  edtech  2017  donaldtrump  neoliberalism  libertarianism  algorithms  neweconomy  economics  precarity  inequality  discrimination  donnaharaway  control  command  ppwer  mariosavio  nazism  fascism  antifa  jamesbaldwin  racism  hannaharendt  totalitarianism  politics 
february 2017 by robertogreco
TILT #1: librarians like to search, everyone else likes to find
"My father was a technologist and bullshitter. Not in that "doesn't tell the truth" way (though maybe some of that) but mostly in that "likes to shoot the shit with people" way. When he was being sociable he'd pass the time idly wondering about things. Some of these were innumeracy tests "How many of this thing do you think could fit inside this other thing?" or "How many of these things do you think there are in the world?" Others were more concrete "Can I figure out what percentage of the movies that have been released this year will wind up on Netflix in the next twelve months?" and then he'd like to talk about how you'd get the answer. I mostly just wanted to get the answer, why just speculate about something you could know?

He wasn't often feeling sociable so it was worth trying to engage with these questions to keep the conversation going. I'd try some searches, I'd poke around online, I'd ask some people, his attention would wane. Often the interactions would end abruptly with some variant of head-shaking and "Well I guess you can't know some things..." I feel like many, possibly most, things are knowable given enough time to do the research. Still do.

To impatient people many things are "unknowable". The same is true for users of Google. Google is powerful and fast, sure. But they've buried their advanced search deeper and deeper over time, they continually try to coerce you to sign in and give them location data, and they save your search history unless you tell them not to. It's common knowledge that they're the largest media owner on the planet, more than Disney, more than Comcast. I use Google. I like Google. But even though they're better than most other search engines out there, that doesn't mean that searching, and finding, can't be a lot better. Getting a million results feels like some sort of accomplishment but it's not worth much if you don't have the result you want.

As filtering and curating are becoming more and more what the internet is about, having a powerful, flexible, and "thoughtful" search feature residing on top of these vast stores of poorly archived digital stuff becomes more critical. No one should settle for a search tool that is just trying to sell you something. Everyone should work on getting their librarian merit badges in order to learn to search, not just find."
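[A concrete instance of the "learn to search" point: Google's long-standing query operators still work even with the advanced-search page buried. Quoted phrases require an exact match, site: restricts the domain, filetype: restricts the format, and a leading minus excludes a term. So a query like

    "complex conversation" site:edu filetype:pdf -syllabus

returns a handful of precise results instead of a million loose ones. (Operator behavior as of this note; Google changes these without notice.)]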
jessamynwest  search  internet  google  libraries  2016  filtering  curating  web  online  archives  algorithms 
june 2016 by robertogreco
Critical Algorithm Studies: a Reading List | Social Media Collective
"This list is an attempt to collect and categorize a growing critical literature on algorithms as social concerns. The work included spans sociology, anthropology, science and technology studies, geography, communication, media studies, and legal studies, among others. Our interest in assembling this list was to catalog the emergence of “algorithms” as objects of interest for disciplines beyond mathematics, computer science, and software engineering.

As a result, our list does not contain much writing by computer scientists, nor does it cover potentially relevant work on topics such as quantification, rationalization, automation, software more generally, or big data, although these interests are well-represented in the reference sections of the essays themselves.

This area is growing in size and popularity so quickly that many contributions are popping up without reference to work from disciplinary neighbors. One goal for this list is to help nascent scholars of algorithms to identify broader conversations across disciplines and to avoid reinventing the wheel or falling into analytic traps that other scholars have already identified. We also thought it would be useful, especially for those teaching these materials, to try to loosely categorize it. The organization of the list is meant merely as a first-pass, provisional sense-making effort. Within categories the entries are offered in chronological order, to help make sense of these rapid developments.

In light of all of those limitations, we encourage you to see it as an unfinished document, and we welcome comments. These could be recommendations of other work to include, suggestions on how to reclassify a particular entry, or ideas for reorganizing the categories themselves. Please use the comment space at the bottom of the page to offer suggestions and criticism; we will try to update the list in light of these suggestions.

Tarleton Gillespie and Nick Seaver"
algorithms  bibliography  ethics  bigdata  tarletongillespie  nickseaver  2016  sociology  anthropology  science  technology  criticalalgorithmstudies  via:tealtan 
june 2016 by robertogreco
'I Love My Label': Resisting the Pre-Packaged Sound in Ed-Tech
"I’ve argued elsewhere, drawing on a phrase by cyborg anthropologist Amber Case, that many of the industry-provided educational technologies we use create and reinforce a “templated self,” restricting the ways in which we present ourselves and perform our identities through their very technical architecture. The learning management system is a fine example of this, particularly with its “permissions” that shape who gets to participate and how, who gets to create, review, assess data and content. Algorithmic profiling now will be layered on top of these templated selves in ed-tech – the results, again: the pre-packaged student.

Indie ed-tech, much like the indie music from which it takes its inspiration, seeks to offer an alternative to the algorithms, the labels, the templates, the profiling, the extraction, the exploitation, the control. It’s a big task – an idealistic one, no doubt. But as the book Our Band Could Be Your Life, which chronicles the American indie music scene of the 1980s (and upon which Jim Groom drew for his talk on indie ed-tech last fall), notes, “Black Flag was among the first bands to suggest that if you didn’t like ‘the system,’ you should simply create one of your own.” If we don’t like ‘the system’ of ed-tech, we should create one of our own.

It’s actually not beyond our reach to do so.

We’re already working in pockets doing just that, with various projects to claim and reclaim and wire and rewire the Web so that it’s more just, more open, less exploitative, and counterintuitively perhaps less “personalized.” “The internet is shit today,” Pirate Bay founder Peter Sunde said last year. “It’s broken. It was probably always broken, but it’s worse than ever.” We can certainly say the same for education technology, with its long history of control, measurement, standardization.

We aren’t going to make it better by becoming corporate rockstars. This fundamental brokenness means we can’t really trust those who call for a “Napster moment” for education or those who hail the coming Internet/industrial revolution for schools. Indie means we don’t need millions of dollars, but it does mean we need community. We need a space to be unpredictable, for knowledge to be emergent not algorithmically fed to us. We need intellectual curiosity and serendipity – we need it from scholars and from students. We don’t need intellectual discovery to be trademarked, to be a tab that we click on to be fed the latest industry updates, what the powerful, well-funded people think we should know or think we should become."
2016  audreywatters  edupunk  edtech  independent  indie  internet  online  technology  napster  history  serendipity  messiness  curiosity  control  measurement  standardization  walledgardens  privacy  data  schools  education  highered  highereducation  musicindustry  jimgroom  ambercase  algorithms  bigdata  prediction  machinelearning  machinelistening  echonest  siliconvalley  software 
march 2016 by robertogreco
Education Outrage: Now it is Facebook's turn to be stupid about AI
"What could Facebook be thinking here? We read stories to our children for many reasons. These are read because they have been around a long time, which is not a great reason. The reason to read frightening stories to children has never ben clear to me. The only value I saw in doing this sort of thing as a parent was to begin a discussion with the child about the story which might lead somewhere interesting. Now my particular children had been living in the real world at the time so they had some way to relate to the story because of their own fears, or because of experiences they might have had.

Facebook’s AI will be able to relate to these stories by matching words it has seen before. Oh good. It will not learn anything from the stories because it cannot learn anything from any story. Learning from stories means mapping your experiences (your own stories) to the new story and finding some commonalities and some differences. It also entails discussing those commonalities and differences with someone who is willing to have that conversation with you. In order to do that you have to be able to construct sentences on your own and be able to interpret your own experiences through conversations with your friends and family.

Facebook’s “AI” will not be doing this because it can’t. It has had no experiences. Apparently its experience is loading lots of text and counting patterns. Too bad there isn’t a children’s story about that.

Facebook hasn’t a clue about AI, but it will continue to spend money and accomplish nothing until AI is declared to have failed again."
rogerschanck  2016  facebook  ai  artificialintelligence  algorithms  via:audreywatters  context  experience  understanding  stories  storytelling 
february 2016 by robertogreco
English 508 (Spring 2016)
[See also: https://jentery.github.io/508/notes.html ]

[From the description page:
https://jentery.github.io/508/description.html

"In both theory and practice, this seminar brushes against four popular assumptions about digital humanities: 1) as a service to researchers, the field merely develops digital resources for online discovery and builds computational tools for end-users; it does not interpret texts or meaningfully engage with “pre-digital” traditions in literary and cultural criticism; 2) digital humanities is not concerned with the literary or aesthetic character of texts; it is a techno-solutionist byproduct of instrumentalism and big data; 3) digital humanities practitioners replace cultural perspectives with uncritical computer vision; instead of privileging irony or ambivalence, they use computers to “prove” reductive claims about literature and culture, usually through graphs and totalizing visualizations; and 4) to participate in the field, you must be fluent in computer programming, or at least be willing to treat literature and culture quantitatively; if you are not a programmer, then you are not doing digital humanities.

During our seminar meetings, we will counter these four assumptions by examining, historicizing, and creating “design fictions,” which Bruce Sterling defines as “the deliberate use of diegetic prototypes to suspend disbelief about change.” Design fictions typically have a futurist bent to them. They speculate about bleeding edge technologies and emerging dynamics, or they project whiz-bang worlds seemingly ripped from films such as Minority Report. But we’ll refrain from much futurism. Instead, we will use technologies to look backwards and prototype versions of texts that facilitate interpretative practice. Inspired by Kari Kraus’s conjectural criticism, Fred Moten’s second iconicity, Bethany Nowviskie and Johanna Drucker’s speculative computing, Karen Barad’s notion of diffraction, Jeffrey Schnapp’s small data, Anne Balsamo’s hermeneutic reverse-engineering, and deformations by Lisa Samuels, Jerome McGann, and Mark Sample, we will conduct “what if” analyses of texts already at hand, in electronic format (e.g., page images in a library’s digital collections).

Doing so will involve something peculiar: interpreting our primary sources by altering them. We’ll substitute words, change formats, rearrange poems, remediate fictions, juxtapose images, bend texts, and reconstitute book arts. To be sure, such approaches have vexed legacies in the arts and humanities. Consider cut-ups, constrained writing, story-making machines, exquisite corpses, remixes, tactical media, Fluxkits, or détournement. Today, these avant-garde traditions are ubiquitous in a banal or depoliticized form, the default features of algorithmic culture and social networks. But we will refresh them, with a difference, by integrating our alterations into criticism and prompting questions about the composition of art and history today.

Instructor: Jentery Sayers
Office Hours: Monday, 12-2pm, in CLE D334
Email: jentery@uvic.ca
Office Phone (in CLE D334): 250-721-7274 (I'm more responsive by email)
Mailing Address: Department of English | UVic | P.O. Box 3070, STN CSC | Victoria, BC V8W 3W1

Philosophers have hitherto only interpreted the world in various ways; the point is to change it. —Karl Marx"]

[via: "when humanities start doing design without designers because design's too self-absorbed to notice being appropriated"
https://twitter.com/camerontw/status/700175377197563904
includes screenshot of Week 7 note from https://jentery.github.io/508/notes.html ]
jenterysayers  text  prototyping  digitalhumanities  speculativedesign  design  english  syllabus  maryanncaws  johannadrucker  wjtmitchell  jeffreyschnapp  evekosofskysedgwick  technosolutionism  brucesterling  fredmoten  karenbarad  jeromemcgann  marksample  bethanynowviskie  fluxkits  detournement  poetry  exquisitecorpses  algorithms  art  composition  rosamenkman  anthonydunne  fionaraby  dunne&raby  syllabi 
february 2016 by robertogreco
Identity, Power and Education’s Algorithms — Identity, Education and Power — Medium
"Many Twitter users seemed to balk at letting the company control their social and information networks algorithmically. It’s time we bring the same scrutiny to the algorithms we’re compelling students and teachers to use in the classroom. We must ask: how will an algorithmic education also serve to amplify the voices of the powerful and silence the voices of the marginalized? What does it mean to build ed-tech profiles: who is profiled and how? What patterns do the algorithms see? What do they reinforce? What will become “unseen” as these algorithms are opaque? How do some identities and privileges get hard-coded into these new software systems? And who stands to benefit? How will these algorithmic practices actually work to extend educational inequality?"
twitter  audreywatters  2016  algorithms  education  edtech  socialmedia  socialnetworks  teaching  learning  accessibility  voice  power  marginalization  privilege  software  inequality 
february 2016 by robertogreco
Caroline Sinders
"Hi there, I'm Caroline.

I am a User Experience and Interaction Designer, researcher, interactive story teller, bad joke collector, and ridiculous pie baker. I was born in New Orleans and I am currently based in Brooklyn (and occasionally, I live in airports). Prior to graduate school, I worked in the creative world as a photographer for Harper's Bazaar Russia, Refinery 29, Style.Com, and Hypbeast as well as a marketing coordinator. My entire professional career has been in digital culture, digital imaging, and digital branding.

Sometimes I make things with Twitter and Instagram, and I play around with APIs whenever I can. I used to design stories with stills; now I love to make things move. My design approach is to think of the user first and to focus on problem solving through whimsy, intelligence, and intuition. My skill set is broad: I research, conceptualize, brand, wireframe, and build. I see the big picture as a system made of very tiny and very integral moving parts. I dream in wireframes and personas.

I hold a masters from NYU's Interactive Telecommunications Program, and I have a BFA in Photography and Imaging with a focus in digital media and culture from NYU. Get at me sometime, I love to meet new people."

[via: "A talk on systems design, machine learning, and designing with empathy in digital spaces

Caroline Sinders is an artist and user researcher at IBM Watson who works with language, robots, and machine learning. Her work focuses on the line between human intervention and algorithms."
https://twitter.com/ablerism/status/693961348724690944 ]
carolinesinders  via:ablerism  ux  ui  interaction  design  twitter  instagram  apis  research  digital  digitalculture  digitalbranding  digitalimaging  machinelearning  systemsdesign  empathy  bots  humanintervention  algorithms 
february 2016 by robertogreco
Sha Hwang - Keynote [Forms of Protest] - UX Burlington on Vimeo
"Let’s close the day by talking about our responsibilities and opportunities as designers. Let’s talk about the pace of fashion and the promise of infrastructure. Let’s talk about systematic failure — failure without malice. Let’s talk about the ways to engage in this messy and complex world. Let’s throw shade on fame and shine light on the hard quiet work we call design."
shahwang  2015  design  infrastructure  fashion  systemsthinking  complexity  messiness  protest  careers  technology  systems  storytelling  scale  stewartbrand  change  thehero'sjourney  founder'sstory  politics  narrative  narratives  systemsdesign  blame  control  algorithms  systemfailure  healthcare.gov  mythmaking  teams  purpose  scalability  bias  microaggressions  dignity  abuse  malice  goodwill  fear  inattention  donellameadows  leveragepoints  making  building  constraints  coding  code  programming  consistency  communication  sharing  conversation  government  ux  law  uxdesign  simplicity  kindness  individuals  responsibility  webdev  web  internet  nava  codeforamerica  18f  webdesign 
january 2016 by robertogreco
Teaching Machines and Turing Machines: The History of the Future of Labor and Learning
"In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against robots, against automation? Even the tasks that education technology purports to now be able to automate – teaching, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum: teaching labor is affective not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. At both the K12 and college level, teaching of content is often seen as menial, routine, and as such replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told.

Of course, we should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?

And what sorts of signals are the machines gathering in turn? What are they learning to do?
Often, of course, we do not know the answer to those last two questions, as the code and the algorithms in education technologies (most technologies, truth be told) are hidden from us. We are becoming, as law professor Frank Pasquale argues, a “black box society.” And the irony is hardly lost on me that one of the promises of massive collection of student data under the guise of education technology and learning analytics is to crack open the “black box” of the human brain.

We still know so little about how the brain works, and yet, we’ve adopted a number of metaphors from our understanding of that organ to explain how computers operate: memory, language, intelligence. Of course, our notion of intelligence – its measurability – has its own history, one wrapped up in eugenics and, of course, testing (and teaching) machines. Machines now both frame and are framed by this question of intelligence, with little reflection on the intellectual and ideological baggage that we carry forward and hard-code into them."



"We’re told by some automation proponents that instead of a future of work, we will find ourselves with a future of leisure. Once the robots replace us, we will have immense personal freedom, so they say – the freedom to pursue “unproductive” tasks, the freedom to do nothing at all even, except I imagine, to continue to buy things.
On one hand that means that we must address questions of unemployment. What will we do without work? How will we make ends meet? How will this affect identity, intellectual development?

Yet despite predictions about the end of work, we are all working more. As games theorist Ian Bogost and others have observed, we seem to be in a period of hyper-employment, where we find ourselves not only working numerous jobs, but working all the time on and for technology platforms. There is no escaping email, no escaping social media. Professionally, personally – no matter what you say in your Twitter bio that your Tweets do not represent the opinions of your employer – we are always working. Computers and AI do not (yet) mark the end of work. Indeed, they may mark the opposite: we are overworked by and for machines (for, to be clear, their corporate owners).

Often, we volunteer to do this work. We are not paid for our status updates on Twitter. We are not compensated for our check-ins on Foursquare. We don’t get kickbacks for leaving a review on Yelp. We don’t get royalties from our photos on Flickr.

We ask our students to do this volunteer labor too. They are not compensated for the data and content that they generate that is used in turn to feed the algorithms that run TurnItIn, Blackboard, Knewton, Pearson, Google, and the like. Free labor fuels our technologies: Forum moderation on Reddit – done by volunteers. Translation of the courses on Coursera and of the videos on Khan Academy – done by volunteers. The content on pretty much every “Web 2.0” platform – done by volunteers.

We are working all the time; we are working for free.

It’s being framed, as of late, as the “gig economy,” the “freelance economy,” the “sharing economy” – but mostly it’s the service economy that now comes with an app and that’s creeping into our personal not just professional lives thanks to billions of dollars in venture capital. Work is still precarious. It is low-prestige. It remains unpaid or underpaid. It is short-term. It is feminized.

We all do affective labor now, cultivating and caring for our networks. We respond to the machines, the latest version of ELIZA, typing and chatting away hoping that someone or something responds, that someone or something cares. It’s a performance of care, disguising what is the extraction of our personal data."



"Personalization. Automation. Management. The algorithms will be crafted, based on our data, ostensibly to suit us individually, more likely to suit power structures in turn that are increasingly opaque.

Programmatically, the world’s interfaces will be crafted for each of us, individually, alone. As such, I fear, we will lose our capacity to experience collectivity and resist together. I do not know what the future of unions looks like – pretty grim, I fear; but I do know that we must enhance collective action in order to resist a future of technological exploitation, dehumanization, and economic precarity. We must fight at the level of infrastructure – political infrastructure, social infrastructure, and yes technical infrastructure.

It isn’t simply that we need to resist “robots taking our jobs,” but we need to challenge the ideologies, the systems that loathe collectivity, care, and creativity, and that champion some sort of Randian individual. And I think the three strands at this event – networks, identity, and praxis – can and should be leveraged to precisely those ends.

A future of teaching humans not teaching machines depends on how we respond, how we design a critical ethos for ed-tech, one that recognizes, for example, the very gendered questions at the heart of the Turing Machine’s imagined capabilities, a parlor game that tricks us into believing that machines can actually love, learn, or care."
2015  audreywatters  education  technology  academia  labor  work  emotionallabor  affect  edtech  history  highered  highereducation  teaching  schools  automation  bfskinner  behaviorism  sexism  howweteach  alanturing  turingtest  frankpasquale  eliza  ai  artificialintelligence  robots  sharingeconomy  power  control  economics  exploitation  edwardthorndike  thomasedison  bobdylan  socialmedia  ianbogost  unemployment  employment  freelancing  gigeconomy  serviceeconomy  caring  care  love  loving  learning  praxis  identity  networks  privacy  algorithms  freedom  danagoldstein  adjuncts  unions  herbertsimon  kevinkelly  arthurcclarke  sebastianthrun  ellenlagemann  sidneypressey  matthewyglesias  karelčapek  productivity  efficiency  bots  chatbots  sherryturkle 
august 2015 by robertogreco
Is It Time to Give Up on Computers in Schools?
"This is a version of the talk I gave at ISTE today on a panel titled "Is It Time to Give Up on Computers in Schools?" with Gary Stager, Will Richardson, Martin Levins, David Thornburg, and Wayne D'Orio. It was pretty damn fun.

Take one step into that massive shit-show called the Expo Hall and it’s hard not to agree: “yes, it is time to give up on computers in schools.”

Perhaps, once upon a time, we could believe ed-tech would change things. But as Seymour Papert noted in The Children’s Machine,
Little by little the subversive features of the computer were eroded away: … the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation.

I think we were naive when we ever thought otherwise.

Sure, there are subversive features, but I think the computers also involve neoliberalism, imperialism, libertarianism, and environmental destruction. They now involve high stakes investment by the global 1% – it’s going to be a $60 billion market by 2018, we’re told. Computers are implicated in the systematic de-funding and dismantling of a public school system and a devaluation of human labor. They involve the consolidation of corporate and governmental power. They involve scientific management. They are designed by white men for white men. They re-inscribe inequality.

And so I think it’s time now to recognize that if we want education that is more just and more equitable and more sustainable, that we need to get the ideologies that are hardwired into computers out of the classroom.

In the early days of educational computing, it was often up to innovative, progressive teachers to put a personal computer in their classroom, even paying for the computer out of their own pocket. These were days of experimentation, and as Seymour teaches us, a re-imagining of what these powerful machines could enable students to do.

And then came the network and, again, the mainframe.

You’ll often hear the Internet hailed as one of the greatest inventions of mankind – something that connects us all and that has, thanks to the World Wide Web, enabled the publishing and sharing of ideas at an unprecedented pace and scale.

What “the network” introduced in educational technology was also a more centralized control of computers. No longer was it up to the individual teacher to have a computer in her classroom. It was up to the district, the Central Office, IT. The sorts of hardware and software that were purchased had to meet those needs – the needs and the desires of the administration, not the needs and the desires of innovative educators, and certainly not the needs and desires of students.

The mainframe never went away. And now, virtualized, we call it “the cloud.”

Computers and mainframes and networks are points of control. They are tools of surveillance. Databases and data are how we are disciplined and punished. Quite to the contrary of Seymour’s hopes that computers will liberate learners, this will be how we are monitored and managed. Teachers. Students. Principals. Citizens. All of us.

If we look at the history of computers, we shouldn’t be that surprised. The computers’ origins are as weapons of war: Alan Turing, Bletchley Park, code-breakers and cryptography. IBM in Germany and its development of machines and databases that it sold to the Nazis in order to efficiently collect the identity and whereabouts of Jews.

The latter should give us great pause as we tout programs and policies that collect massive amounts of data – “big data.” The algorithms that computers facilitate drive more and more of our lives. We live in what law professor Frank Pasquale calls “the black box society.” We are tracked by technology; we are tracked by companies; we are tracked by our employers; we are tracked by the government, and “we have no clear idea of just how far much of this information can travel, how it is used, or its consequences.” When we compel the use of ed-tech, we are doing this to our students.

Our access to information is constrained by these algorithms. Our choices, our students’ choices are constrained by these algorithms – and we do not even recognize it, let alone challenge it.

We have convinced ourselves, for example, that we can trust Google with its mission: “To organize the world’s information and make it universally accessible and useful.” I call “bullshit.”

Google is at the heart of two things that computer-using educators should care deeply and think much more critically about: the collection of massive amounts of our personal data and the control over our access to knowledge.

Neither of these are neutral. Again, these are driven by ideology and by algorithms.

You’ll hear the ed-tech industry gleefully call this “personalization.” More data collection and analysis, they contend, will mean that the software bends to the student. To the contrary, as Seymour pointed out long ago, we find the computer programming the child. If we do not unpack the ideology, if the algorithms are all black-boxed, then “personalization” will be discriminatory. As Tressie McMillan Cottom has argued, “a ‘personalized’ platform can never be democratizing when the platform operates in a society defined by inequalities.”

If we want schools to be democratizing, then we need to stop and consider how computers are likely to entrench the very opposite. Unless we stop them.

In the 1960s, the punchcard – an older piece of “ed-tech” – had become a symbol of our dehumanization by computers and by a system – an educational system – that was inflexible, impersonal. We were being reduced to numbers. We were becoming alienated. These new machines were increasing the efficiency of a system that was setting us up for a life of drudgery and that was sending us off to war. We could not be trusted with our data or with our freedoms or with the machines themselves, we were told, as the punchcards cautioned: “Do not fold, spindle, or mutilate.”

Students fought back.

Let me quote here from Mario Savio, speaking on the stairs of Sproul Hall at UC Berkeley in 1964 – over fifty years ago, yes, but I think still one of the most relevant messages for us as we consider the state and the ideology of education technology:
We’re human beings!

There is a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part; you can’t even passively take part, and you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you’ve got to make it stop. And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!

We’ve upgraded from punchcards to iPads. But underneath, a dangerous ideology – a reduction to 1s and 0s – remains. And so we need to stop this ed-tech machine."
edtech  education  audreywatters  bias  mariosavio  politics  schools  learning  tressuemcmillancottom  algorithms  seymourpapert  personalization  data  security  privacy  howwteach  howwelearn  subversion  computers  computing  lms  neoliberalism  imperialism  environment  labor  publicschools  funding  networks  cloud  bigdata  google  history 
july 2015 by robertogreco
The Invented History of 'The Factory Model of Education'
[Follow-up notes here: http://www.aud.life/2015/notes-on-the-invented-history-of-the-factory-model-of ]

"Sal Khan is hardly the only one who tells a story of “the factory of model of education” that posits the United States adopted Prussia’s school system in order to create a compliant populace. It’s a story cited by homeschoolers and by libertarians. It’s a story told by John Taylor Gatto in his 2009 book Weapons of Mass Instruction. It’s a story echoed by The New York Times’ David Brooks. Here he is in 2012: “The American education model…was actually copied from the 18th-century Prussian model designed to create docile subjects and factory workers.”

For what it’s worth, Prussia was not highly industrialized when Frederick the Great formalized its education system in the late 1700s. (Very few places in the world were back then.) Training future factory workers, docile or not, was not really the point.

Nevertheless industrialization is often touted as both the model and the rationale for the public education system past and present. And by extension, it’s part of a narrative that now contends that schools are no longer equipped to address the needs of a post-industrial world."



"Despite these accounts offered by Toffler, Brooks, Khan, Gatto, and others, the history of schools doesn’t map so neatly onto the history of factories (and visa versa). As education historian Sherman Dorn has argued, “it makes no sense to talk about either ‘the industrial era’ or the development of public school systems as a single, coherent phase of national history.”"



"As Dorn notes, phrases like “the industrial model of education,” “the factory model of education,” and “the Prussian model of education” are used as a “rhetorical foil” in order make a particular political point – not so much to explain the history of education, as to try to shape its future."



"Many education reformers today denounce the “factory model of education” with an appeal to new machinery and new practices that will supposedly modernize the system. That argument is now and has been for a century the rationale for education technology. As Sidney Pressey, one of the inventors of the earliest “teaching machines” wrote in 1932 predicting "The Coming Industrial Revolution in Education,"
Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an “industrial revolution” in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.

Pressey, much like Sal Khan and other education technologists today, believed that teaching machines could personalize and “revolutionize” education by allowing students to move at their own pace through the curriculum. The automation of the menial tasks of instruction would enable education to scale, Pressey – presaging MOOC proponents – asserted.

We tend to not see automation today as mechanization as much as algorithmization – the promise and potential in artificial intelligence and virtualization, as if this magically makes these new systems of standardization and control lighter and liberatory.

And so too we’ve invented a history of “the factory model of education” in order to justify an “upgrade” – to new software and hardware that will do much of the same thing schools have done for generations now, just (supposedly) more efficiently, with control moved out of the hands of labor (teachers) and into the hands of a new class of engineers, out of the realm of the government and into the realm of the market."
factoryschools  education  history  2015  audreywatters  edtech  edreform  mechanization  automation  algorithms  personalization  labor  teaching  howweteach  howwelearn  mooc  moocs  salkhan  sidneypressey  1932  prussia  horacemann  lancastersystem  frederickjohngladman  mikecaulfield  jamescordiner  prussianmodel  frederickengels  shermandorn  alvintoffler  johntaylorgatto  davidbrooksm  monitorialsystem  khanacademy  stevedenning  rickhess  us  policy  change  urgency  futureshock  1970  bellsystem  madrassystem  davidstow  victorcousin  salmankhan 
april 2015 by robertogreco
My Objections to the Common Core State Standards (1.0) : Stager-to-Go
"The following is an attempt to share some of my objections to Common Core in a coherent fashion. These are my views on a controversial topic. An old friend I hold in high esteem asked me to share my thoughts with him. If you disagree, that’s fine. Frankly, I spent a lot of time I don’t have creating this document and don’t really feel like arguing about the Common Core. The Common Core is dying even if you just discovered it.

This is not a research paper, hence the lack of references. You can Google for yourself. Undoubtedly, this post contains typos as well. I’ll fix them as I find them.

This critique shares little with the attacks from the Tea Party or those dismissed by the Federal Education Secretary or Bill Gates as whiney parents.

I have seven major objections to the Common Core State Standards (CCSS)

1. The CCSS are a solution in search of a problem.

2. The CCSS were implemented in a remarkably undemocratic fashion at great public expense to the benefit of ideologues and corporations.

3. The standards are preposterous and developmentally inappropriate.

4. The inevitable failure of the Common Core cannot be blamed on poor implementation when poor implementation is baked into the design.

5. Standardized curriculum lowers standards, diminishes teacher agency, and lowers the quality of educational experiences.

6. The CCSS will result in an accelerated erosion of public confidence in public education.

7. The requirement that CCSS testing be conducted electronically adds unnecessary complexity, expense, and derails any chance of computers being used in a creative fashion to amplify student potential."

[continues on to elaborate on each objection, some pull quotes here]

"there is abundant scholarship by Linda Darling-Hammond, Diane Ravitch, Gerald Bracey, Deborah Meier, and others demonstrating that more American kids are staying in school longer than at any time in history. If we control for poverty, America competes quite favorably against any other nation in the world, if you care about such comparisons."



"As my colleague and mentor Seymour Papert said, “At best school teaches a billionth of a percent of the knowledge in the world and yet we quibble endlessly about which billionth of a percent is important enough to teach.” Schools should prepare kids to solve problems their teachers never anticipated with the confidence and competence necessary to overcome any obstacle, even if only to discover that there is more to learn."



"When teachers are not required to make curricular decisions and design curriculum based on the curiosity, thinking, understanding, passion, or experience of their students, the resulting loss in teacher agency makes educators less thoughtful and reflective in their practice, not more. The art of teaching has been sacrificed at the expense of reducing pedagogical practice to animal control and content delivery."



"The singular genius of George W. Bush and his No Child Left Behind legislation (kicked-up a notch by Obama’s Race-to-the-Top) was the recognition that many parents hate school, but love their kids’ teachers. If your goal is to privatize education, you need to concoct a way to convince parents to withdraw support for their kid’s teacher. A great way to achieve that objective is by misusing standardized tests and then announcing that your kid’s teacher is failing your kid. This public shaming creates a manufactured crisis used to justify radical interventions before calmer heads can prevail.

These standardized tests are misunderstood by the public and policy-makers while being used in ways that are psychometrically invalid. For example, it is no accident that many parents confuse these tests with college admissions requirements. Tests designed to rank students guarantee that half of all test-takers fall below the norm, and they were never intended to measure teacher efficacy.

The test scores come back up to six months after they are administered, long after a child advances to the next grade. Teachers receive scores for last year’s students, with no information on the questions answered incorrectly. These facts make it impossible to use the testing as a way of improving instruction, the stated aim of the farcical process."



"It is particularly ironic how much of the public criticism of the Common Core is related to media accounts and water cooler conversations of the “crazy math” being taught to kids. There are actually very few new or more complex concepts in the Common Core than previous math curricula. In fact, the Common Core hardly challenges any of the assumptions of the existing mathematics curriculum. The Common Core English Language Arts standards are far more radical. Yet, our innumerate culture is up in arms about the “new new math” being imposed by the Common Core.

What is different about the Common Core approach to mathematics, particularly arithmetic, is the arrogant imposition of specific algorithms. In other words, parents are freaking out because their kids are being required to solve problems in a specific fashion that is different from how they solve similar problems.

This is more serious than a matter of teaching old dogs new tricks. The problem is teaching tricks at all. There are countless studies by Constance Kamii and others demonstrating that any time you teach a child the algorithm, you commit violence against their mathematical understanding. Mathematics is a way of making sense of the world and Piaget teaches us that it is not the job of the teacher to correct the child from the outside, but rather to create the conditions in which they correct themselves from the inside. Mathematical problem solving does not occur in one way no matter how forcefully you impose your will on children. If you require a strategy competing with their own intuitions, you add confusion that results in less confidence and understanding.

Aside from teaching one algorithm (trick), another way to harm a child’s mathematical thinking development is to teach many algorithms for solving the same problem. Publishers make this mistake frequently. In an attempt to acknowledge the plurality of ways in which various children solve problems, those strategies are identified and then taught to every child. Doing so adds unnecessary noise, undermines personal confidence, and ultimately tests memorization of tricks (algorithms) at the expense of understanding.

This scenario goes something like this. Kids estimate in lots of different ways. Let’s teach them nine or ten different ways to estimate, and test them along the way. By the end of the process, many kids will be so confused that they will no longer be able to perform the estimation skill they had prior to the direct instruction in estimation. Solving a problem in your head is disqualified."
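[To make the contrast concrete (my example, not Stager's): for 53 − 27, the standard algorithm has the child borrow: 3 becomes 13, 13 − 7 = 6; 5 becomes 4, 4 − 2 = 2; answer 26. A child reasoning from her own number sense might instead count up: 27 + 3 = 30, and 30 + 23 = 53, so the difference is 3 + 23 = 26. Both are sound; the harm Stager describes comes from imposing a prescribed procedure, or nine of them, over the strategy the child already trusts.]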
garystager  commoncore  2015  education  policy  schools  publicschools  standardization  standardizedtesting  standards  learning  teaching  pedagogy  technology  testing  democracy  process  implementation  agency  howweteach  howwelearn  publicimage  seymourpapert  numeracy  matheducation  math  mathematics  numbersense  understanding  memorization  algorithms  rttt  gatesfoundation  pearson  nclb  georgewbush  barackobama 
april 2015 by robertogreco
79 Theses on Technology. For Disputation. | The Infernal Machine
"Alan Jacobs has written seventy-nine theses on technology for disputation. A disputation is an old technology, a formal technique of debate and argument that took shape in medieval universities in Paris, Bologna, and Oxford in the twelfth and thirteenth centuries. In its most general form, a disputation consisted of a thesis, a counter-thesis, and a string of arguments, usually buttressed by citations of Aristotle, Augustine, or the Bible.

But disputations were not just formal arguments. They were public performances that trained university students in how to seek and argue for the truth. They made demands on students and masters alike. Truth was hard won; it was to be found in multiple, sometimes conflicting traditions; it required one to give and recognize arguments; and, perhaps above all, it demanded an epistemic humility, an acknowledgment that truth was something sought, not something produced.

It is, then, in this spirit that Jacobs offers, tongue firmly in cheek, his seventy-nine theses on technology and what it means to inhabit a world formed by it. They are pithy, witty, ponderous, and full of life. And over the following weeks, we at the Infernal Machine will take Jacobs’ theses at his provocative best and dispute them. We’ll take three or four at a time and offer our own counter-theses in a spirit of generosity.

So here they are:

1. Everything begins with attention.

2. It is vital to ask, “What must I pay attention to?”

3. It is vital to ask, “What may I pay attention to?”

4. It is vital to ask, “What must I refuse attention to?”

5. To “pay” attention is not a metaphor: Attending to something is an economic exercise, an exchange with uncertain returns.

6. Attention is not an infinitely renewable resource; but it is partially renewable, if well-invested and properly cared for.

7. We should evaluate our investments of attention at least as carefully and critically as our investments of money.

8. Sir Francis Bacon provides a narrow and stringent model for what counts as attentiveness: “Some books are to be tasted, others to be swallowed, and some few to be chewed and digested: that is, some books are to be read only in parts, others to be read, but not curiously, and some few to be read wholly, and with diligence and attention.”

9. An essential question is, “What form of attention does this phenomenon require? That of reading or seeing? That of writing also? Or silence?”

10. Attentiveness must never be confused with the desire to mark or announce attentiveness. (“Can I learn to suffer/Without saying something ironic or funny/On suffering?”—Prospero, in Auden’s The Sea and the Mirror)

11. “Mindfulness” seems to many a valid response to the perils of incessant connectivity because it confines its recommendation to the cultivation of a mental stance without objects.

12. That is, mindfulness reduces mental health to a single, simple technique that delivers its user from the obligation to ask any awkward questions about what his or her mind is and is not attending to.

13. The only mindfulness worth cultivating will be teleological through and through.

14. Such mindfulness, and all other healthy forms of attention—healthy for oneself and for others—can only happen with the creation of and care for an attentional commons.

15. This will not be easy to do in a culture for which surveillance has become the normative form of care.

16. Simone Weil wrote that ‘Attention is the rarest and purest form of generosity’; if so, then surveillance is the opposite of attention.

17. The primary battles on social media today are fought by two mutually surveilling armies: code fetishists and antinomians.

18. The intensity of those battles is increased by a failure by any of the parties to consider the importance of intimacy gradients.

19. “And weeping arises from sorrow, but sorrow also arises from weeping.”—Bertolt Brecht, writing about Twitter

20. We cannot understand the internet without perceiving its true status: The Internet is a failed state.

21. We cannot respond properly to that failed-state condition without realizing and avoiding the perils of seeing like a state.

22. If instead of thinking of the internet in statist terms we apply the logic of subsidiarity, we might be able to imagine the digital equivalent of a Mondragon cooperative.

23. The internet groans in travail as it awaits its José María Arizmendiarrieta."

[continues on]

[A collection of follow-ups and responses is accumulating here:
http://iasc-culture.org/THR/channels/Infernal_Machine/tag/79-theses-on-technology/

For example: “79 Theses on Technology: On Attention”
http://iasc-culture.org/THR/channels/Infernal_Machine/2015/03/79-theses-on-technology-on-attention/

And another round-up of responses:
http://text-patterns.thenewatlantis.com/2015/04/more-on-theses.html ]
alanjacobs  anthropology  culture  digital  history  technology  attention  dunning-krugereffect  anosognosia  pleasure  ethics  writing  howwewrite  jaronlanier  alextabattok  stupidity  logic  loki  cslewis  algorithms  akrasia  physical  patheticfallacy  hacking  hackers  kevinkelly  georgebernardshaw  agency  philosophy  tommccarthy  commenting  frankkermode  text  texts  community  communication  resistance  mindfulness  internet  online  web  josémaríaarizmendiarrieta  simonwiel  society  whauden  silence  attentiveness  textualist  chadwellmon  surveillance  2015 
april 2015 by robertogreco
2, 6: Neighborhoods, the Anti-Algorithm
"So what does this have to do with my neighborhood?

G.K. Chesterton, in a collection of essays titled Heretics, wrote:
"The man who lives in a small community lives in a much larger world. He knows much more of the fierce variety and uncompromising divergences of men…In a large community, we can choose our companions. In a small community, our companions are chosen for us. Thus in all extensive and highly civilized society groups come into existence founded upon sympathy, and shut out the real world more sharply than the gates of a monastery."

That was 1905. Long before the internet would give us the largest community of all. Yet, the oft-cited "filter bubble" of the internet is Chesterton's community of choice. The easy community. The one that just happens because we want what we want. And in a less troubled world, that wouldn't be much to fuss about. But in our world, the "filter bubble" is dangerous. It makes fear, hatred, and oppression all the more abundant, both online and off. Before the internet, we called "filter bubbles" segregation. We call them "filter bubbles" now because it's easier to see them as a manifestation of technology than the effect of our choices. Because if we saw them for what they truly are, we'd have to call them segregation again. We thought we left that behind. But, no, we haven't. Segregation, of every kind, is the entropy against which we all struggle, the product of bodies living in time, wired down to our cells to survive at all costs, responding to their loudest signal, fear. Chesterton understood that the principal challenge we humans are given to work out in this life is each other, and that nowhere better than next door is that challenge met.

But in Facebook's world, next-door has no greater offer of intimacy than across town, or state, or country. In the large community, as Chesterton said, we can choose our companions. That's the appeal of the network. Community on my terms. Forget that we know it's not good for us. Or that it's dangerous. Forget that it's a shinier, faster form of segregation. Forget that it's invisible and layered, making it easier to explain away. None of this is Facebook's fault. If it wasn't them, then it would be AOL, or Friendster, or MySpace, or any of the many networks that came before it. We can't blame them — any of them — for segregation, however technological its 21st century incarnation may be. But we can blame them for selling it. The economic benefit of segregation is nothing new; it makes selling things easier. But segregation is Facebook's secret sauce. It's an economic imperative. Like just about every "platform" of the internet today, it is ruthlessly driven to box each one of us in. To confine us to an echo-chamber. Not for our own benefit, but for theirs. Because it makes it easier for them to control us. And no, not to usher in some dramatic, Illuminati-style new world order. It's hardly that interesting! It's to sell tiny display ads and make heaps of money. That's it. Controlling us is simply an act of inventory management.

Of course, it's easy to look past all of this. To point at the good that thrives on the network — and of that, there is plenty. The lonely who are no longer lonely because of it. The oppressed who grow more powerful when bound together. But to celebrate the network's role in that only heightens my awareness that it is something we could have — should have — without it. It's too easy, also, to celebrate the engines of our ingenuity. See this? Look what we have made! But that we are as enamored with the algorithm as we so clearly are is an indicator that our hearts are way out of sync with our minds. We have engineered such sophisticated tools for connecting, ordering, and studying ourselves; it's an astounding achievement. It's one we might even celebrate if it were truly an open project for the common good. But it isn't. Not even close. So why do we pretend that it is? The network is not ours. It's the other way around. We are the network's. To sell. That is, unless we get off the network. Or at least spend a whole lot less time there.

My neighbors have convinced me that community is not only of the network. Saying such a thing sounds trite. But it's another thing entirely to live it. Here's an example: Last year, the doctor and his wife down the street decided to organize weekly neighborhood dinners. Each Sunday evening, someone hosts dinner for the neighborhood. When I first heard the idea, I was aghast. Weekly! As in, every week? No, I thought, monthly, maybe. But we went to a few, then we hosted one of our own — which wasn't nearly as much work as we thought it would be — and we've regularly gone to most of the others since. It's not obligatory. It's not like if you go to one, you must go to them all. Or even that if you go to one, you must host one. Few people have gone to every dinner, but many of us have gone to most of them. And many of us have hosted one.

Spending this time together — committing to it — is how the work gets done, not the Facebook group. It's through being together, in each other's homes, in real life. Don't get me wrong, it's no utopia. People get on each other's nerves. Not everyone will become best friends. We're talking about people here. But that's the point. The network can't sell that. It can sell our attention, but the less of our lives we live on the network, the less our attention feels like us. That's the control we still have. Eventually, hopefully, leveraging that control could change the economics of the network. Consider the neighborhood the anti-algorithm."
2015  chrisbutler  facebook  socialnetworks  gkchesterson  difference  filterbubbles  algorithms  neighborhoods  discovery  community  communities  understanding  empathy  small  attention  feeds  segregation  diversity  technology  separation  togetherness  companionship  sympathy 
february 2015 by robertogreco
sevensixfive: Shaky Tripod
"From the inception of American architectural education, our discipline has always been an unstable hybrid. William Ware, the founder of MIT's program, observed in 1866, after studying architectural education in Europe, that: "the French courses of study are mainly artistic, and the German scientific, and the English practical." His program, one of the first in the nation, would represent an attempt at synthesis.

Today this uneasy balance of art, science, and practice is in more danger of collapsing than ever.

We've ceded speculation to designers from other disciplines; the best work about the future relationship between technology, design, and culture at large is now coming from the fields of product design and industrial design. Within architecture, the production of novel form is now almost instantly commodified in the global marketplaces, going wherever labor is cheap and politics are autocratic. We've lost the majority of the everyday built environment to dullness and risk-averse bad planning. Meanwhile, with the exception of too few responsible firms engaged in mentorship, we have a professional culture that privileges technical skill and low wages over critical thinking. And we have an academic culture that looks for hard, measurable, machine readable metrics to decide if education is taking place or not.

University cultures, now focused on quantitative assessment over narrative in annual reports, are asking how many faculty are licensed architects, and how many graduating students are going on to licensure; meanwhile, our professional organizations are re-entering the academy in several ways. NAAB intends to merge with ACSA, and NCARB wants to retool curriculum so that students receive licensure upon graduation. This is against the backdrop of a university academic culture that's getting hollowed out from within, as administration expands while teachers are asked to do more with less. Never mind time for research and speculation about the future: the academy must produce students that serve the profession now, because offices want affordable labor in the seats at 9am Monday, and they'd best be proficient in the latest version of Revit.

What can American architectural education offer back to these challenges? We can re-emphasize the historical mandate of the M. Arch degree: sustained critique, sustained speculation, in parallel with practice, scholarship and service, as a complement to the profession-oriented pedagogy of the B. Arch, and the deep dive methodology of the PhD. We can advocate for a return to an attitude towards the study and practice of architecture that places it back alongside the liberal arts and the fine arts.

The most useful things that architectural education can offer students in regards to professional practice are being buried under a futile race to keep up with software. If we teach practical skills, then let us focus on methodology over technique, the "why" over the "what." The proliferation of job descriptions designated "X Architect", where "X" is "Software", "Experience", or "User Interface", shows that other disciplines are hungry for the rigorous systems-level design methodologies that architectural education offers. And if one of the things we do best is speculation about the future, then let us serve practice by speculating with our students about the future of practice. This way, they will be able to anticipate, not the new plugins for parametric modeling that come out next week, but the new paradigms that will change how the built environment is made over the next decade."
fredscharmen  2015  architecture  education  criticalthinking  highered  methodology  practice  software  design  architecturaleducation  measurement  algorithms  quantification  curriculum  culture  academia  metrics  howweteach  howwelearn  why  theywhy 
february 2015 by robertogreco
Anab Jain, “Design for Anxious Times” on Vimeo
"As 2014 rushes past us, a venture capital firm appoints a computer algorithm to its board of directors, robots report news events such as earthquakes before any human can, fully functioning 3D printed ears, bones and guns are in use, the world’s biggest search company acquires large scale, fully autonomous military robots, six-year old children create genetically modified glow fish and an online community of 50,000 amateurs build drones. All this whilst extreme weather events and political unrest continue to pervade. This is just a glimpse of the increased state of technological acceleration and cultural turbulence we experience today. How do we make sense of this? What can designers do? Dissecting through her studio Superflux’s projects, research practice and approach, Anab will make a persuasive case for designers to adopt new roles as sense-makers, translators and agent provocateurs of the 21st century. Designers with the conceptual toolkits that can create a visceral connection with the complexity and plurality of the worlds we live in, and open up an informed dialogue that help shape better futures for all."
anabjain  superflux  2014  design  future  futures  via:steelemaley  criticaldesign  speculativedesign  speculativefiction  designfiction  designdiscourse  film  filmmaking  technology  interaction  documentary  uncertainty  reality  complexity  algorithms  data  society  surveillance  cloud  edwardsnowden  chelseamanning  julianassange  whistleblowing  science  bentobox  genecoin  bitcoin  cryptocurrency  internet  online  jugaad  war  warfare  information  politics  drones  software  adamcurtis  isolation  anxiety  capitalism  quantification  williamgibson  art  prototyping  present 
february 2015 by robertogreco
Context collapse, performance piety and civil inattention – the web concepts you need to understand in 2015 | Technology | The Guardian
"Civil inattention
In the 1950s, sociologist Erving Goffman described what happened to humans who live in cities. “When in a public place, one is supposed to keep one’s nose out of other people’s activity and go about one’s own business,” he wrote in The Presentation of Self in Everyday Life. “It is only when a woman drops a package, or when a fellow motorist gets stalled in the middle of the road, or when a baby left alone in a carriage begins to scream, that middle-class people feel it is all right to break down momentarily the walls which effectively insulate them.” Dara Ó Briain picked up this idea in a standup routine in which he dared people to get into a lift last, and then, instead of facing the door, turn and face the other occupants. It would be truly chilling.

Civil inattention happens all the time in everyday life, unless you're the kind of weirdo who joins in other people's conversations on the train. But we haven't got the hang of it in the "public squares" of the internet, like social media platforms and comment sections. No one knows who is really talking to whom, and – surprise! – a conversation between anything from two to 2,000 people can feel disorienting and cacophonous. There have been various attempts to combat it – Twitter's "at sign", Facebook's name-tagging, threaded comments – but nothing has yet replicated the streamlined simplicity of real life, where we all just know there is NO TALKING AT THE URINAL.

Conservative neutrality
We live in a world ruled by algorithms: that’s how Netflix knows what you want to watch, how Amazon knows what you want to read and how the Waitrose website knows what biscuits to put in the “before you go” Gauntlet of Treats before you’re allowed to check out. The suggestion is that these algorithms are apolitical and objective, unlike humans, with their petty biases and ingrained prejudices. Unfortunately, as the early computer proverb had it, “garbage in, garbage out”. Any algorithm created in a society where many people are sexist, racist or homophobic won’t magically be free of those things.

Google’s autocomplete is a classic example: try typing “Women are ...” or “Asians are ...” and recoil from the glimpse into our collective subconscious. Christian Rudder’s book Dataclysm discusses how autocomplete might reaffirm prejudices, not merely reflect them: “It’s the site acting not as Big Brother, but as Older Brother, giving you mental cigarettes.” Remember this the next time a tech company plaintively insists that it doesn’t want to take a political stance: on the net, “neutral” often means “reinforces the status quo”.

Context collapse
The problem of communicating online is that, no matter what your intended audience is, your actual audience is everyone. The researchers Danah Boyd and Alice Marwick put it like this: “We may understand that the Twitter or Facebook audience is potentially limitless, but we often act as if it were bounded.”

So, that tasteless joke your best Facebook friend will definitely get? Not so funny when it ends up on a BuzzFeed round-up of The Year’s Biggest Bigots and you get fired. That dating profile where you described yourself as “like Casanova, only with a degree in computing”? Not so winsome when it lands you on Shit I’ve Seen On Tinder and no one believes that you were being sarcastic. On a more serious level, context collapse is behind some “trolling” prosecutions: is it really the role of the state to prosecute people for saying offensive, unpleasant things about news stories in front of other people who have freely chosen to be their friends on Facebook? I don’t think so.

What is happening here is that we are turning everyone into politicians (the horror). We are demanding that everyone should speak the same way, present the same face, in all situations, on pain of being called a hypocrite. But real life doesn’t work like this: you don’t talk the same way to your boss as you do to your boyfriend. (Unless your boss is your boyfriend, in which case I probably don’t need to give you any stern talks on the difficulties of negotiating tricky social situations.) To boil this down, 2015 needs to be the year we reclaim “being two-faced” and “talking behind people’s backs”. These are good things.

Performative piety
What’s Kony up to these days? Did anyone bring back our girls? Yes, surprisingly enough, the crimes of guerrilla groups in Uganda and Nigeria have not been avenged by hashtag activism. The internet is great for what feminists once called “consciousness raising” – after all, it’s a medium in which attention is a currency – but it is largely useless when it comes to the hard, unglamorous work of Actually Sorting Shit Out.

The internet encourages us all into performative piety. People spend time online not just chatting or arguing, but also playing the part of the person they want others to see them as. Anyone who has run a news organisation will tell you that some stories are shared like crazy on social media, but barely read. Leader columns in newspapers used to show the same pattern: research showed that people liked to read a paper with a leader column in it – they just didn’t actually want to read the column.

So, next time you’re online and everyone else seems to be acting like a cross between Mother Teresa and Angelina Jolie, relax. They might leave comments saying “WHAT ABOUT SYRIA?” but they have, in fact, clicked on a piece about a milk carton that looks like a penis. As ever, actions speak louder than words."
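
[An illustrative aside, not from the column: a minimal sketch of the "garbage in, garbage out" dynamic described above, assuming nothing about Google's actual system. It is only a toy completer that ranks suggestions by past query frequency, so whatever prejudice is in its log comes straight back out; every name and number below is invented.

from collections import Counter

class NaiveAutocomplete:
    """Toy prefix completer: suggestions ranked purely by past query frequency."""

    def __init__(self):
        self.log = Counter()

    def observe(self, query):
        # Every search, prejudiced or not, becomes training data.
        self.log[query.lower()] += 1

    def suggest(self, prefix, k=3):
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.log.items() if q.startswith(prefix)]
        # "Neutral" ranking: most popular first. Garbage in, garbage out.
        return [q for q, _ in sorted(matches, key=lambda m: -m[1])[:k]]

ac = NaiveAutocomplete()
for q in ["women are great engineers"] * 2 + ["women are too emotional"] * 5:
    ac.observe(q)
print(ac.suggest("women are"))  # the more popular, prejudiced query ranks first
]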
contextcollapse  2014  internet  socialmedia  communication  conservativeneutrality  algorithms  alicemarwick  kony  performativepiety  activism  performance  presentationofself  online  socialnetworking  privacy  audience  via:chromacolaure  civics  urban  urbanism  twitter  facebook  civilinattention  attention  discourse  ervinggoffman  daraóbriain  silence  inattention  kathysierra  helenlewis  serialpodcast 
january 2015 by robertogreco
Eric's Archived Thoughts: Inadvertent Algorithmic Cruelty
"I didn’t go looking for grief this afternoon, but it found me anyway, and I have designers and programmers to thank for it. In this case, the designers and programmers are somewhere at Facebook.

I know they’re probably pretty proud of the work that went into the “Year in Review” app they designed and developed. Knowing what kind of year I’d had, though, I avoided making one of my own. I kept seeing them pop up in my feed, created by others, almost all of them with the default caption, “It’s been a great year! Thanks for being a part of it.” Which was, by itself, jarring enough, the idea that any year I was part of could be described as great.

Still, they were easy enough to pass over, and I did. Until today, when I got this in my feed, exhorting me to create one of my own. “Eric, here’s what your year looked like!”

[image]

A picture of my daughter, who is dead. Who died this year.

Yes, my year looked like that. True enough. My year looked like the now-absent face of my little girl. It was still unkind to remind me so forcefully.

And I know, of course, that this is not a deliberate assault. This inadvertent algorithmic cruelty is the result of code that works in the overwhelming majority of cases, reminding people of the awesomeness of their years, showing them selfies at a party or whale spouts from sailing boats or the marina outside their vacation house.

But for those of us who lived through the death of loved ones, or spent extended time in the hospital, or were hit by divorce or losing a job or any one of a hundred crises, we might not want another look at this past year.

To show me Rebecca’s face and say “Here’s what your year looked like!” is jarring. It feels wrong, and coming from an actual person, it would be wrong. Coming from code, it’s just unfortunate. These are hard, hard problems. It isn’t easy to programmatically figure out if a picture has a ton of Likes because it’s hilarious, astounding, or heartbreaking.

Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.

Where the human aspect fell short, at least with Facebook, was in not providing a way to opt out. The Year in Review ad keeps coming up in my feed, rotating through different fun-and-fabulous backgrounds, as if celebrating a death, and there is no obvious way to stop it. Yes, there’s the drop-down that lets me hide it, but knowing that is practically insider knowledge. How many people don’t know about it? Way more than you think.

This is another aspect of designing for crisis, or maybe a better term is empathetic design. In creating this Year in Review app, there wasn’t enough thought given to cases like mine, or friends of Chloe, or anyone who had a bad year. The design is for the ideal user, the happy, upbeat, good-life user. It doesn’t take other use cases into account.

Just to pick two obvious fixes: first, don’t pre-fill a picture until you’re sure the user actually wants to see pictures from their year. And second, instead of pushing the app at people, maybe ask them if they’d like to try a preview—just a simple yes or no. If they say no, ask if they want to be asked again later, or never again. And then, of course, honor their choices.

It may not be possible to reliably pre-detect whether a person wants to see their year in review, but it’s not at all hard to ask politely—empathetically—if it’s something they want. That’s an easily-solvable problem. Had the app been designed with worst-case scenarios in mind, it probably would have been.

If I could fix one thing about our industry, just one thing, it would be that: to increase awareness of and consideration for the failure modes, the edge cases, the worst-case scenarios. And so I will try."

[follow-up post: http://meyerweb.com/eric/thoughts/2014/12/27/well-that-escalated-quickly/ ]

[previously: https://vimeo.com/114393677 ]
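
[Another aside, not from Meyer's post: a small Python sketch of the consent-first flow his two fixes imply, with no pre-filled photo, an explicit yes/no preview question, and a remembered opt-out. The function names and the prefs dictionary are hypothetical.

def ask(prompt, options):
    # Stand-in for a real UI dialog; loops until one of the options is typed.
    answer = ""
    while answer not in options:
        answer = input(f"{prompt} ({'/'.join(options)}): ").strip().lower()
    return answer

def year_in_review_flow(prefs):
    # Honor a previously recorded "never" without re-asking.
    if prefs.get("year_in_review") == "never":
        return None
    # Fix one: don't pre-fill a picture; ask before showing anything.
    if ask("Would you like to preview your Year in Review?", ["yes", "no"]) == "no":
        # Fix two: offer "later or never," and honor the choice.
        if ask("Ask again later, or never?", ["later", "never"]) == "never":
            prefs["year_in_review"] = "never"
        return None
    return "build and show the review only now, after an explicit yes"
]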
facebook  algorithms  design  cruelty  2014  failure  ericmeyer 
december 2014 by robertogreco
danah boyd | apophenia » What is Fairness?
"Increasingly, tech folks are participating in the instantiation of fairness in our society. Not only do they produce the algorithms that score people and unevenly distribute scarce resources, but the fetishization of “personalization” and the increasingly common practice of “curation” are, in effect, arbiters of fairness.

The most important thing that we all need to recognize is that how fairness is instantiated significantly affects the very architecture of our society. I regularly come back to a quote by Alistair Croll:
Our social safety net is woven on uncertainty. We have welfare, insurance, and other institutions precisely because we can’t tell what’s going to happen — so we amortize that risk across shared resources. The better we are at predicting the future, the less we’ll be willing to share our fates with others. And the more those predictions look like facts, the more justice looks like thoughtcrime.

The market-driven logic of fairness is fundamentally about individuals at the expense of the social fabric. Not surprisingly, the tech industry — very neoliberal in cultural ideology — embraces market-driven fairness as the most desirable form of fairness because it is the model that is most about individual empowerment. But, of course, this form of empowerment is at the expense of others. And, significantly, at the expense of those who have been historically marginalized and ostracized.

We are collectively architecting the technological infrastructure of this world. Are we OK with what we’re doing and how it will affect the society around us?"
algorithms  culture  economics  us  finance  police  policing  lawenforcement  technology  equality  equity  2014  danahboyd  alistaircroll  justice  socialjustice  crime  civilrights  socialsafetynet  welfare  markets  banks  banking  capitalism  socialism  communism  scarcity  abundance  uncertainty  risk  predictions  profiling  race  business  redlining  privilege 
november 2014 by robertogreco
How one startup mapped Brazil's confusing favelas | Motherboard
" Pedro, Ramos, and Viera decided to take matters into their own hands, and make some money in the process. The first step was to make a map of the community and create virtual addresses that they could use to create a company to deliver the Post Office mail.

The task was much more complex than they had thought. If you typed 'Rocinha' in Google Maps a few months ago, you would only get the Gavea Road when, in fact, there are hundreds of streets, alleys, back-alleys, and stairs throughout the community.

One of the problems for mapping a slum via satellite is that many buildings create tunnels over the alleys and stairs below. Another problem is that sometimes the concrete slabs used for roofs are used as streets.

They gave up on the idea of a visual map and started a logic map by generating algorithms. Algorithms are sets of instructions for specific operations; a good example of a simple algorithm is a recipe for lasagna.

The algorithms created by Pedro and his partners are way more complicated than a lasagna recipe, of course. Without a visual image, they created a pseudo-code, an informal language of categories to explain each fixed structure, natural or built, which is on each street, stairs, or alley inside the huge Rocinha community. For example, a “condominium” is defined as a blind alley with less than 12 homes.

As there are no official names for most of the streets in Rocinha, the residents make them up. A street usually has at least two to three names. The streets do not start in an arbitrary way; depending on who you are speaking with, a street can begin on the upper side of the slum and come down, or vice versa, or even somewhere in the middle. Pedro and his friends had to create a virtual beginning and end for each street.

The end result is an algorithm for each street, stairs, or alley. Together, these hundreds of handwritten pages turned into a huge map, chock full of lines and codes, impossible for anyone without understanding of its logic to decipher.

A typical sequence goes like this: "Wall, stone, henhouse, store, house, building, condominium," Pedro explains. Each one of these concepts has the same specific definition that makes their work easier. "Rocinha is constantly under construction," he adds. "It is possible that a month from now a henhouse is gone and there is a house there instead. For this reason we need to register everything; it’s easier to make changes when we need to."

When they finished the map, they patented it, and after this, they created a service to deliver Post Office mail called Friendly Mailman. It was a success, and also the first franchise in Brazil's history born in a slum. Currently, Friendly Mailman operates in eight slums in Rio.

Each residence using the service pays a monthly fee—currently R$16 in Rocinha, or $6.64 USD. Their houses get an address, a number based on the order the service was hired. Every day the Post Office van stops by the Friendly Mailman office and leaves all mail for the community. The employees sort out those for their thousands of customers. Later, the Post Office van stops by to get whatever was left and then they park on top of the hill and allow people to look in the boxes to check if they have any mail.

Back on the trail at the Vila Verde community, we snake through a hole in the middle of two buildings and up a long stairway.

"What is this?" Pedro asks. "Is it a street? Where does it start?" He shows the page on the map that refers to this part of the stairway. "Look at this. Is it a building or a house? And this here, is it a street or a condominium? The map tells you all."

Pedro indicates the doors on the houses. "This one here is a customer of the Friendly Mailman," he says, pointing to a sticker with the Friendly Mailman’s logo and the number 1166.

"You’ll notice that the numbers do not appear in order," he goes on. "Look at this house here: 8044. This is because they get their addresses according to the date when they hire the service. No one can locate these houses without our map. And no one will understand the map unless we explain how to use it."

Pedro explained the map in a general way, but there are certain elements that are secret. It also changes every day.

"Each time one of our mailmen go on duty, he will update it," he explains. "It could be that there was a wall the week before, and now something else is being built. We have made our map digital and we want to create an app so that our mailmen can do the updating in their smart phones."

"Are there problems with drug trafficking?" I asked.

"None," he said. "Do you think the drug lords don’t want to get their mail? Do you think they don’t want to buy tennis shoes over the internet? Everybody likes it. After the Friendly Mailman, sales bursted all over in Rocinha. And as you can see, there is a lot of money in this community, a lot of trade, most people living here are middle class. This is a characteristic of Rocinha. If you go to the Juramento hill, for instance, you won’t see trade. It’s a poorer community, and for that reason we charge less over there. If you look over the world, it is full of slums, and everybody needs mail service. So we are making money, which is good, but also supplying a service for the betterment of the community.”

We went back to the Friendly Mailman’s headquarters for coffee. There is a large traditional map on the wall, showing all the alleys in the community. "Look at this," Pedro says. "We made this based on all our mapping. Google came by here last month. They asked if they could take a photo of our map. I said: 'No way.' Let them do their own.""
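
[An aside, not from the article: a guess at how the handwritten pseudo-code might look in software, assuming only what the piece states: a fixed vocabulary of structure categories, one ordered sequence per street from a virtual start to a virtual end, and daily updates. All names below are invented.

from dataclasses import dataclass, field

# The category vocabulary quoted in the article; "condominium" carries the
# rule "a blind alley with less than 12 homes."
CATEGORIES = {"wall", "stone", "henhouse", "store", "house", "building", "condominium"}

@dataclass
class Street:
    names: list                                   # most streets have two or three informal names
    features: list = field(default_factory=list)  # ordered, virtual start to virtual end

    def update(self, position, category):
        # Rocinha is constantly under construction, so mailmen revise the
        # sequence daily: a henhouse today may be a house next month.
        if category not in CATEGORIES:
            raise ValueError(f"unknown structure: {category}")
        self.features[position] = category

# The typical sequence Pedro quotes, on a hypothetical alley:
s = Street(names=["example-alley"],
           features=["wall", "stone", "henhouse", "store", "house", "building", "condominium"])
s.update(2, "house")  # the henhouse came down; a house stands there now
]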
brasil  brazil  maps  mapping  favelas  brianmier  2014  rocinha  riodejaneiro  deliveries  postalservice  addressess  algorithms 
october 2014 by robertogreco
Why Twitter Should Not Algorithmically Curate the Timeline — The Message — Medium
"Twitter brims with human judgment, and the problem with algorithmic filtering is not losing the chronology, which I admit can be clumsy at times, but it’s losing the human judgment that makes the network rewarding and sometimes unpredictable. I also recently wrote about how #Ferguson surfaced on Twitter while it remained buried, at least for me, in curated Facebook—as many others noted, Facebook was awash with the Ice Bucket Challenge instead, which invites likes and provides videos and tagging of others; just the things an algorithm would value. This isn’t a judgement of the value of the ALS challenge but a clear example of how algorithms work—and don’t work.

Algorithms are meant to be gamed—my Facebook friends have now taken to posting faux “congratulations” to messages they want to push to the top of everyone’s feeds, because Facebook’s algorithm pushes such posts with the phrase “congratulations” in the comments to top of your feed. Recently, a clever friend of mine asked to be faux congratulated on her sale of used camera equipment. Sure enough! Her network reported that it stayed on top of everyone’s feed for days. (And that’s why you have so many baby/marriage/engagement announcements in your Facebook feed—and commercial marketers are also already looking to exploit this).

For another thing, algorithmic curation will make writing to be retweeted, which already plagues Twitter, much worse. I'm not putting down the retweetable quote; just the behavior that optimizes for that above everything else — and I know you've seen that kind of user. Some are quite smart. Many are very good writers. Actually, many are unfortunately very good writers. They are also usually insufferable. I can see them taking over an algorithmic Twitter.

Bleargh, I say.

But the bigger loss will be the networked intelligence that prizes emergence over engagement and interaction above the retweetable, which gets very boring very quickly. I know Twitter thinks it may increase engagement, but it will decrease engagement among some of its most creative segments.

What else will a curated feed optimize for? It will almost certainly look more like television since there is a reason television looks like television: that’s what advertisers like. There will be more celebrities. There will be more pithy quotes. There will be even more outrage, and even more lovable, fluffy things (both are engaging, and remember, algorithms will optimize for engagement). There will be more sports and television events. There will be less random, weird and otherwise obscure content being surfaced by the collective, networked judgement of the users I choose to follow.

Does Twitter have a signal-to-noise problem? Sure, sometimes. But remember, one person’s noise is another’s signal. Is the learning curve too steep? Yes, it is. Is there a harassment issue, especially for the users with amplified footprints? Absolutely."



"Never forget: the algorithm giveth but it also taketh away. Don’t let it take away the network because it’s the flock, not the bird, that provides the value."
algorithms  twitter  zeyneptufekci  2014  fliters  filtering  human  judgement  unpredictability  emergence  voice  facebook  socialmedia 
september 2014 by robertogreco
Can Algorithms Replace Your English Professor? — Who’s Afraid of Online Education? — Medium
"Algorithms are quickly becoming our new tastemakers and gatekeepers. Social media feeds are increasingly the most immediate source of news for many people, which means we are becoming more and more beholden to algorithms. Social media algorithms have been a popular topic of discussion lately, with people undertaking experiments on what happens when you “like” everything on Facebook, or when you refrain from “liking” anything. The Facebook algorithm is being held up as the primary reason why the #Ferguson protests are not showing up on user’s Facebook feeds, in comparison to Twitter, which is the only network that shows you what you choose to follow, rather than what its algorithm thinks you should. (Note that this may also be changing.)

Algorithms are becoming our curators. They show us—based on a secret, proprietary formula—what they think we want to see. In this experiment, Tim Herrera demonstrates that Facebook's algorithm prefers to show its users older, more popular content than new content that has not been engaged with. Despite him trying to consume his entire Facebook feed for an entire day, he realized that he only saw 29% of new content produced by his network—and that for most users, that percentage is probably a lot lower. On Facebook there isn't a way to bypass this algorithm, even if you select “most recent” posts rather than “most popular” posts in your settings (interestingly enough, I've heard reports that Facebook tends to secretly reset your settings back to “most popular” no matter what you do).

There's a lot of controversy over the power that we are giving algorithms to display and represent our world to us. But these critiques miss an important point: we've never not had curators and filters. Before we had algorithms, we had "experts", "authorities", tastemakers—we had (or have) professors and academics, we had (have) institutions that studied things and told us what was important or unimportant about the world, we had (have) editors and publishers who decided what was "good" enough to be shared with the world. But the importance and reliability of these authorities and tastemakers is coming under serious fire because of the impact of some social media; for example in the reporting on Ferguson on major news networks versus Twitter. Furthermore, if you take the work of postcolonial studies critics like Edward Said seriously, much of our humanistic and scientific forms of research inquiry are hardly free of cultural prejudice, and are in fact informed and dictated by these modes of thinking.

Given all of this, I have two thoughts:

One. How is algorithmic selection actually similar to older modes of tastemaking and gatekeeping (i.e. experts and authorities who tell us what to value and what not to)? How is it different? Does either mode entertain the feedback of those who they serve (i.e., can you help train an algorithm to show you more of what you want, or can you have impact on your “experts” in having them study what you think is important?)

Two. A great deal of virtual ink has been spilled on whether educators are going to be replaced by online courses such as MOOCs. Less has been said, however, about the replacement of the tastemaking function of educators/researchers—especially in the humanities, our goal has been to train students to find value in what they otherwise might not, to make legible to our students modes of seeing and doing which depart from their own. Can an algorithm replace that tastemaking function? Put another way: instead of having the “best” news and information filtered to you by “experts” (your teachers, your professors, editors and publishers etc.), what happens when an algorithm starts taking over this process? Is this necessarily good, bad, or neither? And how similar is this filtering of information to previous modes of filtering? In other words—can an algorithm become smart enough to replace your English literature professor? And what would be the result of such a scenario?"

[via (great thread follows): https://twitter.com/Jessifer/status/502632112261169152 ]
adelinekoh  2014  algorithms  facebook  twitter  education  curation  curators  gatekeepers  tastemakers  trendsetters  mooc  moocs  tastemaking  experts  authority  authorities  humanism  humanities  power  control  academia  highereducation  highered  feeds  filters 
august 2014 by robertogreco
Too Much World: Is the Internet Dead? | e-flux
"Postproduction

But if images start pouring across screens and invading subject and object matter, the major and quite overlooked consequence is that reality now widely consists of images; or rather, of things, constellations, and processes formerly evident as images. This means one cannot understand reality without understanding cinema, photography, 3D modeling, animation, or other forms of moving or still image. The world is imbued with the shrapnel of former images, as well as images edited, photoshopped, cobbled together from spam and scrap. Reality itself is postproduced and scripted, affect rendered as after-effect. Far from being opposites across an unbridgeable chasm, image and world are in many cases just versions of each other.[14] They are not equivalents however, but deficient, excessive, and uneven in relation to each other. And the gap between them gives way to speculation and intense anxiety.

Under these conditions, production morphs into postproduction, meaning the world can be understood but also altered by its tools. The tools of postproduction: editing, color correction, filtering, cutting, and so on are not aimed at achieving representation. They have become means of creation, not only of images but also of the world in their wake. One possible reason: with digital proliferation of all sorts of imagery, suddenly too much world became available. The map, to use the well-known fable by Borges, has not only become equal to the world, but exceeds it by far.[15] A vast quantity of images covers the surface of the world—very in the case of aerial imaging—in a confusing stack of layers. The map explodes on a material territory, which is increasingly fragmented and also gets entangled with it: in one instance, Google Maps cartography led to near military conflict.[16]

While Borges wagered that the map might wither away, Baudrillard speculated that on the contrary, reality was disintegrating.[17] In fact, both proliferate and confuse one another: on handheld devices, at checkpoints, and in between edits. Map and territory reach into one another to realize strokes on trackpads as theme parks or apartheid architecture. Image layers get stuck as geological strata while SWAT teams patrol Amazon shopping carts. The point is that no one can deal with this. This extensive and exhausting mess needs to be edited down in real time: filtered, scanned, sorted, and selected—into so many Wikipedia versions, into layered, libidinal, logistical, lopsided geographies.

This assigns a new role to image production, and in consequence also to people who deal with it. Image workers now deal directly in a world made of images, and can do so much faster than previously possible. But production has also become mixed up with circulation to the point of being indistinguishable. The factory/studio/tumblr blur with online shopping, oligarch collections, realty branding, and surveillance architecture. Today’s workplace could turn out to be a rogue algorithm commandeering your hard drive, eyeballs, and dreams. And tomorrow you might have to disco all the way to insanity.

As the web spills over into a different dimension, image production moves way beyond the confines of specialized fields. It becomes mass postproduction in an age of crowd creativity. Today, almost everyone is an artist. We are pitching, phishing, spamming, chain-liking or mansplaining. We are twitching, tweeting, and toasting as some form of solo relational art, high on dual processing and a smartphone flat rate. Image circulation today works by pimping pixels in orbit via strategic sharing of wacky, neo-tribal, and mostly US-American content. Improbable objects, celebrity cat GIFs, and a jumble of unseen anonymous images proliferate and waft through human bodies via Wi-Fi. One could perhaps think of the results as a new and vital form of folk art, that is if one is prepared to completely overhaul one’s definition of folk as well as art. A new form of storytelling using emojis and tweeted rape threats is both creating and tearing apart communities loosely linked by shared attention deficit."

[via: http://finalbossform.com/post/88613954773/while-borges-wagered-that-the-map-might-wither ]
internet  technology  images  communication  newaesthetic  web  socialmedia  production  art  folkart  infrastructure  hitosteyerl  2014  borges  baudrillard  maps  mapping  territory  reality  tumblr  processing  online  algorithms 
june 2014 by robertogreco
Episode One Hundred: Taking Stock; And The New
"It took a while, but one of the early themes that emerged was that of the Californian Ideology. That phrase has become a sort of short-hand for me to take a critical look at what's coming out of the west coast of the USA (and what that west coast is inspiring in the rest of the world). It's a conflicting experience for me, because I genuinely believe in the power of technology to enhance the human experience and to pull everyone, not just some people, up to a humane standard of living. But there's a particular heady mix that goes into the Ideology: one of libertarianism, of the power of the algorithm and an almost-blind belief in a purity of an algorithm, of the maths that goes into it, of the fact that it's executed in and on a machine substrate that renders the algorithm untouchable. But the algorithms we design reflect our intentions, our beliefs and our predispositions. We're learning so much about how our cognitive architecture functions - how our brains work, the hacks that evolution "installed" in us that are essentially zero-day back-door unpatched vulnerabilties - that I feel like someone does need to be critical about all the ways software is going to eat the world. Because software is undeniably eating the world, and it doesn't need to eat it in a particular way. It can disrupt and obsolete the world, and most certainly will, but one of the questions we should be asking is: to what end? 

This isn't to say that we should ask these questions to impede progress just as a matter of course: just that if we're doing these things anyway, we should also (because we *do* have the ability to) feel able to examine the long term consequences and ask: is this what we want?"
danhon  2014  californianideology  howwethink  brain  algorithms  libertarianism  progress  technology  technosolutionism  ideology  belief  intention 
june 2014 by robertogreco
In the Loop: Designing Conversations With Algorithms | superflux
"As algorithmic systems become more prevalent, I’ve begun to notice of a variety of emergent behaviors evolving to work around these constraints, to deal with the insufficiency of these black box systems. These behaviors point to a growing dissatisfaction with the predominant design principles, and imply a new posture towards our relationships with machines.

Adaptation

The first behavior is adaptation. These are situations where I bend to the system’s will. For example, adaptations to the shortcomings of voice UI systems — mispronouncing a friend’s name to get my phone to call them; overenunciating; or speaking in a different accent because of the cultural assumptions built into voice recognition. We see people contort their behavior to perform for the system so that it responds optimally. This is compliance, an acknowledgement that we understand how a system listens, even when it’s not doing what we expect. We know that it isn’t flexible or responsive enough, so we shape ourselves to it. If this is the way we move forward, do half of us end up with Google accents and the other half with Apple accents? How much of our culture ends up being an adaptation to systems we can’t communicate well with?

Negotiation

The second type of behavior we’re seeing is negotiation — strategies for engaging with a system to operate within it in more nuanced ways. One example of this is Ghostery, a browser extension that allows one to see what data is being tracked from one’s web browsing and limit it or shape it according to one’s desires. This represents a middle ground: a system that is intended to be opaque is being probed in order to see what it does and try and work with it better. In these negotiations, users force a system to be more visible and flexible so that they can better converse with it.

We also see this kind of probing of algorithms becoming a new and critical role in journalism, as newsrooms take it upon themselves to independently investigate systems through impulse response modeling and reverse engineering, whether it's looking at the words that search engines censor from their autocomplete suggestions, how online retailers dynamically target different prices to different users, or how political campaigns generate fundraising emails.

Antagonism

Third, rather than bending to the system or trying to better converse with it, some take an antagonistic stance: they break the system to assert their will. Adam Harvey’s CV Dazzle is one example of this approach, where people hack their hair and makeup in order to foil computer vision and opt out of participating in facial recognition systems. What’s interesting here is that, while the attitude here is antagonistic, it is also an extreme acknowledgement of a system’s power — understanding that one must alter one’s identity and appearance in order to simply exert free will in an interaction."



"Julian Oliver states this problem well, saying: “Our inability to describe and understand [technological infrastructure] reduces our critical reach, leaving us both disempowered and, quite often, vulnerable. Infrastructure must not be a ghost. Nor should we have only mythic imagination at our disposal in attempts to describe it. 'The Cloud' is a good example of a dangerous simplification at work, akin to a children's book.”

So, what I advocate is designing interactions that acknowledge the peer-like status these systems now have in our lives. Interactions where we don't shield ourselves from complexity but actively engage with it. And in order to engage with it, the conduits for those negotiations need to be accessible not only to experts and hackers but to the average user as well. We need to give our users more respect and provide them with more information so that they can start to have empowered dialogues with the pervasive systems around them.

This is obviously not a simple proposition, so we start with: what are the counterpart values? What's the alternative to the black box, what's the alternative to “it just works”? What design principles should we be building into new interactions?

Transparency

The first is transparency. In order to be able to engage in a fruitful interaction with a system, I need to be able to understand something about its decision-making process. And I want to be clear that transparency doesn’t mean complete visibility, it doesn’t mean showing me every data packet sent or every decision tree.



Agency

The second principle here is agency, meaning that a system’s design should empower users to not only accomplish tasks, but should also convey a sense that they are in control of their participation with a system at any moment. And I want to be clear that agency is different from absolute and granular control.



Virtuosity

The last principle, virtuosity, is something that usually comes as a result of systems that support agency and transparency well. And when I say virtuosity, what I mean is the ability to use a technology expressively.

A technology allows for virtuosity when it contains affordances for all kinds of skilled techniques that can become deeply embedded into processes and cultures. It’s not just about being able to adapt something to one’s needs, but to “play” a system with skill and expressiveness."
superflux  anabjain  agency  algorithms  complexity  design  networks  wearables  christinaagapakis  paulgrahamraven  scottsmith  alexislloyd  2014  communication  adaptation  negotiation  antagonism  ghostery  julianoliver  transparency  virtuosity  visibility  systemsthinking  systems  expressiveness 
april 2014 by robertogreco
On Reverse Engineering — Anthropology and Algorithms — Medium
"As a cultural anthropologist in the middle of a long-term research project on algorithmic filtering systems, I am very interested in how people think about companies like Netflix, which take engineering practices and apply them to cultural materials. In the popular imagination, these do not go well together: engineering is about universalizable things like effectiveness, rationality, and algorithms, while culture is about subjective and particular things, like taste, creativity, and artistic expression. Technology and culture, we suppose, make an uneasy mix. When Felix Salmon, in his response to Madrigal’s feature, complains about “the systematization of the ineffable,” he is drawing on this common sense: engineers who try to wrangle with culture inevitably botch it up.

Yet, in spite of their reputations, we always seem to find technology and culture intertwined. The culturally-oriented engineering of companies like Netflix is a quite explicit case, but there are many others. Movies, for example, are a cultural form dependent on a complicated system of technical devices — cameras, editing equipment, distribution systems, and so on. Technologies that seem strictly practical — like the Māori eel trap pictured above—are influenced by ideas about effectiveness, desired outcomes, and interpretations of the natural world, all of which vary cross-culturally. We may talk about technology and culture as though they were independent domains, but in practice, they never stay where they belong. Technology’s straightforwardness and culture’s contingency bleed into each other.

This can make it hard to talk about what happens when engineers take on cultural objects. We might suppose that it is a kind of invasion: The rationalizers and quantifiers are over the ridge! They’re coming for our sensitive expressions of the human condition! But if technology and culture are already mixed up with each other, then this doesn’t make much sense. Aren’t the rationalizers expressing their own cultural ideas? Aren’t our sensitive expressions dependent on our tools? In the present moment, as companies like Netflix proliferate, stories trying to make sense of the relationship between culture and technology also proliferate. In my own research, I examine these stories, as told by people from a variety of positions relative to the technology in question. There are many such stories, and they can have far-reaching consequences for how technical systems are designed, built, evaluated, and understood."



"So what does “reverse engineering” mean? What kind of things can be reverse engineered? What assumptions does reverse engineering make about its objects? Like any frame, reverse engineering constrains as well as enables the presentation of certain stories. I want to suggest here that, while reverse engineering might be a useful strategy for figuring out how an existing technology works, it is less useful for telling us how it came to work that way. Because reverse engineering starts from a finished technical object, it misses the accidents that happened along the way — the abandoned paths, the unusual stories behind features that made it to release, moments of interpretation, arbitrary choice, and failure. Decisions that seemed rather uncertain and subjective as they were being made come to appear necessary in retrospect. Engineering looks a lot different in reverse."



"All engineering mixes culture and technology. Even Madrigal’s “reverse engineering” does not stay put in technical bounds: he supplements the work of his bot by talking with people, drawing on their interpretations and offering his own, reading the altgenres, populated with serendipitous algorithmic accidents, as “a window unto the American soul.” Engineers, reverse and otherwise, have cultural lives, and these lives inform their technical work. To see these effects, we need to get beyond the idea that the technical and the cultural are necessarily distinct. But if we want to understand the work of companies like Netflix, it is not enough to simply conclude that culture and technology — humans and computers — are mixed. The question we need to answer is how."
algorithms  culture  engineering  netflix  nickseaver  anthropology  reverseengineering  alexismadrigal  nicholasdiakopoulos  technology  invention  2014 
march 2014 by robertogreco
Lighthouse: IMPROVING REALITY 2013 - FILMS
"HOW ARE ARTISTS, TECHNOLOGISTS & WRITERS SUBVERTING OUR NOTION OF REALITY?

Lighthouse's digital culture conference, Improving Reality, returned for a third year this September. Talks included tours through worlds that artists are growing rather than making, critical revelations of the systems and infrastructures that shape our world, and narratives of radical alternative futures.

We've collected together the videos of the day's talks, and invite you to join us in the discussion on Twitter and Facebook, or in any way you'd like. Visit the relevant session to watch the videos, and find out more about the themes, issues and ideas up for discussion.

In between sessions were a set of Tiny Talks, interventions from artists and designers involved in Brighton Digital Festival.

Session 1. Revealing Reality
http://lighthouse.org.uk/programme/improving-reality-2013-films-session-one

Social, political and technological infrastructures are the invisible “dark matter” which underlies contemporary life, influencing our environment and behaviour. This session explores how the spaces where we live, such as our cities, are being transformed by increasingly interlinked technological and architectural infrastructures. We will see how artists and designers are making these infrastructures visible, so that we may better understand and critique them.

Speakers: Timo Arnall, Keller Easterling and Frank Swain. Chair: Honor Harger.


Session 2. Re-imagining Reality
http://lighthouse.org.uk/programme/improving-reality-2013-films-session-two

Our increasingly technologised world, with its attendant infrastructures, is in a constant state of flux. This session explores how artists, designers and writers are imagining how our infrastructures may evolve. We will learn what writers might reveal about our infrastructures, using tools such as design fiction. We will go on tours through worlds that artists are growing, rather than making, using new materials like synthetic biology and nanotechnology. And we’ll see how artists are imagining new realities using techniques from futurism and foresight.

Speakers: Paul Graham Raven, Maja Kuzmanovic, Tobias Revell and Alexandra Daisy Ginsberg. Chair: Simon Ings.


Session 3. Reality Check
http://lighthouse.org.uk/programme/improving-reality-2013-films-session-three

The growing reach of technological infrastructures and engineered systems into our lives creates uneasy social and ethical challenges. The recent scandals relating to the NSA, the revelation of the PRISM surveillance programme, and the treatment of whistleblowers such as Edward Snowden and Bradley Manning have revealed how fundamentally intertwined our civil liberties are with our technological infrastructures. These systems can both enable and threaten our privacy and our security. Ubiquitous networked infrastructures create radical new creative opportunities for a coming generation of makers and users, whilst also presenting us with major social dilemmas. In this session we will look at the social and ethical questions which will shape our technological infrastructures in the future. We will examine algorithmic infrastructures, power dynamics, and ask, “whose reality are we trying to improve?”

Speakers: Farida Vis, Georgina Voss, Paula Le Dieu, and Justin Pickard. Chair: Scott Smith."
timoarnall  kellereasterling  frankswain  honorharger  paulgrahamraven  majakuzmanovic  tobiasrevell  alexandradaisy-ginsberg  simonings  faridavis  georginavoss  paulaledieu  justinpickard  scottsmith  reality  art  systems  infrastructure  politics  technology  darkmatter  behavior  environment  architecture  2013  flux  change  nanotechnology  syntheticbiology  materials  futurism  ethics  surveillance  nsa  edwardsnowden  bradleymanning  civilliberties  security  privacy  algorithms  networks  ubiquitouscomputing  powerdynamics  towatch
october 2013 by robertogreco
How Google accidentally uncovered a Chinese ring of car thieves | The Verge
"The answer turned out to be even stranger. They were real cars, but they weren't really for sale. Scammers were taking pictures of cars on the street, and when a hapless customer showed up a few days later offering money, they'd steal the car and hand it over. By the time the mark realized he had purchased stolen goods, the sellers were long gone, taking his money with them. It's a lucrative scam, and in China, a well-known one — but to anyone looking at the ads, it just looks like one more crop of used-car ads.

For those who study fraud in China, on the other hand, this is far from surprising. "These people are very professional," says Dahui Li, an information systems expert at the University of Minnesota who specializes in Chinese online fraud. In the case of the car scam, he says the offline component is the most important part, as a way to assure skeptical customers that the sale is legit. "Chinese people want to see the product before they pay for it," Li says. "They have to see the car." So the criminal element developed a scheme that could show it to them."



"More importantly, it doesn’t take human prejudice into account. Baker and his team weren’t looking for cars or car thieves. But the algorithm saw a pattern of quick buys from new accounts, tied together with larger and more subtle patterns, and deduced something was up. It’s not an airtight system: more than a few valid accounts have had their orders delayed while the team checked them out. But in this case, it was able to reach across continents to suss out a scheme its engineers had never even imagined. Cultural differences could fool the humans, but they couldn’t fool the machine."
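
A hypothetical sketch, in Python, of the kind of signal the passage describes: new accounts making unusually quick buys. Google's real features and thresholds are proprietary, so every name and number below is invented for illustration.

from dataclasses import dataclass

@dataclass
class AdAccount:
    age_days: int          # how long the account has existed
    buys_last_24h: int     # ad purchases in the past day

def looks_suspicious(acct: AdAccount) -> bool:
    # invented thresholds; the passage only says "quick buys from new accounts"
    return acct.age_days < 7 and acct.buys_last_24h > 10

accounts = [AdAccount(2, 25), AdAccount(400, 3), AdAccount(5, 12)]
print([looks_suspicious(a) for a in accounts])   # [True, False, True]

The point of the anecdote survives the simplification: a rule tuned on purchase patterns fires regardless of the cultural dressing around the scam.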

[via: http://www.marketplace.org/topics/tech/google-algorithm-inadvertently-takes-down-ring-chinese-car-thieves ]
google  crime  adwords  china  2013  cars  algorithms  patterns 
july 2013 by robertogreco
Venus of Google - Matthew Plummer-Fernandez
"The Venus of Google was ‘found’ via a Google search-by-image, googling a photograph taken of an object I had been handed over in a game of exquisite corpse. The Google search returned visually similar results, one of these being an image of a woman modelling a body-wrap garment. I then used a similar algorithmic image-comparison technique to drive the automated design of a 3D printable object. The 'Hill-Climbing' algorithm starts with a plain box shape and tries thousands of random transformations and comparisons between the shape and the image, eventually mutating towards a form resembling the found image in both shape and colour.

I’m interested in this early era of artificial intelligence, computer vision and algorithmic artefacts, exemplifying the paradox of technology being both advanced and primitive at the same time. The Long Tail Multiplier series investigates the potential use of algorithms to create virtually infinite cultural artefacts, inspired by the stories of these algorithmic books and t-shirts."
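
The hill-climbing loop is simple enough to sketch. Below is a minimal Python illustration, assuming a coarse grayscale grid as the "form" and mean squared error as the image comparison; Plummer-Fernandez's actual pipeline (a 3D mesh, colour, printability constraints) is not public here, so this only shows the mutate-compare-keep structure.

import numpy as np

rng = np.random.default_rng(0)

target = rng.random((16, 16))        # stand-in for the found image
candidate = np.full((16, 16), 0.5)   # the "plain box": a flat, featureless grid

def score(a, b):
    return -np.mean((a - b) ** 2)    # higher means more similar

best = score(candidate, target)
for step in range(20000):
    trial = candidate.copy()
    i, j = rng.integers(0, 16, size=2)
    trial[i, j] = np.clip(trial[i, j] + rng.normal(0, 0.1), 0.0, 1.0)
    s = score(trial, target)
    if s > best:                     # keep the mutation only if it improves
        candidate, best = trial, s

print(f"similarity after hill climbing: {best:.6f}")

Because only improving mutations are kept, the form drifts steadily toward the target, which is exactly the "eventually mutating towards" behaviour the quote describes.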
google  googleimagesearch  art  matthewplummer-fernandez  photography  algorithms  newaesthetic  3dprinting  automation 
june 2013 by robertogreco
wandering wandering star • DISAPPEAR US ALGORITHMS, AESTHETICS, AND THE...
"Amidst the complicated and abundant cultural and political significances that “camo” has acquired over the past half century, we often forget that on the front lines of modern warfare, camouflage is a matter of life and death, just as in Darwin’s theory of natural selection. No matter where you stand on its confounding form and controversial function, camouflage is a powerful assimilative tool: it is a polyvalent social marker, as much in the street as on the catwalk, as seen most recently in the Men’s Spring/Summer 2013 collections by VALENTINO, DRIES VAN NOTEN, and PRINGLE OF SCOTLAND. In the field, it can completely absorb you, incognito, into an environment."
algorithms  camouflage  design  clothing  war  military  fashion  valentino  2013  driesvannoten  pringleofscotland
june 2013 by robertogreco
Design for the New Normal (Revisited) | superflux
"I was invited to talk at the NEXT Conference in Berlin by Peter Bihr, as he felt that a talk I gave last year would fit well with the conference's theme Here Be Dragons: "We fret about data, who is collecting it and why. We fret about privacy and security. We worry and fear disruption, which changes business models and renders old business to ashes. Some would have us walk away, steer clear of these risks. They’re dangerous, we don’t know what the consequences will be. Maintain the status quo, don’t change too much.Here and now is safe. Over there, in the future? Well, there be dragons."

This sounded like a good platform to expand upon the 'Design for the New Normal' presentation I gave earlier, especially as it’s an area Jon and I are thinking about in the context of various ongoing projects. So here it is, once again an accelerated slideshow (70 slides!) where I followed up on some of the stories to see what happened to them in the last six months, and developed some of the ideas further. This continues to be a work-in-progress that Superflux is developing as part of our current projects."

[Video: http://nextberlin.eu/2013/07/design-for-the-new-normal-3/ ]
anabjain  2013  drones  weapons  manufacturing  3dprinting  bioengineering  droneproject  biotechnology  biotech  biobricks  songhojun  ossi  zemaraielali  empowerment  technology  technologicalempowerment  raspberrypi  hackerspaces  makerspaces  diy  biology  diybio  shapeways  replicators  tobiasrevell  globalvillageconstructionset  marcinjakubowski  crowdsourcing  cryptocurrencies  openideo  ideo  wickedproblems  darpa  innovation  india  afghanistan  jugaad  jugaadwarfare  warfare  war  syria  bitcoins  blackmarket  freicoin  litecoin  dna  dnadreams  bregtjevanderhaak  bgi  genomics  23andme  annewojcicki  genetics  scottsmith  superdensity  googleglass  chaos  complexity  uncertainty  thenewnormal  superflux  opensource  patents  subversion  design  jonardern  ux  marketing  venkateshrao  normalityfield  strangenow  syntheticbiology  healthcare  healthinsurance  insurance  law  economics  ip  arnoldmann  dynamicgenetics  insects  liamyoung  eleanorsaitta  shingtatchung  algorithms  superstition  bahavior  numerology  dunne&raby  augerloizeau  bionicrequiem  ericschmidt  privacy  adamharvey  makeu 
april 2013 by robertogreco
Null Object - interview - Domus
"To create Null Object, a block of Portland Roach (a type of limestone deposited 145 million years ago in the Jurassic geological period) was cut into a perfect cube measuring 50 cm on each side and then excavated using a KUKA industrial robot. The form of the void created by the robot is derived from an EEG (electroencephalogram) recording of Metzger's brain while he attempted to think about nothing for a period of 20 minutes."
nullobjects  objects  computation  eeg  robots  algorithms  brain  thinking  2013 
march 2013 by robertogreco
Algorithmic Rape Jokes in the Library of Babel | Quiet Babylon
"Jorge Luis Borges’ Library of Babel twisted through the logic of SEO and commerce."

"Part of what tips the algorithmic rape joke t-shirts over from very offensive to shockingly offensive is that they are ostensibly physical products. Intuitions are not yet tuned for spambot clothes sellers."

"Amazon isn’t a store, not really. Not in any sense that we can regularly think about stores. It’s a strange pulsing network of potential goods, global supply chains, and alien associative algorithms with the skin of a store stretched over it, so we don’t lose our minds."
algorithms  amazon  culture  internet  borges  timmaly  2013  jamesbridle  apologies  non-apologies  brianeno  generative  crapjects  georginavoss  rape  peteashton  software  taste  poortaste  deniability  secondlife  solidgoldbomb  t-shirts  keepcalmand  spam  objects  objectspam  quinnnorton  masscustomization  rapidprototyping  shapersubcultures  scale  libraryofbabel  thelibraryofbabel  tshirts 
march 2013 by robertogreco
Dr. Jeannette Wing | Jon Udell's Interviews with Innovators
"For Interviews with Innovators, Jon Udell speaks with Jeannette Wing, a Carnegie Mellon computer scientists who coined the term computational thinking. Her idea is that ways of thinking and problem-solving that involve algorithms and data structures and levels of abstraction and refactoring aren't just for computer scientists, they're really for everybody."
podcasts  tolisten  jeannettewing  computationalthinking  problemsolving  algorithms  datastructures  2007  abstraction  refactoring  compsci  thinking 
february 2013 by robertogreco
[this is aaronland] signs of life [These quotes are only from the beginning. I recommend reading the whole thing.]
"I've been thinking a lot about motive & intent for the last few years. How we recognize motive &… how we measure its consequence.

This is hardly uncharted territory. You can argue easily enough that it remains the core issue that all religion, philosophy & politics struggle with. Motive or trust within a community of individuals.

…Bruce Schneier…writes:

"In today's complex society, we often trust systems more than people. It's not so much that I trusted the plumber at my door as that I trusted the systems that produced him & protect me."

I often find myself thinking about motive & consequence in the form of a very specific question: Who is allowed to speak on behalf of an organization?

To whom do we give not simply the latitude of interpretation, but the luxury of association, with the thing they are talking about …

Institutionalizing or formalizing consequence is often a way to guarantee an investment, but that often plows head-first into the subtleties of real life."

[Video here: https://vimeo.com/51515289 ]
dunbartribes  schrodinger'sbox  scale  francisfukuyama  capitalism  industrialrevolution  technology  rules  control  algorithms  creepiness  siri  drones  robots  cameras  sensors  robotreadableworld  humans  patterns  patternrecognition  patternmatching  gerhardrichter  robotics  johnpowers  dia:beacon  jonathanwallace  portugal  lisbon  brandjacking  branding  culturalheritage  culture  joannemcneil  jamesbridle  future  politics  philosophy  religion  image  collections  interpretation  representation  complexity  consequences  cooper-hewitt  photography  filters  instagram  flickr  museums  systemsthinking  systems  newaesthetic  voice  risk  bruceschneier  2012  aaronstraupcope  aaron  intent  motive  storiesfromthenewaesthetic  canon 
october 2012 by robertogreco
Rhizome | The Universal Texture
"By capturing screenshots of these images in Google Earth, I am pausing them and pulling them out of the update cycle. I capture these images to archive them - to make sure there is a record that this image was produced by the Universal Texture at a particular time and place. As I kept looking for more anomalies, and revisiting anomalies I had already discovered, I noticed the images I had discovered were disappearing. The aerial photographs were getting updated, becoming 'flatter' – from being taken at less of an angle or having the shadows below bridges muted. Because Google Earth is constantly updating its algorithms and three-dimensional data, each specific moment could only be captured as a still image. I know Google is trying to fix some of these anomalies too – I’ve been contacted by a Google engineer who has come up with a clever fix for the problem of drooping roads and bridges. Though the change has yet to appear in the software, it’s only a matter of time."
algorithms  art  google  googlemaps  maps  googleearth  via:stml  clementvalla 
august 2012 by robertogreco
A conversation between Rob Walker and co-founder of Area/Code, Kevin Slavin : Observatory: Design Observer
"I know some of the people involved in Museum of the Phantom City, and they’re good people. But, in order to see the things that they want to point out, I have to go that place — well, okay. But then, once I’m there, the best way to display that information is the juxtaposition of it in front of what I’ve just traveled there to see? I don’t think so. Bottom line, maybe, is that visualizing the invisible is difficult, and might not be best expressed through the metaphor of the camera."

"What's important to me about the kinds of things we were doing with Area/Code — and all the designers around us — is that we were building systems in the middle of the data, some systems that gave us a way to read, and reasons to read it. The stories we were telling with locative games were fiction, but as always, good fiction describes the real world rather precisely."
trading  algorithmictrading  gps  geocaching  design  urban  softwareforcities  software  algorithms  cities  finance  paolaantonelli  reality  phantomcity  augmentedreality  storytelling  fiction  photography  area/code  robwalker  2011  kevinslavin  ar  from delicious
may 2012 by robertogreco
Kevin Slavin: How algorithms shape our world | Video on TED.com
"Kevin Slavin argues that we're living in a world designed for -- and increasingly controlled by -- algorithms. In this riveting talk from TEDGlobal, he shows how these complex computer programs determine: espionage tactics, stock prices, movie scripts, and architecture. And he warns that we are writing code we can't understand, with implications we can't control."
kevinslavin  algorithms  complexity  coding  ted  data  finance  art  architecture  math  mathematics  control  2011  netflix  markets  bots  from delicious
july 2011 by robertogreco
Eli Pariser: Beware online "filter bubbles" | Video on TED.com
"As web companies strive to tailor their services (including news and search results) to our personal tastes, there's a dangerous unintended consequence: We get trapped in a "filter bubble" and don't get exposed to information that could challenge or broaden our worldview. Eli Pariser argues powerfully that this will ultimately prove to be bad for us and bad for democracy."
elipariser  echochambers  serendipity  internet  online  web  media  relevance  search  google  facebook  exposure  2011  ted  via:jessebrand  politics  crosspollination  dialogue  walledgardens  algorithms  censorship  personalization  advertising  yahoonews  huffingtonpost  nytimes  washingtonpost  impulse  aspirationalselves  filterbubble  dialog  from delicious
may 2011 by robertogreco
Amazon’s $23,698,655.93 book about flies
"behavior of profnath is easy to deconstruct. They presumably have a new copy of the book, & want to make sure theirs is the lowest priced…Why though would bordeebook want to make sure theirs is always more expensive? Since prices of all the sellers are posted, this would seem to guarantee they would get no sales. But maybe this isn’t right…some buyers might choose to pay a few extra $ for level of confidence in transaction…seems fairly risky…most people probably don’t behave that way…meanwhile you’ve got a book sitting on the shelf collecting dust…<br />
<br />
My preferred explanation for bordeebook’s pricing…they do not actually possess the book. Rather, they noticed that someone else listed a copy for sale, and so they put it up as well – relying on their better feedback record to attract buyers. But, of course, if someone actually orders the book, they have to get it – so they have to set their price significantly higher than the price they’d have to pay to get the book elsewhere."
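
The runaway feedback loop is easy to reproduce. A toy Python simulation follows, using the approximate multipliers Eisen reports (roughly 0.9983 for profnath and 1.270589 for bordeebook); the starting prices and the daily update cadence are invented.

profnath, bordeebook = 2.00, 2.50     # hypothetical starting prices, in dollars

for day in range(25):
    profnath = 0.9983 * bordeebook    # always slightly undercut the other seller
    bordeebook = 1.270589 * profnath  # always sit above, banking on better feedback

print(f"day 25: profnath ${profnath:,.2f}, bordeebook ${bordeebook:,.2f}")

Each round multiplies both prices by about 1.268, so they climb exponentially until a human intervenes, which is how a biology text ended up listed at $23,698,655.93.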
amazon  algorithms  books  pricingbots  pricing  money  michaeleisen  from delicious
april 2011 by robertogreco
Khan Academy and the mythical math cure « Generation YES Blog
"There is no doubt that Khan Academy fills a perceived need that something needs to be fixed about math instruction. But at some point, when you talk about learning math, you have to define your terms. If you are a strict instructionist – you are going to love Khan Academy. If you are a constructivist, you are going to find fault with a solution that is all about instruction. So any discussion of Khan Academy in the classroom has to start with the question, how do YOU believe people learn?

I have more to say about Khan Academy and math education in the US — this post turned into 4 parts!

Part 1 – Khan Academy and the mythical math cure (this post)
Part 2 – Khan Academy – algorithms and autonomy
Part 3 – Don’t we need balance? and other questions
Part 4 – Monday… Someday"
math  learning  khanacademy  education  constructivism  instruction  memorization  algorithms  schools  teaching  sylviamartinez  2011  instructionism  mathematics  tcsnmy  from delicious
april 2011 by robertogreco
The Way You Learned Math Is So Old School : NPR
"there's a reason elementary schools are teaching arithmetic in a new way.

"…largely to reflect the different needs of society. No one ever in their real life anymore needs to — & in most cases never does — do the calculations themselves."

Computers do arithmetic for us…but making computers do the things we want them to do requires algebraic thinking. For instance, take a computer spreadsheet. The computer does all the calculations for you automatically. But you have to write the macros that tell it what calculations to do — and that is algebraic thinking.

"You cannot become good at algebra w/out a mastery of arithmetic, but arithmetic itself is no longer the ultimate goal." Thus the emphasis in teaching mathematics today is on getting people to be sophisticated, algebraic thinkers.

That doesn't mean that kids can skip learning their multiplication tables. "But the way it's taught now is you get to multiplication tables by understanding the number system & what numbers mean."
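
A toy Python illustration of the spreadsheet point above (all names and numbers invented): the first line is arithmetic, one specific answer; the function is the algebraic move, a general rule over unknowns.

subtotal = 19.99 + 4.50              # arithmetic: one specific calculation

def total(prices, tax_rate):
    # algebra: a rule that works for any list of prices and any rate
    return sum(prices) * (1 + tax_rate)

print(subtotal)                      # 24.49
print(total([19.99, 4.50], 0.08))    # 26.4492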
education  math  teaching  learning  algebra  algebraicthinking  criticalthinking  mathematics  change  algorithms  parenting  tcsnmy  deschooling  from delicious
march 2011 by robertogreco
Kevin Slavin on Lift 11: Geneva - live streaming video powered by Livestream
Quotes transcribed by David Smith: "things we write but can no longer read"; "three problems … opacity, inscrutability … The third one is darker and a little bit harder to describe — I don't even know what to call it yet"; flash crash; dark pools; 60% of all movies rented on Netflix are rented because Netflix recommended them; 70% of current Wall St trades are algorithms trying to be invisible or other algorithms trying to find the invisible algorithms
kevinslavin  technology  algorithms  evolution  wallstreet  cities  darkpools  netflix  trading  finance  invisibilealgorithms  financialservices  realestate  nyc  manhattan  songs  film  television  tv  opacity  inscrutability  elevators  lift11  roomba  robots  from delicious
february 2011 by robertogreco
Hard-Coding Bias in Google "Algorithmic" Search Results
"I present categories of searches for which available evidence indicates Google has "hard-coded" its own links to appear at the top of algorithmic search results, and I offer a methodology for detecting certain kinds of tampering by comparing Google results for similar searches. I compare Google's hard-coded results with Google's public statements and promises, including a dozen denials but at least one admission. I tabulate affected search terms and examine other mechanisms also granting favored placement to Google's ancillary services. I conclude by analyzing the impact of Google's tampering on users and competition, and by proposing principles to block Google's bias."
algorithms  google  hard-coding  bias  ethics  programming  seo  ranking  analytics  from delicious
november 2010 by robertogreco
Horizons [iPhone, iPad, oF] - "Exploration of colour, sound and form" by @julapy + Eli Murray | CreativeApplications.Net
"Horizons is a interactive sound toy which brings together the atmospheric sounds of Eli Murray (Gentleforce) and generative visuals of Lukasz Karluk. The app is an exploration of colour, sound and form.

The design of the piece focuses on creating subtle colour refractions in a rich colour scape using an algorithmic process known as triangulation. Fluidity of interaction is achieved with real-time physics built using the Box2d library and openFrameworks."
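
For a sense of what "triangulation" means here, a minimal sketch in Python with SciPy (the app itself is openFrameworks/C++, and the colour mapping below is invented): scatter points, triangulate them, and colour each triangle by position.

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
points = rng.random((40, 2))       # scattered points in the unit square
tri = Delaunay(points)             # Delaunay triangulation of the point set

# colour each triangle from its centroid; neighbouring triangles then get
# nearby colours, a cheap way to suggest the subtle refractions described
for simplex in tri.simplices[:5]:  # first few triangles only
    cx, cy = points[simplex].mean(axis=0)
    print(f"triangle {simplex} -> rgb({cx:.2f}, {cy:.2f}, 0.50)")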
horizons  iphone  applications  ipad  sound  toys  color  form  lukaszkarluk  elimurray  gentleforce  algorithms  triangulation  physics  ios  from delicious
september 2010 by robertogreco
Self-organizing map - Wikipedia
"A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps are different from other artificial neural networks in the sense that they use a neighborhood function to preserve the topological properties of the input space."
maps  mathematics  networks  optimization  datamining  database  clustering  classification  algorithms  ai  learning  programming  research  statistics  visualization  neuralnetworks  mapping  som  self-organizingmaps 
june 2010 by robertogreco
Perlin Noise
"Many people have used random number generators in their programs to create unpredictability, make the motion and behavior of objects appear more natural, or generate textures. Random number generators certainly have their uses, but at times their output can be too harsh to appear natural. This article will present a function which has a very wide range of uses, more than I can think of, but basically anywhere where you need something to look natural in origin. What's more it's output can easily be tailored to suit your needs."
animation  mathematics  processing  algorithms  math  graphics  perlinnoise  random  howto  programming  visualization  software  design  gamedev  texture  noise  tutorial  via:robinsloan 
february 2010 by robertogreco
Teach Computer Science without a computer! | Computer Science Unplugged
"Computer Science Unplugged is a series of learning activities that reveals a little-known secret: computer science isn't really about computers at all!"
education  learning  programming  children  science  teaching  games  computers  compsci  algorithms  puzzles  computerscience  tcsnmy  edg  srg  coding 
august 2009 by robertogreco
