robertogreco + computing   317

Laurel Schwulst, "Blogging in Motion" - YouTube
"This video was originally published as part of peer-to-peer-web.com's NYC lecture series on Saturday, May 26, 2018 at the School for Poetic Computation.

It has been posted here for ease of access.

You can find many other great talks on the site:
https://peer-to-peer-web.com

And specifically more from the NYC series:
https://peer-to-peer-web.com/nyc "

[See also:
https://www.are.na/laurel-schwulst/blogging-in-motion ]
laurelschwulst  2019  decentralization  p2p  web  webdesign  blogging  movement  travel  listening  attention  self-reflection  howwewrite  writing  walking  nyc  beakerbrowser  creativity  pokemon  pokemonmoon  online  offline  internet  decentralizedweb  dat  p2ppublishing  p2pweb  distributed  webdev  stillness  infooverload  ubiquitous  computing  internetofthings  casygollan  calm  calmtechnology  zoominginandout  electricity  technology  copying  slow  small  johnseelybrown  markweiser  xeroxparc  sharing  oulipo  constraints  reflection  play  ritual  artleisure  leisurearts  leisure  blogs  trains  kylemock  correspondence  caseygollan  apatternlanguage  intimacy 
10 weeks ago by robertogreco
anton on Twitter: "Things that happen in Silicon Valley and also the Soviet Union: - waiting years to receive a car you ordered, to find that it's of poor workmanship and quality - promises of colonizing the solar system while you toil in drudgery day in,
"Things that happen in Silicon Valley and also the Soviet Union:

- waiting years to receive a car you ordered, to find that it's of poor workmanship and quality

- promises of colonizing the solar system while you toil in drudgery day in, day out

- living five adults to a two room apartment

- being told you are constructing utopia while the system crumbles around you

- 'totally not illegal taxi' taxis by private citizens moonlighting to make ends meet

- everything slaved to the needs of the military-industrial complex

- mandatory workplace political education

- productivity largely falsified to satisfy appearance of sponsoring elites

- deviation from mainstream narrative carries heavy social and political consequences

- networked computers exist but they're really bad

- Henry Kissinger visits sometimes for some reason

- elite power struggles result in massive collateral damage, sometimes purges

- failures are bizarrely upheld as triumphs

- otherwise extremely intelligent people just turning the crank because it's the only way to get ahead

- the plight of the working class is discussed mainly by people who do no work

- the United States as a whole is depicted as evil by default

- the currency most people are talking about is fake and worthless

- the economy is centrally planned, using opaque algorithms not fully understood by their users"
siliconvalley  sovietunion  tesla  uber  lyft  us  2018  antontroynikov  russia  space  utopia  society  propaganda  labor  work  housing  politics  social  elitism  collateraldamage  militaryindustrialcomplex  evil  currency  fake  economics  economy  planning  algorithms  mainstream  computing  henrykissinger 
10 weeks ago by robertogreco
I Embraced Screen Time With My Daughter—and I Love It | WIRED
I often turn to my sister, Mimi Ito, for advice on these issues. She has raised two well-adjusted kids and directs the Connected Learning Lab at UC Irvine, where researchers conduct extensive research on children and technology. Her opinion is that “most tech-privileged parents should be less concerned with controlling their kids’ tech use and more about being connected to their digital lives.” Mimi is glad that the American Academy of Pediatrics (AAP) dropped its famous 2x2 rule—no screens for the first two years, and no more than two hours a day until a child hits 18. She argues that this rule fed into stigma and parent-shaming around screen time at the expense of what she calls “connected parenting”—guiding and engaging in kids’ digital interests.

One example of my attempt at connected parenting is watching YouTube together with Kio, singing along with Elmo as Kio shows off the new dance moves she’s learned. Every day, Kio has more new videos and favorite characters that she is excited to share when I come home, and the songs and activities follow us into our ritual of goofing off in bed as a family before she goes to sleep. Her grandmother in Japan is usually part of this ritual in a surreal situation where she is participating via FaceTime on my wife’s iPhone, watching Kio watching videos and singing along and cheering her on. I can’t imagine depriving us of these ways of connecting with her.

The (Unfounded) War on Screens

The anti-screen narrative can sometimes read like the War on Drugs. Perhaps the best example is Glow Kids, in which Nicholas Kardaras tells us that screens deliver a dopamine rush rather like sex. He calls screens “digital heroin” and uses the term “addiction” when referring to children unable to self-regulate their time online.

More sober (and less breathlessly alarmist) assessments by child psychologists and data analysts offer a more balanced view of the impact of technology on our kids. Psychologist and baby observer Alison Gopnik, for instance, notes: “There are plenty of mindless things that you could be doing on a screen. But there are also interactive, exploratory things that you could be doing.” Gopnik highlights how feeling good about digital connections is a normal part of psychology and child development. “If your friends give you a like, well, it would be bad if you didn’t produce dopamine,” she says.

Other research has found that the impact of screens on kids is relatively small, and even the conservative AAP says that cases of children who have trouble regulating their screen time are not the norm, representing just 4 percent to 8.5 percent of US children. This year, Andrew Przybylski and Amy Orben conducted a rigorous analysis of data on more than 350,000 adolescents and found a nearly negligible effect on psychological well-being at the aggregate level.

In their research on digital parenting, Sonia Livingstone and Alicia Blum-Ross found widespread concern among parents about screen time. They posit, however, that “screen time” is an unhelpful catchall term and recommend that parents focus instead on quality and joint engagement rather than just quantity. The Connected Learning Lab’s Candice Odgers, a professor of psychological sciences, reviewed the research on adolescents and devices and found as many positive as negative effects. She points to the consequences of unbalanced attention on the negative ones. “The real threat isn’t smartphones. It’s this campaign of misinformation and the generation of fear among parents and educators.”

We need to immediately begin rigorous, longitudinal studies on the effects of devices and the underlying algorithms that guide their interfaces and their interactions with and recommendations for children. Then we can make evidence-based decisions about how these systems should be designed, optimized for, and deployed among children, and not put all the burden on parents to do the monitoring and regulation.

My guess is that for most kids, this issue of screen time is statistically insignificant in the context of all the other issues we face as parents—education, health, day care—and for those outside my elite tech circles even more so. Parents like me, and other tech leaders profiled in a recent New York Times series about tech elites keeping their kids off devices, can afford to hire nannies to keep their kids off screens. Our kids are the least likely to suffer the harms of excessive screen time. We are also the ones least qualified to be judgmental about other families who may need to rely on screens in different ways. We should be creating technology that makes screen entertainment healthier and fun for all families, especially those who don’t have nannies.

I’m not ignoring the kids and families for whom digital devices are a real problem, but I believe that even in those cases, focusing on relationships may be more important than focusing on controlling access to screens.

Keep It Positive

One metaphor for screen time that my sister uses is sugar. We know sugar is generally bad for you and has many side effects and can be addictive to kids. However, the occasional bonding ritual over milk and cookies might have more benefit to a family than an outright ban on sugar. Bans can also backfire, fueling binges and shame as well as mistrust and secrecy between parents and kids.

When parents allow kids to use computers, they often use spying tools, and many teens feel parental surveillance is an invasion of their privacy. One study showed that using screen time to punish or reward behavior actually increased net screen time use by kids. Another study by Common Sense Media shows what seems intuitively obvious: Parents use screens as much as kids. Kids model their parents—and have a laserlike focus on parental hypocrisy.

In Alone Together, Sherry Turkle describes the fracturing of family cohesion because of the attention that devices get and how this has disintegrated family interaction. While I agree that there are situations where devices are a distraction—I often declare “laptops closed” in class, and I feel that texting during dinner is generally rude—I do not feel that iPhones necessarily draw families apart.

In the days before the proliferation of screens, I ran away from kindergarten every day until they kicked me out. I missed more classes than any other student in my high school and barely managed to graduate. I also started more extracurricular clubs in high school than any other student. My mother actively supported my inability to follow rules and my obsessive tendency to pursue my interests and hobbies over those things I was supposed to do. In the process, she fostered a highly supportive trust relationship that allowed me to learn through failure and sometimes get lost without feeling abandoned or ashamed.

It turns out my mother intuitively knew that it’s more important to stay grounded in the fundamentals of positive parenting. “Research consistently finds that children benefit from parents who are sensitive, responsive, affectionate, consistent, and communicative,” says education professor Stephanie Reich, another member of the Connected Learning Lab who specializes in parenting, media, and early childhood. One study shows measurable cognitive benefits from warm and less restrictive parenting.

When I watch my little girl learning dance moves from every earworm video that YouTube serves up, I imagine my mother looking at me while I spent every waking hour playing games online, which was my pathway to developing my global network of colleagues and exploring the internet and its potential early on. I wonder what wonderful as well as awful things will have happened by the time my daughter is my age, and I hope a good relationship with screens and the world beyond them can prepare her for this future."
joiito  parenting  screentime  mimiito  techology  screens  children  alisongopnik  2019  computers  computing  tablets  phones  smartphones  mobile  nicholaskardaras  addiction  prohibition  andrewprzybylski  aliciablum-ross  sonialvingstone  amyorben  adolescence  psychology  candiceodgers  research  stephaniereich  connectedlearning  learning  schools  sherryturkle  trust 
march 2019 by robertogreco
Gradients are everywhere from Facebook to the New York Times - Vox
"Here’s why The Daily, Coachella, and Facebook all use backgrounds that look like a sunset."
"What it is: A digital or print effect where one color fades into another. Typically rendered in soft or pastel tones.

Where it is: Gradients are seemingly everywhere in media and marketing. They are part of a suite of Facebook status backdrops introduced in 2017 and the branding for the New York Times’ popular podcast The Daily, which displays a yellow to blue gradient.

Gradients have taken over Coachella’s app and website (if you watch carefully, the colors shift). Ally’s billboard in A Star Is Born is a full-on gradient, and so was the branding for the Oscars ceremony that recognized Lady Gaga.

On Instagram, they provide a product backdrop for popular Korean beauty brand Glow, and have been embraced by indie magazines Gossamer and Anxy — both designed by Berkeley studio Anagraph.

On the luxury front, Brooklyn wallpaper company Calico has released an entire collection of gradient wallpapers called Aurora. Meanwhile, Spanish fashion house Loewe has introduced a version of their trendy Elephant bag in a spectrum of pink to yellow.

Are gradients drinkable? Heck yes, they are. Seltzer startup Recess has gone all-in on gradients in their branding.

Why you’re seeing it everywhere: Gradients are the confluence of three different trends: Light and Space art, vaporwave, and bisexual lighting.

In the art and design world, Light and Space — developed in the 1960s and ’70s — has been experiencing a revival thanks to its Instagramability. Light and Space pioneer James Turrell has been embraced by celebrities like Beyoncé, Drake, and Kanye West. Drake’s Hotline Bling video was inspired by Turrell’s light-infused rooms called Ganzfelds. The Kardashian-Jenner-West crew posted an Instagram in front of one of Turrell’s works in Los Angeles. (I was yelled at by security for taking a picture there but it’s fine.)

[image]

Most recently, West donated $10 million to the artist.

James Turrell’s works come with a warning because the visitor quickly loses all depth perception. Soft gradients are alluring because they cut through the noise of social media, but they also are disorienting. The Twitter bot soft landscapes operates on a similar principle, but some days the landscape all but disappears.

“It’s nice to see calming things amongst all of the social ramifications of Instagram,” says Rion Harmon of Day Job, the design firm of record for Recess. Harmon compares the Recess branding to a sunset so beautiful you can’t help but stare (or take a picture) however busy you are. Changes to the sky are even more pronounced in Los Angeles, where Harmon’s studio is now based. “The quality of light in LA is something miraculous,” he says. The Light and Space movement was also started in Southern California, and it’s in the DNA of Coachella.

Gradients might be a manifestation of longing for sunshine and surf. But they also belong to the placeless digital citizen. 1980s and ’90s kids may remember messing around in Microsoft Paint and Powerpoint as a child, filling in shapes with these same gradients. It’s no surprise that this design effect is part of the technological nostalgia that fuels the vaporwave movement.

Vaporwave is a musical and aesthetic movement (started in the early 2010s) that spliced ambient music, advertising, and imagery from when the internet started. Gradient artwork shared by the clothing brand Public Space is vaporwave. So is this meme posted by direct-to-consumer health startup Hers.

[image]

When Facebook rolled out gradient status backgrounds in 2017, they knew what they were doing. “They have so much data into how the world works,” says Kerry Flynn, platforms reporter at Digiday. “They had a slow rollout to the color gradients … Obviously they could have pulled the plug anytime.”

Flynn goes on to explain that Facebook realized they had become their own worst enemy. There was so much information on their platform that personal sharing was down and they had to make it novel again. “Facebook wants our personal data, as much as possible. Hence, colorful backgrounds that encourage me to post information about myself and for my friends to ‘Like’ it and comment,” she says.

It’s ironic that in order to do so, Facebook borrowed from a digital texture most millennials associate with a time before Facebook. But it also mimics a current trend in film and television: bisexual lighting.

As Know Your Meme explains, “bisexual lighting is a slang in the queer community for neon lighting with high emphasis on pinks, purples, and blues in film.” These pinks, purples and blues often fade into one another — appearing like a gradient when rendered in two dimensions. Bisexual lighting shows up in the futuristic genre cyberpunk, which imagines an era in which high technology and low technology combine and cities are neon-bathed, landmarkless Gothams. (Overlapping with vaporwave.) Mainstream examples of cyberpunk include Blade Runner, Ghost in the Shell, and Black Mirror (specifically the “San Junipero” episode). Hotline Bling makes the list of examples for bisexual lighting; the gradients come full circle.

Tati Pastukhova, co-founder of interactive art space ARTECHOUSE, says gradients have become more popular as computer display quality increases. She says the appeal of gradients is “the illusion of dimension, and giving 2-D designs 3-D appeal.” ARTECHOUSE is full of light-based digital installations, but visitors naturally gravitate toward what is most photogenic — including, unexpectedly, the soft lighting the space installed along their staircase for safety reasons.

[image]

Before gradients, neon lettering was the Instagram lighting aesthetic du jour. Gradients are wordless — like saying Live Laugh Love with just colors. “There’s an inherent progression in gradients, you are being taken through something. Like that progression of Live Laugh Love. Of starting at one point and ending at another point. Evoking that visually is something people are very drawn to,” says Taylor Lorenz, a staff writer at the Atlantic who covers internet culture.

Gradients are also boundaryless. In 2016, artist Wolfgang Tillmans used gradients in his anti-Brexit poster campaign. Through gradients, designers have found the perfect metaphor for subjectivity in an era when even the word “fact” is up for debate. “Gradients are a visual manifestation of all of these different spectrums that we live on,” including those of politics, gender, and sexuality, says Lorenz. “Before, I think we lived in a binary world. [Gradients are] a very modern representation of the world.”

At the very least, gradients offer an opportunity to self-soothe.

Calico co-founder Nick Cope says the Aurora collection is often used in meditation rooms. He and his wife have installed it across from their bed at home. “The design was created to immerse viewers in waves and washes of tranquil atmospheric color,” Cope says, adding, “Regardless of the weather, we wake up to a sunrise every morning.”"

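[The “one color fades into another” effect described above is, at bottom, linear interpolation between two colors. A minimal sketch in Python—the function names are illustrative, not from any tool mentioned in the article:

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB colors for t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

def gradient(c1, c2, steps):
    """Return `steps` colors fading evenly from c1 to c2."""
    return [lerp_color(c1, c2, i / (steps - 1)) for i in range(steps)]

# A three-step fade from pure red to pure blue passes through purple:
# gradient((255, 0, 0), (0, 0, 255), 3)
# → [(255, 0, 0), (128, 0, 128), (0, 0, 255)]
```

CSS's `linear-gradient()` and tools like Paint do essentially this per pixel, often interpolating in other color spaces for smoother midtones.]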
[See also:
"Is 'bisexual lighting' a new cinematic phenomenon?"
https://www.bbc.com/news/entertainment-arts-43765856 ]
color  gradients  design  socialmedia  jamesturrell  2019  light  space  perception  neon  desig  graphicdesign  ux  ui  wolfgangtillmans  nickcope  meditation  colors  tatipastukhova  artechouse  computing  bisexuallighting  lighting  queer  knowyourmeme  pink  purple  blue  cyberpunk  future  technology  hightechnology  lowtechnology  vaporwave  bladerunner  ghostintheshell  blackmirror  sanjunipero  hotlinebling  kerryflynn  facebook  microsoftpaint  rionharmon  sunsets  california  socal  losangeles  coachella  depthperception  ganzfelds  drake  kanyewest  beyoncé  anagraph  ladygaga  daisyalioto 
march 2019 by robertogreco
Playing at City Building | MIT Architecture
"A century ago, American children regularly played at city building in schools and youth serving institutions. Much of this activity took the form of “junior republics” – miniature cities, states, and nations run by kids. With supervising adults in the background, the young officials made laws, took civil service exams, paid taxes, ran restaurants, printed newspapers, and role played other civic activities. This talk, which draws on my forthcoming book States of Childhood, explores the historical and contemporary significance of these participatory simulations. I'll argue that the history of the republic movement helps to make visible children’s widespread contributions to American city building, and how their varied contributions were rendered invisible through an earlier era’s discourse about simulation and play. I'll also discuss the republic movement's resonances with a range of contemporary techniques and technologies from role playing and gamification to virtual worlds and augmented reality games, and suggest how recent work in the history of computing and information technology is making available new bodies of theoretical and empirical research for scholars and practitioners seeking a “usable past.”

Jennifer Light

Director of the Program in Science, Technology, and Society; Bern Dibner Professor of the History of Science and Technology; Professor of Urban Studies and Planning
Jen Light’s eclectic interests span the history of science and technology in America over the past 150 years. She is the author of three books as well as articles and essays covering topics from female programming pioneers, to early attempts to organize smart cities, to the racial implications of algorithmic thinking in federal housing policy, to the history of youth political media production, to the uptake of scientific and technical ideas and innovations across other fields. Professor Light is especially fascinated by smart people’s bad ideas: efforts by well-intentioned scientists and engineers to apply scientific methods and technological tools to solve social and political problems—and how the history of their failures can inform contemporary scientific and engineering practice.

Light holds degrees from Harvard University and the University of Cambridge. She has been a member of the Institute for Advanced Study and the Derek Brewer Visiting Fellow at Emmanuel College, University of Cambridge. Her work has been supported by the Graham Foundation for Advanced Studies in the Fine Arts and the Andrew W. Mellon Foundation, and honored with the Catherine Bauer Wurster Prize from the Society for American City and Regional Planning History and an honorary doctorate from the Illinois Institute of Technology. Light serves on the editorial boards of IEEE Annals of the History of Computing; Information and Culture; Historical Studies in the Natural Sciences; and Journal of Urban History. Professor Light was previously on the faculty of the School of Communication and the Departments of History and Sociology at Northwestern University."
jenniferlight  2018  children  youth  teens  urban  urbanism  cityplanning  cities  citybuilding  schools  education  civics  modeling  participatory  simulations  participation  government  governance  democracy  politics  computing  technology  society  history  via:nickkaufmann  childhood  play  roleplaying  gamification  virtualworlds  worldbuilding 
december 2018 by robertogreco
Bay Area Disrupted: Fred Turner on Vimeo
"Interview with Fred Turner in his office at Stanford University.

http://bayareadisrupted.com/

https://fredturner.stanford.edu

Graphics: Magda Tu
Editing: Michael Krömer
Concept: Andreas Bick"
fredturner  counterculture  california  opensource  bayarea  google  softare  web  internet  history  sanfrancisco  anarchism  siliconvalley  creativity  freedom  individualism  libertarianism  2014  social  sociability  governance  myth  government  infrastructure  research  online  burningman  culture  style  ideology  philosophy  apolitical  individuality  apple  facebook  startups  precarity  informal  bureaucracy  prejudice  1960s  1970s  bias  racism  classism  exclusion  inclusivity  inclusion  communes  hippies  charism  cultofpersonality  whiteness  youth  ageism  inequality  poverty  technology  sharingeconomy  gigeconomy  capitalism  economics  neoliberalism  henryford  ford  empowerment  virtue  us  labor  ork  disruption  responsibility  citizenship  purpose  extraction  egalitarianism  society  edtech  military  1940s  1950s  collaboration  sharedconsciousness  lsd  music  computers  computing  utopia  tools  techculture  location  stanford  sociology  manufacturing  values  socialchange  communalism  technosolutionism  business  entrepreneurship  open  liberalism  commons  peerproduction  product 
december 2018 by robertogreco
iPad Pro (2018) Review: Two weeks later! - YouTube
[at 7:40, problems mentioned with iOS on the iPad Pro as-is for Rene Ritchie keeping it from being a laptop replacement]

"1. Import/export more than just photo/video [using USB drive, hard drive, etc]

2. Navigate with the keyboard [or trackpad/mouse]

3. 'Desktop Sites' in Safari [Why not a desktop browser? (maybe in addition to Safari, something like a "pro" Safari with developer tools and extensions)]

4. Audio recording [system-wide like the screen recording for capturing conversations from Skype/Facetime/etc]

5. Develop for iPad on iPad

6. Multi-user for everyone [like on a Chromebook]"

[I'd be happy with just 1, 2, and 3. 6 would also be nice. 4 and 5 are not very important to me, but also make sense.]

[Some of my notes regarding the state of the tablet-as-laptop replacement in 2018, much overlap with what is above:

iOS tablets
no mouse/trackpad support, file system is still a work in progress, no desktop browser equivalents, Pro models are super expensive given these tradeoffs, especially with additional keyboard and pen costs

Microsoft Surface
tablet experience is lacking, Go (closest to meeting my needs and price) seems a little overpriced for the top model (entry model needs more RAM and faster storage), also given the extra cost of keyboard and pen

Android tablets
going nowhere, missing desktop browser

ChromeOS tablets
underpowered (Acer Chromebook Tab 10) or very expensive (Google Pixel Slate) or I don’t like it enough (mostly the imbalance between screen and keyboard, and the keyboard feel) for the cost (HP x2), but ChromeOS tablets seem as promising as iPads as laptop replacements at this point

ChromeOS convertibles
strange having the keyboard in the back while using as a tablet (Samsung Chromebook Plus/Pro, ASUS Chromebook Flip C302CA, Google Pixelbook (expensive)) -- I used a Chromebook Pro for a year (as work laptop) and generally it was a great experience, but they are ~1.5 years old now and haven’t been refreshed. Also, the Samsung Chromebook Plus (daughter has one of these, used it for school and was happy with it until new college provided a MacBook Pro) refresh seems like a step back because of the lesser screen, the increase in weight, and a few other things.

Additional note:
Interesting how Microsoft led the way in this regard (tablet as laptop replacement), but again didn't get it right enough and is now being passed by the others, at least around me]

[finally, some additional discussion and comparison:

The Verge: "Is this a computer?" (Apr 11, 2018)
https://www.youtube.com/watch?v=K7imG4DYXlM

Apple's "What's a Computer?" iPad ad (Jan 23, 2018, no longer available directly from Apple)
https://www.youtube.com/watch?v=llZys3xg6sU

Apple's "iPad Pro — 5 Reasons iPad Pro can be your next computer — Apple" (Nov 19, 2018)
https://www.youtube.com/watch?v=tUQK7DMys54

The Verge: "Google Pixel Slate Review: half-baked" (Nov 27, 2018)
https://www.youtube.com/watch?v=BOa6HU_he2A
https://www.theverge.com/2018/11/27/18113447/google-pixel-slate-review-tablet-chrome-os-android-chromebook-slapdash

Unbox Therapy: "Can The Google Pixel Slate Beat The iPad Pro?" (Nov 28, 2018)
https://www.youtube.com/watch?v=lccvHF4ODNY

The Verge: "Google keeps failing to understand tablets" (Nov 29, 2018)
https://www.theverge.com/2018/11/29/18117520/google-tablet-android-chrome-os-pixel-slate-failure

The Verge: "Chrome OS isn't ready for tablets yet" (Jul 18, 2018)
https://www.youtube.com/watch?v=Eu9JBj7HNmM

The Verge: "New iPad Pro review: can it replace your laptop?" (Nov 5, 2018)
https://www.youtube.com/watch?v=LykS0TRSHLY
https://www.theverge.com/2018/11/5/18062612/apple-ipad-pro-review-2018-screen-usb-c-pencil-price-features

Navneet Alang: "The misguided attempts to take down the iPad Pro" (Nov 9, 2018)
https://theweek.com/articles/806270/misguided-attempts-take-down-ipad-pro

Navneet Alang: "Apple is trying to kill the laptop" (Oct 31, 2018)
https://theweek.com/articles/804670/apple-trying-kill-laptop

The Verge: "Microsoft Surface Go review: surprisingly good" (Aug 7, 2018)
https://www.youtube.com/watch?v=N7N2xunvO68
https://www.theverge.com/2018/8/7/17657174/microsoft-surface-go-review-tablet-windows-10

The Verge: "The Surface Go Is Microsoft's Hybrid PC Dream Made Real: It’s time to think of Surface as Surface, and not an iPad competitor" (Aug 8, 2018)
https://www.theverge.com/2018/8/8/17663494/microsoft-surface-go-review-specs-performance

The Verge: "Microsoft Surface Go hands-on" (Aug 2, 2018)
https://www.youtube.com/watch?v=dmENZqKPfws

Navneet Alang: "Is Microsoft's Surface Go doomed to fail?" (Jul 12, 2018)
https://theweek.com/articles/784014/microsofts-surface-doomed-fail

Chrome Unboxed: "Google Pixel Slate: Impressions After A Week" (Nov 27, 2018)
https://www.youtube.com/watch?v=ZfriNj2Ek68
https://chromeunboxed.com/news/google-pixel-slate-first-impressions/

Unbox Therapy: "I'm Quitting Computers" (Nov 18, 2018)
https://www.youtube.com/watch?v=w3oRJeReP8g

Unbox Therapy: "The Truth About The iPad Pro..." (Dec 5, 2018)
https://www.youtube.com/watch?v=JXqou3SVbMw

The Verge: "Tablet vs laptop" (Mar 22, 2018)
https://www.youtube.com/watch?v=Rm_zQP9JIJI

Marques Brownlee: "iPad Pro Review: The Best Ever... Still an iPad!" (Nov 14, 2018)
https://www.youtube.com/watch?v=N1e_voQvHYk

Engadget: "iPad Pro 2018 Review: Almost a laptop replacement" (Nov 6, 2018)
https://www.youtube.com/watch?v=jZzmMpP2BNw

Matthew Moniz: "iPad Pro 2018 - Overpowered Netflix Machine or Laptop Replacement?" (Nov 8, 2018)
https://www.youtube.com/watch?v=P0ZFlFG67kY

WSJ: "Can the New iPad Pro Be Your Only Computer?" (Nov 16, 2018)
https://www.youtube.com/watch?v=kMCyI-ymKfo
https://www.wsj.com/articles/apples-new-ipad-pro-great-tablet-still-cant-replace-your-laptop-1541415600

Ali Abdaal: "iPad vs Macbook for Students (2018) - Can a tablet replace your laptop?" (Oct 10, 2018)
https://www.youtube.com/watch?v=xIx2OQ6E6Mc

Washington Post: "Nope, Apple’s new iPad Pro still isn’t a laptop" (Nov 5, 2018)
https://www.washingtonpost.com/technology/2018/11/05/nope-apples-new-ipad-pro-still-isnt-laptop/

Canoopsy: "iPad Pro 2018 Review - My Student Perspective" (Nov 19, 2018)
https://www.youtube.com/watch?v=q4dgHuWBv14

Greg' Gadgets: "The iPad Pro (2018) CAN Replace Your Laptop!" (Nov 24, 2018)
https://www.youtube.com/watch?v=Y3SyXd04Q1E

Apple World: "iPad Pro has REPLACED my MacBook (my experience)" (May 9, 2018)
https://www.youtube.com/watch?v=vEu9Zf6AENU

Dave Lee: "iPad Pro 2018 - SUPER Fast, But Why?" (Nov 11, 2018)
https://www.youtube.com/watch?v=Aj6vXhN-g6k

Shahazad Bagwan: "A Week With iPad Pro // Yes It Replaced A Laptop!" (Oct 20, 2017)
https://www.youtube.com/watch?v=jhHwv9QsoP0

Apple's "Homework (Full Version)" iPad ad (Mar 27, 2018)
https://www.youtube.com/watch?v=IprmiOa2zH8

The Verge: "Intel's future computers have two screens" (Oct 18, 2018)
https://www.youtube.com/watch?v=deymf9CoY_M

"The Surface Book 2 is everything the MacBook Pro should be" (Jun 26, 2018)
https://char.gd/blog/2018/the-surface-book-2-is-everything-the-macbook-pro-should-be-and-then-some

"Surface Go: the future PC that the iPad Pro failed to deliver" (Aug 27, 2018)
https://char.gd/blog/2018/surface-go-a-better-future-pc-than-the-ipad-pro

"Microsoft now has the best device lineup in the industry" (Oct 3, 2018)
https://char.gd/blog/2018/microsoft-has-the-best-device-lineup-in-the-industry ]
ipadpro  ipad  ios  computing  reneritchie  2018  computers  laptops  chromebooks  pixelslate  surfacego  microsoft  google  apple  android  microoftsurface  surface 
november 2018 by robertogreco
James Bridle on New Dark Age: Technology and the End of the Future - YouTube
"As the world around us increases in technological complexity, our understanding of it diminishes. Underlying this trend is a single idea: the belief that our existence is understandable through computation, and more data is enough to help us build a better world.

In his brilliant new work, leading artist and writer James Bridle surveys the history of art, technology, and information systems, and reveals the dark clouds that gather over our dreams of the digital sublime."
quantification  computationalthinking  systems  modeling  bigdata  data  jamesbridle  2018  technology  software  systemsthinking  bias  ai  artificialintelligent  objectivity  inequality  equality  enlightenment  science  complexity  democracy  information  unschooling  deschooling  art  computation  computing  machinelearning  internet  email  web  online  colonialism  decolonization  infrastructure  power  imperialism  deportation  migration  chemtrails  folkliterature  storytelling  conspiracytheories  narrative  populism  politics  confusion  simplification  globalization  global  process  facts  problemsolving  violence  trust  authority  control  newdarkage  darkage  understanding  thinking  howwethink  collapse 
september 2018 by robertogreco
CoCalc - Collaborative Calculation in the Cloud
"CoCalc is a sophisticated online environment for

• Mathematical calculation: SageMath, GAP, SymPy, Maxima, …;
• Statistics and Data Science: R Project, Pandas, Statsmodels, Scikit-Learn, TensorFlow, NLTK, …;
• Document authoring: LaTeX, Markdown/HTML, ...
• General purpose computing: Python, Octave, Julia, Scala, …

Zero Setup: getting started does not require any software setup.

1. First, create your personal account.
2. Then, create a project to instantiate your own private workspace.
3. Finally, create a worksheet or upload your own files: CoCalc supports online editing of Jupyter Notebooks, Sage Worksheets, LaTeX files, etc.

Collaborative Environment

• Share your files privately with project collaborators — all files are synchronized in real-time.
• Time-travel is a detailed history of all your edits and everything is backed up in consistent snapshots.
• Finally, you can select any document to publish it online.

A default project under a free plan has a quota of 1.0 GB memory and 3.0 GB of disk space. Subscriptions make hosting more robust and increase quotas."
computing  collaboration  cloud  math  python  latex  chromebooks 
august 2018 by robertogreco
Designing better file organization around tags, not hierarchies
"Computer users organize their files into folders because that is the primary tool offered by operating systems. But applying this standard hierarchical model to my own files, I began to notice shortcomings of this paradigm over the years. At the same time, I used some other information systems not based on hierarchical path names, and they turned out to solve a number of problems. I propose a new way of organizing files based on tagging, and describe the features and consequences of this method in detail.

Speaking personally, I’m fed up with HFSes, on Windows, Linux, and online storage alike. I struggled with file organization for just over a decade before finally writing this article to describe problems and solutions. Life would be easier if I could tolerate the limitations of hierarchical organization, or at least if the new proposal can fit on top of existing HFSes. But fundamentally, there is a mismatch between the narrowness of hierarchies and the rich structure of human knowledge, and the proposed system will not presuppose the features of HFSes. I wish to solicit public feedback on these ideas, and end up with a design plan that I can implement to solve the problems I already have today.

This article is more of a brainstorm than a prescriptive formula. I begin by illustrating how hierarchies fall short on real-life problems, and how existing alternative systems like Git and Danbooru bypass HFS problems to deliver a better user experience. Then I describe a step-by-step model, starting from basic primitives, of a proposed file organization system that includes a number of desirable features by design. Finally, I present some open questions on aspects of the proposal where I’m unsure of the right answer.

I welcome any feedback about anything written here, especially regarding errors, omissions, and alternatives. For example, I might have missed helpful features of traditional HFSes. I know I haven’t read about or tested every alternative file system out there. I know that my proposed file organization scheme might have issues with conceptual and computational complexity, be too general or not expressive enough, or fail to offer a useful feature. And certainly, I don’t know all the ramifications of the proposed system if it gets implemented, on aspects ranging from security to sharing to networks. But I try my best to present tangible ideas as a start toward designing a better system. And ultimately, I want to implement such a proposed file system so that I can store and find my data sanely.

In the arguments presented below, I care most about the data model and less about implementation details. For example in HFSes, I focus on the fact that the file system consists of a tree of labeled edges with file content at the leaves; I ignore details about inodes, journaling, defragmentation, permissions, etc. For example in my proposal, I care about what data each file should store and what each field means; I assert that querying over all files in the file system is possible but don’t go into detail about how to do it efficiently. Also, the term “file system” can mean many things – it could be just a model of what data is stored (e.g. directories and files), or an abstract API of possible commands (e.g. mkdir(), walk(), open(), etc.), or it could refer to a full-blown implementation like NTFS with all its idiosyncratic features and characteristics. When I critique hierarchical file systems, I am mostly commenting at the data model level – regardless of the implementation flavor (ext4, HFS+, etc.). When I propose a new way of organizing files, I am mainly designing the data model, and leaving the implementation details for later work."
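[The data model the article proposes — content-addressed files carrying sets of tags instead of a single path, with queries over all files — can be sketched in a few lines. This is an illustrative toy, not the author's implementation; the class and method names are invented for the example, and content hashing is borrowed from Git, which the article cites as a non-hierarchical system that bypasses HFS problems:

```python
import hashlib

class TagStore:
    """Toy tag-based file store: no directories, just content + tags."""

    def __init__(self):
        self.blobs = {}   # content hash -> file bytes
        self.tags = {}    # content hash -> set of tags

    def add(self, data: bytes, tags):
        # Identify the file by its content, not by a path in a tree.
        file_id = hashlib.sha256(data).hexdigest()
        self.blobs[file_id] = data
        self.tags.setdefault(file_id, set()).update(tags)
        return file_id

    def query(self, *wanted):
        # Files carrying ALL requested tags -- the set intersection that a
        # strict hierarchy can only express by duplicating folder subtrees.
        wanted = set(wanted)
        return [fid for fid, t in self.tags.items() if wanted <= t]

store = TagStore()
store.add(b"trip photo", {"photos", "2018", "japan"})
store.add(b"trip notes", {"notes", "2018", "japan"})
store.add(b"tax form", {"documents", "2018"})
print(len(store.query("2018", "japan")))  # 2

```

The point of the sketch is the query: "2018 AND japan" cuts across what would be separate branches (Photos/, Notes/) in a hierarchy, which is exactly the mismatch between trees and "the rich structure of human knowledge" the article describes.]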
tags  tagging  design  folksonomy  files  filing  computing  organization  via:jslr  hierarchy  hypertext  complexity  multiverse  search 
april 2018 by robertogreco
OCCULTURE: 52. John Michael Greer in “The Polymath” // Druidry, Storytelling & the History of the Occult
"The best beard in occultism, John Michael Greer, is in the house. We’re talking “The Occult Book”, a collection of 100 of the most important stories and anecdotes from the history of the occult in western society. We also touch on the subject of storytelling as well as some other recent material from John, including his book “The Coelbren Alphabet: The Forgotten Oracle of the Welsh Bards” and his translation of a neat little number called “Academy of the Sword”."



"What you contemplate [too much] you imitate." [Uses the example of atheists contemplating religious fundamentalists and how the atheists begin acting like them.] "People always become what they hate. That’s why it's not a good idea to wallow in hate."
2017  johnmichaelgreer  druidry  craft  druids  polymaths  autodidacts  learning  occulture  occult  ryanpeverly  celts  druidrevival  history  spirituality  thedivine  nature  belief  dogma  animism  practice  life  living  myths  mythology  stories  storytelling  wisdom  writing  howwewrite  editing  writersblock  criticism  writer'sblock  self-criticism  creativity  schools  schooling  television  tv  coelbrenalphabet  1980s  ronaldreagan  sustainability  environment  us  politics  lies  margaretthatcher  oraltradition  books  reading  howweread  howwelearn  unschooling  deschooling  facetime  social  socializing  cardgames  humans  human  humanism  work  labor  boredom  economics  society  suffering  misery  trapped  progress  socialmedia  computing  smarthphones  bullshitjobs  shinto  talismans  amulets  sex  christianity  religion  atheism  scientism  mainstream  counterculture  magic  materialism  enlightenment  delusion  judgement  contemplation  imitation  fundamentalism  hate  knowledge 
february 2018 by robertogreco
What free software is so good you can't believe it's available for free? : AskReddit
"I compiled a list of all the software in this thread that got a 1000+ score (in order from top to bottom), along with a short description of each.

Over 1000 upvotes:

Google Maps: Navigation app - https://www.google.com/maps
Blender: 3D modeling software - https://www.blender.org/
VLC: Video player - https://www.videolan.org/index.html
The Windows Snipping Tool: Screen capture tool - https://support.microsoft.com/en-us/help/4027213/windows-open-snipping-tool-and-take-a-screenshot
Space Engine: Space exploration simulator - http://spaceengine.org/
Wikipedia: Online encyclopedia - https://www.wikipedia.org/
MuseScore: Music notation software - https://musescore.org/en
Audacity: Audio editing software - https://www.audacityteam.org/
Handbrake: video converter - https://handbrake.fr/
Zotero: Reference manager - https://www.zotero.org/
Desmos.com: Online Calculator - https://www.desmos.com/
Calibre: ebook manager - https://calibre-ebook.com/download
Notepad++: Text Editor - https://notepad-plus-plus.org/
stud.io: Lego simulator - https://studio.bricklink.com/v2/build/studio.page
Search Everything: Instant file search software - https://www.voidtools.com/
LaTeX: Document software - https://www.latex-project.org/
http://archive.org/: Contains music, movies, books, software, games, and webpages - http://archive.org/
Linux/Apache/Postgres/Gcc: Various Linux based OS’s, webservers, compilers, etc. - https://www.linux.org/
Discord: Chat and Communication software - https://discordapp.com/
OBS Studio: Streaming and Recording software - https://obsproject.com/
Krita: Digital design - https://krita.org/en/
R: Statistics software - https://www.r-project.org/
pfSense: Firewall software - https://www.pfsense.org/
FreeNAS: File server software - http://www.freenas.org/
Gimp: Digital design - https://www.gimp.org/
OpenSCAD: 3D Model scripting software - http://www.openscad.org/
This list - https://www.reddit.com/r/AskReddit/comments/7x639l/what_free_software_is_so_good_you_cant_believe/
Malwarebytes: Malware protection - https://www.malwarebytes.com/
Unity: Game design software - https://unity3d.com/
https://www.draw.io/: Online diagram software - https://www.draw.io/
Paint.NET: Image design - https://www.getpaint.net/
Draftsight: Free CAD - https://www.3ds.com/products-services/draftsight-cad-software/free-download/
7Zip: File archiving - http://www.7-zip.org/
Plex: Media storage access - https://www.plex.tv/
Libre Office: Document editing suite - https://www.libreoffice.org/
KeePass: Password manager - https://keepass.info/
DaVinci Resolve: Video color correcting/editing - https://www.blackmagicdesign.com/products/davinciresolve/
Inkscape: Vector art software - https://inkscape.org/en/
Google's Apps: Google’s document suite (Docs, Sheets, Gmail, etc) - https://www.google.com/
Duolingo: Language learning - https://www.duolingo.com/
Darktable: Photo workflow a la Lightroom - https://www.darktable.org/
MPD/Mopidy: F/OSS music player daemon - https://www.musicpd.org/ and https://www.mopidy.com/
Doom shareware: A classic game - a 3.5' floppy disk


Over 150 upvotes:

fxSolver/Cymath/Mathway - Math/engineering/chemistry problem solving - https://www.fxsolver.com/ and https://www.cymath.com/ and https://www.mathway.com/Algebra
Recuva: Restores deleted files - https://www.ccleaner.com/recuva
Python: A programming language for quickly writing scripts - https://www.python.org/
foobar2000: Freeware audio player - https://www.foobar2000.org/
Robin Hood: Stock trading app - https://www.robinhood.com/
Flux: Day/Night cycle on monitor color/brightness - https://justgetflux.com/
Fusion 360: Free 3D CAD/CAM design software - https://www.autodesk.com/products/fusion-360/students-teachers-educators
Steam: Platform for game distribution - http://store.steampowered.com/
Shazam: App that tells you what song is playing - https://www.shazam.com/
Audio Router: Sound routing - https://github.com/audiorouterdev/audio-router
Arduino: Open-source electronics platform (software is free) - https://www.arduino.cc/
LMMS: Music studio - https://lmms.io/
Kodi: Entertainment center software - https://kodi.tv/
Git: Version control system - https://git-scm.com/
REAPER: Audio workstation - https://www.reaper.fm/
Greenshot: Print screen tool - http://getgreenshot.org/
Irfanview: Image viewer, editor, organiser and converter - http://www.irfanview.com/
TeamViewer: Remote desktop software - https://www.teamviewer.us/
Firefox: Web browser - https://www.mozilla.org/en-US/firefox/new/
Alarm Clock on Cell Phones: Alarm clock on cell phones - On your cell phone
Wireshark: Open source packet analyze - https://www.wireshark.org/
Disk Fan: Visually see how much space is being used on a volume - http://www.diskspacefan.com/
Beyond Compare: Compare two files/directories: whole tree's and directories - https://www.scootersoftware.com/
VNCServer/Viewer: Remote desktop software - https://www.realvnc.com/en/connect/download/vnc/
Ubuntu: A Linux OS - https://www.ubuntu.com/
WinDirStat: Graphical disk usage analyzer - https://windirstat.net/
Oracle VirtualBox: Open-source hypervisor - https://www.virtualbox.org/
PuTTy: An all in one protocol terminal - https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
Visual Studio Code: Code editor - https://code.visualstudio.com/
Reddit: This website - https://www.reddit.com/


EDIT: WOW! This is by far my largest post ever and my first gold; thank you!!!



EDIT 2: I just updated the list to include any that had over 150 upvotes (with the exception of Reddit at 145, but I thought it deserved an honorable mention!). Thanks again everyone for all the support :)"
software  free  lists  onlinetoolkit  computing  mac  osx  windows  linux  online  web  internet 
february 2018 by robertogreco
Podcast, Nick Seaver: “What Do People Do All Day?” - MIT Comparative Media Studies/Writing
"The algorithmic infrastructures of the internet are made by a weird cast of characters: rock stars, gurus, ninjas, wizards, alchemists, park rangers, gardeners, plumbers, and janitors can all be found sitting at computers in otherwise unremarkable offices, typing. These job titles, sometimes official, sometimes informal, are a striking feature of internet industries. They mark jobs as novel or hip, contrasting starkly with the sedentary screenwork of programming. But is that all they do? In this talk, drawing on several years of fieldwork with the developers of algorithmic music recommenders, Seaver describes how these terms help people make sense of new kinds of jobs and their positions within new infrastructures. They draw analogies that fit into existing prestige hierarchies (rockstars and janitors) or relationships to craft and technique (gardeners and alchemists). They aspire to particular imaginations of mastery (gurus and ninjas). Critics of big data have drawn attention to the importance of metaphors in framing public and commercial understandings of data, its biases and origins. The metaphorical borrowings of role terms serve a similar function, highlighting some features at the expense of others and shaping emerging professions in their image. If we want to make sense of new algorithmic industries, we’ll need to understand how they make sense of themselves.

Nick Seaver is assistant professor of anthropology at Tufts University. His current research examines the cultural life of algorithms for understanding and recommending music. He received a masters from CMS in 2010 for research on the history of the player piano."

[direct link to audio: https://soundcloud.com/mit-cmsw/nick-seaver-what-do-people-do-all-day ]

[via: https://twitter.com/allank_o/status/961382666573561856 ]
nickseaver  2016  work  labor  algorithms  bigdata  music  productivity  automation  care  maintenance  programming  computing  hierarchy  economics  data  datascience 
february 2018 by robertogreco
Peripetatic Humanities - YouTube
"A lecture about Mark Sample's "Notes Toward a Deformed Humanities," featuring ideas by Lisa Rhody, Matt Kirchenbaum, Steve Ramsay, Barthes, Foucault, Bahktin, Brian Croxall, Dene Grigar, Roger Whitson, Adeline Koh, Natalia Cecire, and Ian Bogost & the Oulipo, a band opening for The Carpenters."
kathiinmanberens  performance  humanities  deformity  marksample  lisarhody  mattkirchenbaum  steveramsay  foucault  briancroxall  denegrigar  rogerwhitson  adelinekoh  ianbogost  oulipo  deformance  humptydumpty  repair  mikhailbakhtin  linearity  alinear  procedure  books  defamiliarization  reading  howweread  machines  machinereading  technology  michelfoucault  rolandbarthes  nataliacecire  disruption  digitalhumanities  socialmedia  mobile  phones  making  computation  computing  hacking  nonlinear 
february 2018 by robertogreco
No one’s coming. It’s up to us. – Dan Hon – Medium
"Getting from here to there

This is all very well and good. But what can we do? And more precisely, what “we”? There’s increasing acceptance of the reality that the world we live in is intersectional and we all play different and simultaneous roles in our lives. The society of “we” includes technologists who have a chance of affecting the products and services, it includes customers and users, it includes residents and citizens.

I’ve made this case above, but I feel it’s important enough to make again: at a high level, I believe that we need to:

1. Clearly decide what kind of society we want; and then

2. Design and deliver the technologies that forever get us closer to achieving that desired society.

This work is hard and, arguably, will never be completed. It necessarily involves compromise. Attitudes, beliefs and what’s considered just changes over time.

That said, the above are two high level goals, but what can people do right now? What can we do tactically?

What we can do now

I have two questions that I think can be helpful in guiding our present actions, in whatever capacity we might find ourselves.

For all of us: What would it look like, and how might our societies be different, if technology were better aligned to society’s interests?

At the most general level, we are all members of a society, embedded in existing governing structures. It certainly feels like in the recent past, those governing structures are coming under increasing strain, and part of the blame is being laid at the feet of technology.

One of the most important things we can do collectively is to produce clarity and prioritization where we can. Only by being clearer and more intentional about the kind of society we want and accepting what that means, can our societies and their institutions provide guidance and leadership to technology.

These are questions that cannot and should not be left to technologists alone. Advances in technology mean that encryption is a societal issue. Content moderation and censorship are a societal issue. Ultimately, it should be for governments (of the people, by the people) to set expectations and standards at the societal level, not organizations accountable only to a board of directors and shareholders.

But to do this, our governing institutions will need to evolve and improve. It is easier, and faster, for platforms now to react to changing social mores. For example, platforms are responding in reaction to society’s reaction to “AI-generated fake porn” faster than governing and enforcing institutions.

Prioritizations may necessarily involve compromise, too: the world is not so simple, and we are not so lucky, that it can be easily and always divided into A or B, or good or not-good.

Some of my perspective in this area is reflective of the schism American politics is currently experiencing. In a very real way, America, my adoptive country of residence, is having to grapple with revisiting the idea of what America is for. The same is happening in my country of birth with the decision to leave the European Union.

These are fundamental issues. Technologists, as members of society, have a point of view on them. But in the way that post-enlightenment governing institutions were set up to protect against asymmetric distribution of power, technology leaders must recognize that their platforms are now an undeniable, powerful influence on society.

As a society, we must do the work to have a point of view. What does responsible technology look like?

For technologists: How can we be humane and advance the goals of our society?

As technologists, we can be excited about re-inventing approaches from first principles. We must resist that impulse here, because there are things that we can do now, that we can learn now, from other professions, industries and areas to apply to our own. For example:

* We are better and stronger when we are together than when we are apart. If you’re a technologist, consider this question: what are the pros and cons of unionizing? As the product of a linked network, consider the question: what is gained and who gains from preventing humans from linking up in this way?

* Just as we create design patterns that are best practices, there are also those that represent undesired patterns from our society’s point of view known as dark patterns. We should familiarise ourselves with them and each work to understand why and when they’re used and why their usage is contrary to the ideals of our society.

* We can do a better job of advocating for and doing research to better understand the problems we seek to solve, the context in which those problems exist and the impact of those problems. Only through disciplines like research can we discover in the design phase — instead of in production, when our work can affect millions — negative externalities or unintended consequences that we genuinely and unintentionally may have missed.

* We must compassionately accept the reality that our work has real effects, good and bad. We can wish that bad outcomes don’t happen, but bad outcomes will always happen because life is unpredictable. The question is what we do when bad things happen, and whether and how we take responsibility for those results. For example, Twitter’s leadership must make clear what behaviour it considers acceptable, and do the work to be clear and consistent without dodging the issue.

* In America especially, technologists must face the issue of free speech head-on without avoiding its necessary implications. I suggest that one of the problems culturally American technology companies (i.e., companies that seek to emulate American culture) face can be explained in software terms. To use agile user story terminology, the problem may be due to focusing on a specific requirement (“free speech”) rather than the full user story (“As a user, I need freedom of speech, so that I can pursue life, liberty and happiness”). Free speech is a means to an end, not an end, and accepting that free speech is a means involves the hard work of considering and taking a clear, understandable position as to what ends.

* We have been warned. Academics — in particular, sociologists, philosophers, historians, psychologists and anthropologists — have been warning of issues such as large-scale societal effects for years. Those warnings have, bluntly, been ignored. In the worst cases, those same academics have been accused of not helping to solve the problem. Moving on from the past, is there not something that we technologists can learn? My intuition is that post the 2016 American election, middle-class technologists are now afraid. We’re all in this together. Academics are reaching out, have been reaching out. We have nothing to lose but our own shame.

* Repeat to ourselves: some problems don’t have fully technological solutions. Some problems can’t just be solved by changing infrastructure. Who else might help with a problem? What other approaches might be needed as well?

There’s no one coming. It’s up to us.

My final point is this: no one will tell us or give us permission to do these things. There is no higher organizing power working to put systemic changes in place. There is no top-down way of nudging the arc of technology toward one better aligned with humanity.

It starts with all of us.

Afterword

I’ve been working on the bigger themes behind this talk since …, and an invitation to 2017’s Foo Camp was a good opportunity to try to clarify and improve my thinking so that it could fit into a five minute lightning talk. It also helped that Foo Camp has the kind of (small, hand-picked — again, for good and ill) influential audience who would be a good litmus test for the quality of my argument, and would be instrumental in taking on and spreading the ideas.

In the end, though, I nearly didn’t do this talk at all.

Around 6:15pm on Saturday night, just over an hour before the lightning talks were due to start, after the unconference’s sessions had finished and just before dinner, I burst into tears talking to a friend.

While I won’t break the societal convention of confidentiality that helps an event like Foo Camp be productive, I’ll share this: the world felt too broken.

Specifically, the world felt broken like this: I had the benefit of growing up as a middle-class educated individual (albeit, not white) who believed he could trust that institutions were a) capable and b) would do the right thing. I now live in a country where a) the capability of those institutions has consistently eroded over time, and b) those institutions are now being systematically dismantled, to add insult to injury.

In other words, I was left with the feeling that there’s nothing left but ourselves.

Do you want the poisonous lead removed from your water supply? Your best bet is to try to do it yourself.

Do you want a better school for your children? Your best bet is to start it.

Do you want a policing policy that genuinely rehabilitates rather than punishes? Your best bet is to…

And it’s just. Too. Much.

Over the course of the next few days, I managed to turn my outlook around.

The answer, of course, is that it is too much for one person.

But it isn’t too much for all of us."
danhon  technology  2018  2017  johnperrybarlow  ethics  society  calltoaction  politics  policy  purpose  economics  inequality  internet  web  online  computers  computing  future  design  debchachra  ingridburrington  fredscharmen  maciejceglowski  timcarmody  rachelcoldicutt  stacy-marieishmael  sarahjeong  alexismadrigal  ericmeyer  timmaughan  mimionuoha  jayowens  jayspringett  stacktivism  georginavoss  damienwilliams  rickwebb  sarawachter-boettcher  jamebridle  adamgreenfield  foocamp  timoreilly  kaitlyntiffany  fredturner  tomcarden  blainecook  warrenellis  danhill  cydharrell  jenpahljka  robinray  noraryan  mattwebb  mattjones  danachisnell  heathercamp  farrahbostic  negativeexternalities  collectivism  zeyneptufekci  maciejcegłowski 
february 2018 by robertogreco
HEWN, No. 250
"I wrote a book review this week of Brian Dear’s The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture. My review’s a rumination on how powerful the mythologizing is around tech, around a certain version of the history of technology – “the Silicon Valley narrative,” as I’ve called this elsewhere – so much so that we can hardly imagine that there are other stories to tell, other technologies to build, other practices to adopt, other ways of being, and so on.

I was working on the book review when I heard the news Tuesday evening that the great author Ursula K. Le Guin had passed away, I immediately thought of her essay “The Carrier Bag Theory of Fiction” – her thoughts on storytelling about spears and storytelling about bags and what we might glean from a culture (and a genre) that praises the former and denigrates the latter.
If science fiction is the mythology of modern technology, then its myth is tragic. “Technology,” or “modern science” (using the words as they are usually used, in an unexamined shorthand standing for the “hard” sciences and high technology founded upon continuous economic growth), is a heroic undertaking, Herculean, Promethean, conceived as triumph, hence ultimately as tragedy. The fiction embodying this myth will be, and has been, triumphant (Man conquers earth, space, aliens, death, the future, etc.) and tragic (apocalypse, holocaust, then or now).

If, however, one avoids the linear, progressive, Time’s-(killing)-arrow mode of the Techno-Heroic, and redefines technology and science as primarily cultural carrier bag rather than weapon of domination, one pleasant side effect is that science fiction can be seen as a far less rigid, narrow field, not necessarily Promethean or apocalyptic at all, and in fact less a mythological genre than a realistic one.


The problems of technology – and the problems of the storytelling about the computing industry today, which seems to regularly turn to the worst science fiction for inspiration – is bound up in all this. There’s a strong desire to create, crown, and laud the Hero – a tendency that’s going to end pretty badly if we don’t start thinking about care and community (and carrier bags) and dial back this wretched fascination with weapons, destruction, and disruption.

(Something like this, I wonder: “The Ones Who Walk Away From Omelas” by Ursula K. Le Guin.)

Elsewhere in the history of the future of technology: “Sorry, Alexa Is Not a Feminist,” says Ian Bogost. “The People Who Would Survive Nuclear War” by Alexis Madrigal.

There are many reasons to adore Ursula K. Le Guin. And there are many pieces of her writing, of course, one could point to and insist “you must read this. You must.” For me, the attraction was her grounding in cultural anthropology – I met Le Guin at a California Folklore Society almost 20 years ago when I was a graduate student in Folklore Studies – alongside her willingness to challenge the racism and imperialism and expropriation that the field engendered. It was her fierce criticism of capitalism and her commitment to freedom. I’m willing to fight anyone who tries to insist that Sometimes a Great Notion is the great novel of the Pacific Northwest. Really, you should pick almost any Le Guin novel in its stead – Always Coming Home, perhaps. Or The Word for the World is Forest. She was the most important anarchist of our era, I posted on Facebook when I shared the NYT obituary. It was a jab at another Oregon writer who I bet thinks that’s him. But like Kesey, his notion is all wrong.

Fewer Heroes. Better stories about people. Better worlds for people.

Yours in struggle,
~Audrey"
audreywatters  ursulaleguin  2018  anarchism  sciencefiction  scifi  technology  edtech  progress  storytelling  care  community  caring  folklore  anarchy  computing  siliconvalley  war  aggression  humanism  briandear  myth  heroes  science  modernscience  hardsciences  economics  growth  fiction  tragedy  apocalypse  holocaust  future  conquest  domination  weapons  destruction  disruption 
january 2018 by robertogreco
Bat, Bean, Beam: Inside the Personal Computer
"The inside of a computer looks a bit like a city, its memory banks and I/O devices rising like buildings over the avenues of soldered circuits. But then so do modern cities resemble motherboards, especially at night, when the cars sparkle like point-to-point signal carriers travelling along the grid. It is a well-worn visual metaphor in films and advertising, suggesting that the nerve centres of business and finance have come to resemble the information infrastructure that sustains them. Besides, isn’t the city at the sharp edge of the late capitalist era above all a generator of symbols?

And yet this technology with which we are so intimate, and that more than any other since the invention of writing has extended us, remains mostly opaque to us. Why would anyone bother to learn what digital machines look like on the inside? What difference would it make, when the uses we make of them are so incommensurate with this trivial knowledge?

I like pop-up books, and early pop-up books about the inner workings of computers have become obsolete in an interesting way. They are the last thing we would think to use to demonstrate such knowledge nowadays. They are so prone to jamming or coming apart. They have none of the grace and smoothness that our devices aspire to.

The centrepiece of Sharon Gallagher’s Inside the Personal Computer – An illustrated Introduction in 3 Dimensions (1984) is the machine itself, complete with keyboard and floppy disk drive.

If you push the disk inside its unit and lower the flap, a Roman blind-like mechanism changes the message on the screen from INSERT DISK AND CLOSE DOWN to HELLO: THIS BOOK EXPLAINS WHAT I AM AND HOW I WORK. BY THE END YOU’LL KNOW ME INSIDE OUT.

It’s a neat trick. But the book is at its best when it gets into the basics of how transistors work, or uses wheels to explain how to translate a number into binary code, or a typed character first into ASCII, then into its binary equivalent.

Or simply what happens when you type “M”.

There is the mechanical action that alienates us from the digital word. Writing technologized language but still allowed us to write in our own hand, whereas there is simply no way of typing gracefully. Any M is like any other M, and even if we choose a fancy font the translation from the essential M (ASCII code 77) to the fancy M happens inside the computer and in code. This is not a ‘bad thing’. It’s just the state of the tools of our culture, which require a different kind of practice.
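The translation the book illustrates with wheels is easy to reproduce in a few lines. A minimal sketch (my example, not the book’s) of turning a typed character into its ASCII code and then its binary equivalent:

```python
# Sketch of the character -> ASCII -> binary translation described above.
def char_to_binary(ch):
    code = ord(ch)              # 'M' -> 77, its ASCII code
    return format(code, "08b")  # 77 -> '01001101', eight binary digits

print(ord("M"))             # 77
print(char_to_binary("M"))  # 01001101
```

Every “essential M” the computer handles is this number 77; the fancy font comes later, applied in code.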

The other thing that this book makes clear is that the personal computer hasn’t changed very much at all since 1984. Its component parts are largely unchanged: a motherboard, a central processing unit, RAM and ROM, I/O ports. Floppy disks have become USB sticks, while hard drives – which boasted at the time ‘between 5 and 50 megabytes of information – the equivalent of between 3,000 and 30,000 typewritten pages' – have fewer moving parts. But their function is the same as in the early models. Ditto the monitors, which have become flatter, and in colour. Even the mouse already existed, although back then its name still commanded inverted commas. Today’s computers, then, are a great deal more powerful, but otherwise fairly similar to what they were like three and a half decades ago. What makes them unrecognisable is that they’re all connected. And for that – for the internet – it makes even less sense to ‘take a look inside’. Inside what? Does the internet reside in the telephone exchange, or at the headquarters of ICANN, or where else?

The inside of a computer looks a bit like a city, but it’s an alien city. None of its buildings have doors or windows. The roads are made not of stone or asphalt but of plastic and metal.

The pictures above, by the way, show the guts of mine, which I recently upgraded. It’s what I used to write this blog and everything else from 2010 to June of this year, but I feel no attachment to it – it would be silly to.

There are guides on the web to help you mine your old computer for gold using household chemicals. They come with bold type warnings about how toxic the process is. But in fact computers are both hazardous to manufacture and to dismantle. Waste materials from all the PCs and assorted electronic devices discarded since 1984 have created massively polluted districts and cities in the global south. Places like the Agbogbloshie district of Accra, Ghana, and countless others. Vast dumping sites that are mined for scraps of precious metals as much as for the personal information left onto the hard drives, while leeching chemicals into the local water supply.

This would be a more meaningful inside in which to peer if we want to understand how computers work, and their effect on the world’s societies. One effect of globalisation has been to displace human labour. Not eliminate it, far from it, but rather create the illusion in the most advanced nations that manufacturing jobs have disappeared, and meaningful work consists in either farming the land or providing services. Automation has claimed many of those jobs, of course, but others have simply shifted away from the centres where most of the consumption takes place. This is another way in which the computer has become a mysterious machine: because no-one you know makes them.

Inside the Personal Computer was written 33 years ago in an effort to demystify an object that would soon become a feature in every household, and change everyone’s life. On the last page, it is no longer the book that ‘speaks’ to the reader, like in the first pop up, but the computer itself. Its message is perfectly friendly but in hindsight more than a little eerie."
giovnnitiso  computers  computing  2017  globalization  labor  hardware  geopolitics  economics  pop-upbooks  1984  sharongallagher  writing  technology  digital  physical  icann  ascii  accra  ghana  objects  environment  sustainability  ecology 
november 2017 by robertogreco
Zeynep Tufekci: We're building a dystopia just to make people click on ads | TED Talk | TED.com
"We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response."

[See also: "Machine intelligence makes human morals more important"
https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important

"Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics.""]
zeyneptufekci  machinelearning  ai  artificialintelligence  youtube  facebook  google  amazon  ethics  computing  advertising  politics  behavior  technology  web  online  internet  susceptibility  dystopia  sociology  donaldtrump 
october 2017 by robertogreco
Ellen Ullman: Life in Code: "A Personal History of Technology" | Talks at Google - YouTube
"The last twenty years have brought us the rise of the internet, the development of artificial intelligence, the ubiquity of once unimaginably powerful computers, and the thorough transformation of our economy and society. Through it all, Ellen Ullman lived and worked inside that rising culture of technology, and in Life in Code she tells the continuing story of the changes it wrought with a unique, expert perspective.

When Ellen Ullman moved to San Francisco in the early 1970s and went on to become a computer programmer, she was joining a small, idealistic, and almost exclusively male cadre that aspired to genuinely change the world. In 1997 Ullman wrote Close to the Machine, the now classic and still definitive account of life as a coder at the birth of what would be a sweeping technological, cultural, and financial revolution.

Twenty years later, the story Ullman recounts is neither one of unbridled triumph nor a nostalgic denial of progress. It is necessarily the story of digital technology’s loss of innocence as it entered the cultural mainstream, and it is a personal reckoning with all that has changed, and so much that hasn’t. Life in Code is an essential text toward our understanding of the last twenty years—and the next twenty."
ellenullman  bias  algorithms  2017  technology  sexism  racism  age  ageism  society  exclusion  perspective  families  parenting  mothers  programming  coding  humans  humanism  google  larrypage  discrimination  self-drivingcars  machinelearning  ai  artificialintelligence  literacy  reading  howweread  humanities  education  publicschools  schools  publicgood  libertarianism  siliconvalley  generations  future  pessimism  optimism  hardfun  kevinkelly  computing 
october 2017 by robertogreco
Eyeo 2017 - Robin Sloan on Vimeo
"Robin Sloan at Eyeo 2017
| Writing with the Machine |

Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, write at least one short story together with the audience: all of us, and the machine."
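A caricature of the kind of model the talk builds on: a single step of a character-level recurrent network. This is a toy with random, untrained weights and an invented five-character vocabulary, shown only to make the recurrence concrete; it is not Sloan’s actual tooling.

```python
import numpy as np

# One step of a toy character-level RNN language model.
vocab = list("helo ")
V, H = len(vocab), 8
rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.1, (H, V))  # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))  # hidden-to-hidden: the recurrence
Why = rng.normal(0, 0.1, (V, H))  # hidden-to-output weights

def step(ch_index, h):
    x = np.zeros(V)
    x[ch_index] = 1.0                          # one-hot encode the character
    h = np.tanh(Wxh @ x + Whh @ h)             # hidden state carries context forward
    logits = Why @ h
    p = np.exp(logits) / np.exp(logits).sum()  # softmax over next characters
    return p, h

h = np.zeros(H)
p, h = step(vocab.index("h"), h)  # feed 'h'; p is a next-character distribution
```

Sampling repeatedly from `p` and feeding each choice back in is what produces the “astonishing mimicry” once the weights are trained on a corpus.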
robinsloan  writing  howwewrite  neuralnetworks  computing  eyeo  eyeo2017  2017 
september 2017 by robertogreco
James Ryan on Twitter: "Happenthing On Travel On (1975) is a novel that integrates prose, source code, computer-generated text, and glitch art, to rhetorical effect https://t.co/Ex9zItG3xt"
"Happenthing On Travel On (1975) is a novel that integrates prose, source code, computer-generated text, and glitch art, to rhetorical effect"

[via: https://twitter.com/tealtan/status/892523355794001920 ]

"instead of making exaggerated claims about the creative (or even collaborative) role of the computer, she describes it as an expressive tool"
https://twitter.com/xfoml/status/892169553806901249

"Carole Spearin McCauley should be better recognized as a major innovator in the early period of expressive computing"
https://twitter.com/xfoml/status/892170816623751168
novels  writing  computing  computers  prose  code  coding  computer-generatedtext  text  glitchart  1975  carolespearinmccauley  collaboration  cyborgs 
august 2017 by robertogreco
Doug Engelbart, transcontextualist | Gardner Writes
"I’ve been mulling over this next post for far too long, and the results will be brief and rushed (such bad food, and such small portions!). You have been warned.

The three strands, or claims I’m engaging with (EDIT: I’ve tried to make things clearer and more parallel in the list below):

1. The computer is “just a tool.” This part’s in partial response to the comments on my previous post. [http://www.gardnercampbell.net/blog1/?p=2158 ]

2. Doug Engelbart’s “Augmenting Human Intellect: A Conceptual Framework” [http://www.dougengelbart.org/pubs/augment-3906.html ] is “difficult to understand” or “poorly written.” This one’s a perpetual reply. 🙂 It was most recently triggered by an especially perplexing Twitter exchange shared with me by Jon Becker.

3. Engelbart’s ideas regarding the augmentation of human intellect aim for an inhuman and inhumane parsing of thought and imagination, an “efficiency expert” reduction of the richness of human cognition. This one tries to think about some points raised in the VCU New Media Seminar this fall.

These are the strands. The weave will be loose. (Food, textiles, textures, text.)

1. There is no such thing as “just a tool.” McLuhan wisely notes that tools are not inert things to be used by human beings, but extensions of human capabilities that redefine both the tool and the user. A “tooler” results, or perhaps a “tuser” (pronounced “TOO-zer”). I believe those two words are neologisms but I’ll leave the googling as an exercise for the tuser. The way I used to explain this in my new media classes was to ask students to imagine a hammer lying on the ground and a person standing above the hammer. The person picks up the hammer. What results? The usual answers are something like “a person with a hammer in his or her hand.” I don’t hold much with the elicit-a-wrong-answer-then-spring-the-right-one-on-them school of “Socratic” instruction, but in this case it was irresistible and I tried to make a game of it so folks would feel excited, not tricked. “No!” I would cry. “The result is a HammerHand!” This answer was particularly easy to imagine inside Second Life, where metaphors become real within the irreality of a virtual landscape. In fact, I first came up with the game while leading a class in Second Life–but that’s for another time.

So no “just a tool,” since a HammerHand is something quite different from a hammer or a hand, or a hammer in a hand. It’s one of those small but powerful points that can make one see the designed built world, a world full of builders and designers (i.e., human beings), as something much less inert and “external” than it might otherwise appear. It can also make one feel slightly deranged, perhaps usefully so, when one proceeds through the quotidian details (so-called) of a life full of tasks and taskings.

To complicate matters further, the computer is an unusual tool, a meta-tool, a machine that simulates any other machine, a universal machine with properties unlike any other machine. Earlier in the seminar this semester a sentence popped out of my mouth as we talked about one of the essays–“As We May Think”? I can’t remember now: “This is your brain on brain.” What Papert and Turkle refer to as computers’ “holding power” is not just the addictive cat videos (not that there’s anything wrong with that, I imagine), but something weirdly mindlike and reflective about the computer-human symbiosis. One of my goals continues to be to raise that uncanny holding power into a fuller (and freer) (and more metaphorical) (and more practical in the sense of able-to-be-practiced) mode of awareness so that we can be more mindful of the environment’s potential for good and, yes, for ill. (Some days, it seems to me that the “for ill” part is almost as poorly understood as the “for good” part, pace Morozov.)

George Dyson writes, “The stored-program computer, as conceived by Alan Turing and delivered by John von Neumann, broke the distinction between numbers that mean things and numbers that do things. Our universe would never be the same” (Turing’s Cathedral: The Origins of the Digital Universe). This is a very bold statement. I’ve connected it with everything from the myth of Orpheus to synaesthetic environments like the one @rovinglibrarian shared with me in which one can listen to, and visualize, Wikipedia being edited. Thought vectors in concept space, indeed. The closest analogies I can find are with language itself, particularly the phonetic alphabet.

The larger point is now at the ready: in fullest practice and perhaps even for best results, particularly when it comes to deeper learning, it may well be that nothing is just anything. Bateson describes the moment in which “just a” thing becomes far more than “just a” thing as a “double take.” For Bateson, the double take bears a thrilling and uneasy relationship to the double bind, as well as to some kinds of derangement that are not at all beneficial. (This is the double-edged sword of human intellect, a sword that sometimes has ten edges or more–but I digress.) This double take (the kids call it, or used to call it, “wait what?”) indicates a moment of what Bateson calls “transcontextualism,” a paradoxical level-crossing moment (micro to macro, instance to meta, territory to map, or vice-versa) that initiates or indicates (hard to tell) deeper learning.
It seems that both those whose life is enriched by transcontextual gifts and those who are impoverished by transcontextual confusions are alike in one respect: for them there is always or often a “double take.” A falling leaf, the greeting of a friend, or a “primrose by the river’s brim” is not “just that and nothing more.” Exogenous experience may be framed in the contexts of dream, and internal thought may be projected into the contexts of the external world. And so on. For all this, we seek a partial explanation in learning and experience. (“Double Bind, 1969,” in Steps to an Ecology of Mind, U Chicago Press, 2000, p. 272). (EDIT: I had originally typed “eternal world,” but Bateson writes “external.” It’s an interesting typo, though, so I remember it here.)


It does seem to me, very often, that we do our best to purge our learning environments of opportunities for transcontextual gifts to emerge. This is understandable, given how bad and indeed “unproductive” (by certain lights) the transcontextual confusions can be. No one enjoys the feeling of falling, unless there are environments and guides that can make the falling feel like flying–more matter for another conversation, and a difficult art indeed, and one that like all art has no guarantees (pace Madame Tussaud).

2. So now the second strand, regarding Engelbart’s “Augmenting Human Intellect: A Conceptual Framework.” Much of this essay, it seems to me, is about identifying and fostering transcontextualism (transcontextualization?) as a networked activity in which both the individual and the networked community recognize the potential for “bootstrapping” themselves into greater learning through the kind of level-crossing Bateson imagines (Douglas Hofstadter explores these ideas too, particularly in I Am A Strange Loop and, it appears, in a book Tom Woodward is exploring and brought to my attention yesterday, Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. That title alone makes the recursive point very neatly). So when Engelbart switches modes from engineering-style-specification to the story of bricks-on-pens to the dialogue with “Joe,” he seems to me not to be willful or even prohibitively difficult (though some of the ideas are undeniably complex). He seems to me to be experimenting with transcontextualism as an expressive device, an analytical strategy, and a kind of self-directed learning, a true essay: an attempt:

And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers–whether the problem situation exists for twenty minutes or twenty years.

A list worthy of Walt Whitman, and one that explicitly (and for me, thrillingly) crosses levels and enacts transcontextualism.

Here’s another list, one in which Engelbart tallies the range of “thought kernels” he wants to track in his formulative thinking (one might also say, his “research”):

The “unit records” here, unlike those in the Memex example, are generally scraps of typed or handwritten text on IBM-card-sized edge-notchable cards. These represent little “kernels” of data, thought, fact, consideration, concepts, ideas, worries, etc., that are relevant to a given problem area in my professional life.

Again, the listing enacts a principle: we map a problem space, a sphere of inquiry, along many dimensions–or we should. Those dimensions cross contexts–or they should. To think about this in terms of language for a moment, Engelbart’s idea seems to be that we should track our “kernels” across the indicative, the imperative, the subjunctive, the interrogative. To put it another way, we should be mindful of, and somehow make available for mindful building, many varieties of cognitive activity, including affect (which can be distinguished but not divided from cognition).

3. I don’t think this activity increases efficiency, if efficiency means “getting more done in less time.” (A “cognitive Taylorism,” as one seminarian put it.) More what is always the question. For me, Engelbart’s transcontextual gifts (and I’ll concede that there are likely transcontextual confusions in there too–it’s the price of trancontextualism, clearly) are such that the emphasis lands squarely on effectiveness, which in his essay means more work with positive potential (understanding there’s some disagreement but not total disagreement about… [more]
dougengelbart  transcontextualism  gardnercampbell  2013  gregorybateson  marshallmcluhan  socraticmethod  education  teaching  howweteach  howwelearn  learning  hammerhand  technology  computers  computing  georgedyson  food  textiles  texture  text  understanding  tools  secondlife  seymourpapert  sherryturkle  alanturing  johnvonneumann  doublebind  waltwhitman  memex  taylorism  efficiency  cognition  transcontextualization 
july 2017 by robertogreco
Towards an Internet of Living Things – OpenExplorer Journal – Medium
"Conservation groups are using technology to understand and protect our planet in an entirely new way."

"The Internet of Things (IoT) was an idea that industry always loved. It was simple enough to predict: as computing and sensors become smaller and cheaper, they would be embedded into devices and products that interact with each other and their owners. Fast forward to 2017 and the IoT is in full bloom. Because of the stakes — that every device and machine in your life will be upgraded and harvested for data — companies wasted no time getting in on the action. There are smart thermostats, refrigerators, TVs, cars, and everything else you can imagine.

Industry was first, but they aren’t the only. Now conservationists are taking the lead.

The same chips, sensors (especially cameras) and networks being used to wire up our homes and factories are being deployed by scientists (both professional and amateur) to understand our natural world. It’s an Internet of Living Things. It isn’t just a future of efficiency and convenience. It’s enabling us to ask different questions and understand our world from an entirely new perspective. And just in time. As environmental challenges — everything from coral bleaching to African elephant poaching — continue to mount, this emerging network will serve as the planetary nervous system, giving insight into precisely what actions to take.

It’s a new era of conservation based on real-time data and monitoring. It changes our ecological relationship with the planet by changing the scales at which we can measure — we get both increased granularity and a truly macro view of the entire planet. It also allows us to simultaneously (and unbiasedly) measure the most important part of the equation: ourselves.

Specific and Real-Time

We have had population estimates of species for decades, but things are different now. Before, the estimates came from academic fieldwork; now we’re beginning to rely on vast networks of sensors to monitor and model those same populations in real-time. Take the recent example of Paul Allen’s Domain Awareness System (DAS) that covers broad swaths of West Africa. Here’s an excerpt from the Bloomberg feature:
For years, local rangers have protected wildlife with boots on the ground and sheer determination. Armed guards spend days and nights surrounding elephant herds and horned rhinos, while on the lookout for rogue trespassers.

Allen’s DAS uses technology to go the distance that humans cannot. It relies on three funnels of information: ranger radios, animal tracker tags, and a variety of environmental sensors such as camera traps and satellites. This being the product of the world’s 10th-richest software developer, it sends everything back to a centralized computer system, which projects specific threats onto a map of the monitored region, displayed on large screens in a closed circuit-like security room.

For instance, if a poacher were to break through a geofence sensor set up by a ranger in a highly-trafficked corridor, an icon of a rifle would flag the threat as well as any micro-chipped elephants and radio-carrying rangers in the vicinity.
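That trigger can be sketched as a simple point-in-circle test over great-circle distance. The coordinates and radius below are invented for illustration, and a real system like DAS would use polygon fences and streaming tracker data rather than this toy check:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def breaches_geofence(point, fence_center, radius_km):
    # True once a tracked position falls inside a circular geofence.
    return haversine_km(point[0], point[1], fence_center[0], fence_center[1]) <= radius_km

# Hypothetical 1 km fence around a wildlife corridor
fence = (-1.95, 30.06)
print(breaches_geofence((-1.951, 30.061), fence, 1.0))  # True (inside the fence)
print(breaches_geofence((-1.60, 30.50), fence, 1.0))    # False (far outside)
```

In the DAS description, a `True` here is what would raise the rifle icon and pull nearby rangers and micro-chipped elephants onto the map.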

[video]

These networks are being woven together in ecosystems all over the planet. Old cellphones being turned into rainforest monitoring devices. Drones surveying and processing the health of Koala populations in Australia. The conservation website MongaBay now has a section of their site dedicated to the fast-moving field, which they’ve dubbed WildTech. Professionals and amateurs are gathering in person at events like Make for the Planet and in online communities like Wildlabs.net. It’s game on.

The trend is building momentum because the early results have been so good, especially in terms of resolution. The organization WildMe is using a combination of citizen science (essentially human-powered environmental sensors) and artificial intelligence to identify and monitor individuals in wild populations. As in, meet Struddle the manta ray, number 1264_B201. He’s been sighted ten times over the course of 10 years, mostly around the Maldives.

[image]

The combination of precision and pervasiveness means these are more than just passive data-collecting systems. They’re beyond academic, they’re actionable. We can estimate more accurately — there are 352,271 elephants estimated to remain in Africa — but we’re also reacting when something happens — a poacher broke a geofence 10 minutes ago.

The Big Picture

It’s not just finer detail, either. We’re also getting a better bigger picture than we’ve ever had before. We’re watching on a planetary scale.

Of course, advances in satellites are helping. Planet (the company) has been a major driving force. Over the past few years they’ve launched hundreds of small imaging satellites and have created an earth-imaging constellation that has ambitions of getting an image of every location on earth, every day. Like Google Earth, but near-real-time and with the ability to search along the time horizon. As an example of this in action, Planet was able to catch an illegal gold mining operation in the act in the Peruvian Amazon Rainforest.

[image]

It’s not just satellites, it’s connectivity more broadly. Traditionally analog wildlife monitoring is going online. Ornithology gives us a good example of this. For the past century, the study of birds has relied on amateur networks of enthusiasts — the birders — to contribute data on migration and occurrence studies. (For research that spans long temporal spans or broad geographic areas, citizen science is often the most effective method.) Now, thanks to the ubiquity of mobile phones, birding is digitized and centralized on platforms like eBird and iNaturalist. You can watch the real-time submissions and observations:

[image]

Sped up, we get the visual of species-specific migrations over the course of a year:

[animated GIF]

Human Activity

The network we’re building isn’t all glass, plastic and silicon. It’s people, too. In the case of the birders above, the human component is critical. They’re doing the legwork, getting into the field and pointing the cameras. They’re both the brawn and the (collective) brain of the operation.

Keeping humans in the loop has its benefits. It’s allowing these networks to scale faster. Birders with smartphones and eBird can happen now, whereas a network of passive forest listening devices would take years to build (and would be much more expensive to maintain). It also makes these systems more adept at managing ethical and privacy concerns — people are involved in the decision making at all times. But the biggest benefit of keeping people in the loop is that we can watch them—the humans—too. Because as much as we’re learning about species and ecosystems, we also need to understand how we ourselves are affected by engaging and perceiving the natural world.

We’re getting more precise measurements of species and ecosystems (a better small picture), as well as a better idea of how they’re all linked together (a better big picture). But we’re also getting an accurate sense of ourselves and our impact on and within these systems (a better whole picture).

We’re still at the beginning of measuring the human-nature boundary, but the early results suggest it will help the conservation agenda. A sub-genre of neuroscience called neurobiophilia has emerged to study the effects of nature on our brain function. (Hint: it’s great for your health and well-being.) National Geographic is sending some of their explorers into the field wired up with Fitbits and EEG machines. The emerging academic field of citizen science seems to be equally concerned with the effects of participation as it is with outcomes. So far, the science is indicating that engagement in the data collecting process has measurable effects on the community’s ability to manage different issues. The lesson here: not only is nature good for us, but we can evolve towards a healthier perspective. In a world approaching 9 billion people, this collective self-awareness will be critical.

What’s next

Just as fast as we’re building this network, we’re learning what it’s actually capable of doing. As we’re still laying out the foundation, the network is starting to come alive. The next chapter is applying machine learning to help make sense of the mountains of data that these systems are producing. Want to quickly survey the dispersion of arctic ponds? Here. Want to count and classify the number of fish you’re seeing with your underwater drone? We’re building that. In a broad sense, we’re “closing the loop” as Chris Anderson explained in an Edge.org interview:
If we could measure the world, how would we manage it differently? This is a question we’ve been asking ourselves in the digital realm since the birth of the Internet. Our digital lives — clicks, histories, and cookies — can now be measured beautifully. The feedback loop is complete; it’s called closing the loop. As you know, we can only manage what we can measure. We’re now measuring on-screen activity beautifully, but most of the world is not on screens.

As we get better and better at measuring the world — wearables, Internet of Things, cars, satellites, drones, sensors — we are going to be able to close the loop in industry, agriculture, and the environment. We’re going to start to find out what the consequences of our actions are and, presumably, we’ll take smarter actions as a result. This journey with the Internet that we started more than twenty years ago is now extending to the physical world. Every industry is going to have to ask the same questions: What do we want to measure? What do we do with that data? How can we manage things differently once we have that data? This notion of closing the loop everywhere is perhaps the biggest endeavor of … [more]
davidlang  internetofthings  nature  life  conservation  tracking  2017  data  maps  mapping  sensors  realtime  iot  computing  erth  systems  wildlife  australia  africa  maldives  geofencing  perú  birds  ornithology  birding  migration  geography  inaturalist  ebird  mobile  phones  crowdsourcing  citizenscience  science  classideas  biology 
july 2017 by robertogreco
15 Sorting Algorithms in 6 Minutes - YouTube
"Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.
Sorts random shuffles of integers, with both speed and the number of items adapted to each algorithm's complexity.

The algorithms are: selection sort, insertion sort, quick sort, merge sort, heap sort, radix sort (LSD), radix sort (MSD), std::sort (intro sort), std::stable_sort (adaptive merge sort), shell sort, bubble sort, cocktail shaker sort, gnome sort, bitonic sort and bogo sort (30 seconds of it).

More information on the "Sound of Sorting" at http://panthema.net/2013/sound-of-sorting/ "

[via: https://boingboing.net/2017/06/28/15-sorting-algorithms-visualiz.html ]
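As a taste of what the video animates, here is one of the fifteen listed algorithms, gnome sort, instrumented with the comparison and swap counts that the visualization (and "audibilization") effectively renders as moving bars and tones:

```python
# Gnome sort, one of the algorithms in the video, instrumented with the
# comparison and swap counts that the visualization displays.

def gnome_sort(items):
    a = list(items)
    comparisons = swaps = 0
    i = 0
    while i < len(a):
        if i == 0:
            i += 1
            continue
        comparisons += 1
        if a[i - 1] <= a[i]:
            i += 1                           # in order: step forward
        else:
            a[i - 1], a[i] = a[i], a[i - 1]  # out of order: swap...
            swaps += 1
            i -= 1                           # ...and step back
    return a, comparisons, swaps

result, comparisons, swaps = gnome_sort([5, 2, 4, 1, 3])
# result == [1, 2, 3, 4, 5]; an already-sorted input needs zero swaps.
```

The adaptive speed mentioned in the description makes sense here: gnome sort's swap count grows quadratically on random input, so the animation has to run it on far fewer items than, say, merge sort.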
algorithms  programming  sorting  visualization  sound  video  timobingmann  computing  classideas 
june 2017 by robertogreco
Why are browsers so slow?
"I understand why rendering a complicated layout may be slow. Or why executing a complicated script may be slow. Actually, browsers are rather fast doing these things. If you studied programming and have a rough idea about how many computations are made to render a page, it is surprising the browsers can do it all that fast.

But I am not talking about rendering and scripts. I am talking about everything else. Safari may take a second or two just to open a new blank tab on a 2014 iMac. And with ten or fifteen open tabs it eventually becomes sluggish as hell. Chrome is better, but not much so.

What are they doing? The tabs are already open. Everything has been rendered. Why does it take more than, say, a thousandth of a second to switch between tabs or create a new one? Opening a 20-megapixel photo from disk doesn’t take any noticeable amount of time, it renders instantaneously. Browsers store their stuff in memory. Why can’t they just show the pixels immediately when I ask for them?

You may say: if you are so smart, go create your own browser — and you will win this argument, as I’m definitely not that smart (I don’t think any one person is, by the way).

But I remember the times when we had the amazing Opera browser. In Opera, I could have a hundred open tabs, and it didn’t care, it worked incredibly fast on the hardware of its era, useless today.

You may ask: why would a sane person want a hundred open tabs, how would you even manage that? Well, Opera has had a great UI for that, which nobody has ever matched. Working with a hundred tabs in Opera was much easier back then than working with ten in today’s Safari or Chrome. But that’s a whole different story.

What would you do today if you opened a link and saw a long article which you don’t have time to read right now, but want to read later? You would save a link and close the tab. But when your browser is fast, you just don’t tend to close tabs which you haven’t dealt with. In Opera, I would let tabs stay open for months without having any impact on my machine’s performance.

Wait, but didn’t I restart my computer or the browser sometimes? Of course I did. Unfortunately, modern browsers are so stupid that they reload all the tabs when you restart them. Which takes ages if you have a hundred tabs. Opera was sane: it did not reload a tab unless you asked for it. It just reopened everything from cache. Which took a couple of seconds.

Modern browsers boast their rendering and script execution performance, but that’s not what matters to me as a user. I just don’t understand why programmers spend any time optimising for that while the chrome is laughably slow even by ten-years-old standards.

I want back the pleasure of fast browsing."
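Opera's "reopen without reloading" behavior amounts to lazy restoration: bring a tab back as a cheap placeholder and pay the load cost only when it is activated. A toy sketch of the idea (class and method names are illustrative, not any real browser's internals):

```python
# A sketch of "don't reload until asked": on restart, tabs come back as
# lightweight placeholders, and the expensive page load happens only when a
# tab is first activated. Illustrative names, not a real browser API.

class LazyTab:
    def __init__(self, url, cached=None):
        self.url = url
        self._page = cached   # restored from cache; None means "not loaded"
        self.loads = 0        # how many full loads actually happened

    def activate(self):
        if self._page is None:            # load only on first activation
            self._page = f"<rendered {self.url}>"
            self.loads += 1
        return self._page

# Restoring a hundred tabs is cheap: no network or rendering work yet.
session = [LazyTab(f"https://example.com/article/{n}") for n in range(100)]
page = session[3].activate()              # only this tab pays the load cost
```

Under this policy the cost of a restart is proportional to the number of tabs you actually look at, not the number you keep open, which is precisely the Opera behavior the author misses.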
browsers  performance  computing  tabs  internet  online  web  chrome  opera  2016  ilyabirman 
december 2016 by robertogreco
Earth-friendly EOMA68 Computing Devices | Crowd Supply
[via: https://boingboing.net/2016/08/04/a-freeopen-computer-on-a-card.html ]

"Have you ever had to replace an expensive laptop because it was “unfixable” or the cost of getting it repaired was ridiculously high? That really stings, doesn’t it?

Now imagine if you owned a computing device that you could easily fix yourself and inexpensively upgrade as needed. So, instead of having to shell out for a completely new computer, you could simply spend around US$50 to upgrade — which, by the way, you could easily do in SECONDS, by pushing a button on the side of your device and just popping in a new computer card. Doesn’t that sound like the way it should be?

We think so, too! That’s why we spent several years developing the easy-to-maintain, easy-on-your-pocket, easy-on-Mother Earth, EOMA68 line of computing devices.

Read on, because it gets even better. Now, let’s say you accidentally dropped your laptop and a corner gets cracked. Instead of swearing or weeping over the loss, you simply PRINT OUT REPLACEMENT PARTS with a 3D printer. With the EOMA68 line of computers, you have the freedom to make your own laptop housing parts and can download the CAD files to have replacement PCBs made. Heck, you don’t necessarily have to break anything to have a bit of fun with your laptop: maybe you would like the freedom of being able to CHANGE THE COLOR from silver to aqua to bright orange.

A great deal of thought and ingenuity has been put into the design of the EOMA68 line of computing devices to make them money-saving and convenient. For example, if your monitor fails, you can connect the computer card to your TV set and continue working.

Security is also a major concern. We have taken measures to ensure the integrity of your computer data that exceed anything being sold in North America, Europe (or most parts of the world). And, because we have the complete set of sources, there is an opportunity to weed out the back doors that have been slowly making their way into our computing devices. There is no security without a strong foundation and understanding of what is running on your computing devices. For the first time, the EOMA68 is a standard to work off for building freedom-friendly, privacy-respecting, and secure computing devices.

Lastly, being kind to Mother Earth has to be a priority. It goes without saying that we don’t like seeing electronic goods continue to stack up in landfills around the world, and we know you don’t like it either. We envisage a thriving community developing around the re-use of older computer cards: people using them to set up ultra-low power servers, routers, entertainment centers or just passing them on to a friend.

The EOMA68 Standard
The goal of this project is to introduce the idea of being ethically responsible about both the ecological and the financial resources required to design, manufacture, acquire and maintain our personal computing devices. This campaign therefore introduces the world’s first devices built around the EOMA68 standard, a freely-accessible royalty-free, unencumbered hardware standard formulated and tested over the last five years around the ultra-simple philosophy of “just plug it in: it will work.”

Key Aspects
• Truly Free: Everything is freely licensed
• Modular: Use the same Computer Card across many devices
• Money-saving: Upgrade by replacing Computer Cards, not the whole device
• Long-lived: Designed to be relevant and useful for at least a decade, if not longer
• Ecologically Responsible: Keeps parts out of landfill by repurposing them

Some of you might recognise the form-factor of EOMA68 Computer Cards: it’s the legacy PCMCIA from the 1990s. The EOMA68 standard therefore re-uses legacy PCMCIA cases and housings, because that’s an environmentally responsible thing to do (and saves hugely on development costs).

Read more on the ecological implications of electronics waste in the white paper.

First Offerings
The first of the available devices will be a Micro-Desktop Housing, a 15.6” Laptop Housing, and two types of Computer Cards based on a highly efficient Allwinner A20 Dual-core ARM Cortex A7 processor."
plannedobsolescence  computers  hardware  computing 
august 2016 by robertogreco
Snapchat - 2014 AXS Partner Summit Keynote
"The following keynote was delivered by Evan Spiegel, CEO of Snapchat, at the AXS Partner Summit on January 25, 2014.

I was asked to speak here today on a topic I’m sure you’re all familiar with: sexting in the post-PC era.

[Just Kidding]

I’ve always thought it was a bit odd that this period in our history has been called the “post-personal computer” era – when really it should be called the “more-personal computer” era.

I read a great story yesterday about a man named Mister Macintosh. He was a man designed by Steve Jobs to live inside the Macintosh computer when it launched, 30 years ago from yesterday. He would appear every so often, hidden behind a pull-down menu or popping out from behind an icon – just quickly and infrequently enough that you almost thought he wasn’t real.

Until yesterday, I hadn’t realized that Steve’s idea of tying a man to a computer had happened so early in his career. But, at the time, the Macintosh was forced to ship without Mister Macintosh because the engineers were constrained to only 128 kilobytes of memory. It wasn’t until much later in Steve’s career that he would truly tie man to machine – the launch of the iPhone on June 29, 2007.

In the past, technical constraints meant that computers were typically found in physical locations: the car, the home, the school. The iPhone tied a computer uniquely to a phone number – to YOU.

Not all that long ago, communication was location-dependent. We were either in the same room together, in which case we could talk face-to-face, or we were across the world from each other, in which case I could call your office or send a letter to your home. It is only very recently that we have begun to tie phone numbers to individual identities for the purpose of computation and communication.

I say all this to establish that smartphones are simply the culmination of Steve’s journey to identify man with machine – and bring about the age of the More-Personal Computer.

There are three characteristics of the More-Personal Computer that are particularly relevant to our work at Snapchat:

1) Internet Everywhere

2) Fast + Easy Media Creation

3) Ephemerality

When we first started working on Snapchat in 2011, it was just a toy. In many ways it still is – but to quote Eames, “Toys are not really as innocent as they look. Toys and games are preludes to serious ideas.”

The reason to use a toy doesn’t have to be explained – it’s just fun. But using a toy is a terrific opportunity to learn.

And boy, have we been learning.

Internet Everywhere means that our old conception of the world separated into an online and an offline space is no longer relevant. Traditional social media required that we live experiences in the offline world, record those experiences, and then post them online to recreate the experience and talk about it. For example, I go on vacation, take a bunch of pictures, come back home, pick the good ones, post them online, and talk about them with my friends.

This traditional social media view of identity is actually quite radical: you are the sum of your published experience. Otherwise known as: pics or it didn’t happen.

Or in the case of Instagram: beautiful pics or it didn’t happen AND you’re not cool.

This notion of a profile made a lot of sense in the binary experience of online and offline. It was designed to recreate who I am online so that people could interact with me even if I wasn’t logged on at that particular moment.

Snapchat relies on Internet Everywhere to provide a totally different experience. Snapchat says that we are not the sum of everything we have said or done or experienced or published – we are the result. We are who we are today, right now.

We no longer have to capture the “real world” and recreate it online – we simply live and communicate at the same time.

Communication relies on the creation of media and is constrained by the speed at which that media is created and shared. It takes time to package your emotions, feelings and thoughts into media content like speech, writing, or photography.

Indeed, humans have always used media to understand themselves and share with others. I’ll spare you the Gaelic with this translation of Robert Burns, “Oh would some power the gift give us, to see ourselves as others see us.”

When I heard that quote, I couldn’t help but think of self-portraits. Or for us Millennials: the selfie! Self-portraits help us understand the way that others see us – they represent how we feel, where we are, and what we’re doing. They are arguably the most popular form of self-expression.

In the past, lifelike self-portraits took weeks and millions of brush strokes to complete. In the world of Fast + Easy Media Creation, the selfie is immediate. It represents who we are and how we feel – right now.

And until now, the photographic process was far too slow for conversation. But with Fast + Easy Media Creation we are able to communicate through photos, not just communicate around them like we did on social media. When we start communicating through media we light up. It’s fun.

The selfie makes sense as the fundamental unit of communication on Snapchat because it marks the transition between digital media as self-expression and digital media as communication.

And this brings us to the importance of ephemerality at the core of conversation.

Snapchat discards content to focus on the feeling that content brings to you, not the way that content looks. This is a conservative idea, the natural response to radical transparency that restores integrity and context to conversation.

Snapchat sets expectations around conversation that mirror the expectations we have when we’re talking in-person.

That’s what Snapchat is all about. Talking through content not around it. With friends, not strangers. Identity tied to now, today. Room for growth, emotional risk, expression, mistakes, room for YOU.

The Era of More Personal Computing has provided the technical infrastructure for more personal communication. We feel so fortunate to be a part of this incredible transformation.

Snapchat is a product built from the heart – that is the reason why we are in Los Angeles. I often talk with people about the conflicts between technology companies and content companies – I’ve found that one of the biggest issues is that frequently technology companies view movies, music, and television as INFORMATION. Directors, producers, musicians, and actors view them as feelings, as expression. Not to be searched, sorted, and viewed – but EXPERIENCED.

Snapchat focuses on the experience of conversation – not the transfer of information. We’re thrilled to be a part of this community.

Thank you for inviting me today and thank you for being a part of our journey. Our team looks forward to getting to know all of you."

[Also here: https://es.scribd.com/doc/202195145/2014-AXS-Partner-Summit-Keynote#fullscreen ]

[via: https://twitter.com/smc90/status/427551803475906560 ]
evanspeigel  snapchat  2014  computing  personalcomputing  personalcomputers  stevejobs  ubiquitous  internet  web  online  communication  media  talking  conversation  experience  selfies  photography  ephemerality  mediacreation  creativity  expression  ephemeral 
august 2016 by robertogreco
Remarks at the SASE Panel On The Moral Economy of Tech
"I am only a small minnow in the technology ocean, but since it is my natural habitat, I want to make an effort to describe it to you.

As computer programmers, our formative intellectual experience is working with deterministic systems that have been designed by other human beings. These can be very complex, but the complexity is not the kind we find in the natural world. It is ultimately always tractable. Find the right abstractions, and the puzzle box opens before you.

The feeling of competence, control and delight in discovering a clever twist that solves a difficult problem is what makes being a computer programmer sometimes enjoyable.

But as anyone who's worked with tech people knows, this intellectual background can also lead to arrogance. People who excel at software design become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.

Today we are embarked on a great project to make computers a part of everyday life. As Marc Andreessen memorably frames it, "software is eating the world". And those of us writing the software expect to be greeted as liberators.

Our intentions are simple and clear. First we will instrument, then we will analyze, then we will optimize. And you will thank us.

But the real world is a stubborn place. It is complex in ways that resist abstraction and modeling. It notices and reacts to our attempts to affect it. Nor can we hope to examine it objectively from the outside, any more than we can step out of our own skin.

The connected world we're building may resemble a computer system, but really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it. And it has the same old problems.

Approaching the world as a software problem is a category error that has led us into some terrible habits of mind.

BAD MENTAL HABITS

First, programmers are trained to seek maximal and global solutions. Why solve a specific problem in one place when you can fix the general problem for everybody, and for all time? We don't think of this as hubris, but as a laudable economy of effort. And the startup funding culture of big risk, big reward encourages this grandiose mode of thinking. There is powerful social pressure to avoid incremental change, particularly any change that would require working with people outside tech and treating them as intellectual equals.

Second, treating the world as a software project gives us a rationale for being selfish. The old adage has it that if you are given ten minutes to cut down a tree, you should spend the first five sharpening your axe. We are used to the idea of bootstrapping ourselves into a position of maximum leverage before tackling a problem.

In the real world, this has led to a pathology where the tech sector maximizes its own comfort. You don't have to go far to see this. Hop on BART after the conference and take a look at Oakland, or take a stroll through downtown San Francisco and try to persuade yourself you're in the heart of a boom that has lasted for forty years. You'll see a residential theme park for tech workers, surrounded by areas of poverty and misery that have seen no benefit and ample harm from our presence. We pretend that by maximizing our convenience and productivity, we're hastening the day when we finally make life better for all those other people.

Third, treating the world as software promotes fantasies of control. And the best kind of control is control without responsibility. Our unique position as authors of software used by millions gives us power, but we don't accept that this should make us accountable. We're programmers—who else is going to write the software that runs the world? To put it plainly, we are surprised that people seem to get mad at us for trying to help.

Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.

Of course, people obsessed with control have to eventually confront the fact of their own extinction. The response of the tech world to death has been enthusiastic. We are going to fix it. Google Ventures, for example, is seriously funding research into immortality. Their head VC will call you a "deathist" for pointing out that this is delusional.

Such fantasies of control come with a dark side. Witness the current anxieties about an artificial superintelligence, or Elon Musk's apparently sincere belief that we're living in a simulation. For a computer programmer, that's the ultimate loss of control. Instead of writing the software, you are the software.

We obsess over these fake problems while creating some real ones.

In our attempt to feed the world to software, techies have built the greatest surveillance apparatus the world has ever seen. Unlike earlier efforts, this one is fully mechanized and in a large sense autonomous. Its power is latent, lying in the vast amounts of permanently stored personal data about entire populations.

We started out collecting this information by accident, as part of our project to automate everything, but soon realized that it had economic value. We could use it to make the process self-funding. And so mechanized surveillance has become the economic basis of the modern tech industry.

SURVEILLANCE CAPITALISM

Surveillance capitalism has some of the features of a zero-sum game. The actual value of the data collected is not clear, but it is definitely an advantage to collect more than your rivals do. Because human beings develop an immune response to new forms of tracking and manipulation, the only way to stay successful is to keep finding novel ways to peer into people's private lives. And because much of the surveillance economy is funded by speculators, there is an incentive to try flashy things that will capture the speculators' imagination, and attract their money.

This creates a ratcheting effect where the behavior of ever more people is tracked ever more closely, and the collected information retained, in the hopes that further dollars can be squeezed out of it.

Just like industrialized manufacturing changed the relationship between labor and capital, surveillance capitalism is changing the relationship between private citizens and the entities doing the tracking. Our old ideas about individual privacy and consent no longer hold in a world where personal data is harvested on an industrial scale.

Those who benefit from the death of privacy attempt to frame our subjugation in terms of freedom, just like early factory owners talked about the sanctity of contract law. They insisted that a worker should have the right to agree to anything, from sixteen-hour days to unsafe working conditions, as if factory owners and workers were on an equal footing.

Companies that perform surveillance are attempting the same mental trick. They assert that we freely share our data in return for valuable services. But opting out of surveillance capitalism is like opting out of electricity, or cooked foods—you are free to do it in theory. In practice, it will upend your life.

Many of you had to obtain a US visa to attend this conference. The customs service announced yesterday it wants to start asking people for their social media profiles. Imagine trying to attend your next conference without a LinkedIn profile, and explaining to the American authorities why you are so suspiciously off the grid.

The reality is, opting out of surveillance capitalism means opting out of much of modern life.

We're used to talking about the private and public sector in the real economy, but in the surveillance economy this boundary doesn't exist. Much of the day-to-day work of surveillance is done by telecommunications firms, which have a close relationship with government. The techniques and software of surveillance are freely shared between practitioners on both sides. All of the major players in the surveillance economy cooperate with their own country's intelligence agencies, and are spied on (very effectively) by all the others.

As a technologist, this state of affairs gives me the feeling of living in a forest that is filling up with dry, dead wood. The very personal, very potent information we're gathering about people never goes away, only accumulates. I don't want to see the fire come, but at the same time, I can't figure out a way to persuade other people of the great danger.

So I try to spin scenarios.

THE INEVITABLE LIST OF SCARY SCENARIOS

One of the candidates running for President this year has promised to deport eleven million undocumented immigrants living in the United States, as well as block Muslims from entering the country altogether. Try to imagine this policy enacted using the tools of modern technology. The FBI would subpoena Facebook for information on every user born abroad. Email and phone conversations would be monitored to check for the use of Arabic or Spanish, and sentiment analysis applied to see if the participants sounded "nervous". Social networks, phone metadata, and cell phone tracking would lead police to nests of hiding immigrants.

We could do a really good job deporting people if we put our minds to it.

Or consider the other candidate running for President, the one we consider the sane alternative, who has been a longtime promoter of a system of extrajudicial murder that uses blanket surveillance of cell phone traffic, email, and social media to create lists of people to be tracked and killed with autonomous aircraft. … [more]
culture  ethics  privacy  surveillance  technology  technosolutionism  maciegceglowski  2016  computing  coding  programming  problemsolving  systemsthinking  systems  software  control  power  elonmusk  marcandreessen  siliconvalley  sanfrancisco  oakland  responsibility  machinelearning  googlevntures  vc  capitalism  speculation  consent  labor  economics  poland  dystopia  government  politics  policy  immortality 
june 2016 by robertogreco
How to Write a History of Writing Software - The Atlantic
"Isaac Asimov, John Updike, and John Hersey changed their writing habits to adapt to word processors, according to the first literary historian of the technology."



"There are three things I really like about that story and why I feel like it’s the best candidate for quote-unquote “first.”

One, it defamiliarizes our sense of what word processing is. It’s not a typewriter connected to a TV set. The key thing turns out to be the magnetic storage layer. The other thing I like about it is—there’s a term I use in the book, “suspended encryption.” That captures that dynamic of word processing: You’re writing, but there’s a kind of suspended animation to it. The text remains in its fluid, malleable state, until such time as you commit it to hard copy.

The other thing I like about the story is that it captures that gendered dynamic, that social dimension of writing. It’s not just the author alone at his typewriter. It’s really a collaborative process, there is a gender dimension to it, and there’s something very human about it, I think."



"Meyer: There is a material history you can read from a typewriter. I think you mention the example of Lawrence Rainey, a scholar of T.S. Eliot, being able to decode The Waste Land’s compositional history by looking at his typewriter. And I remember there being anxiety around writing software, and the future of that kind of scholarship. Did writing this history make you buy into the anxiety that we won’t be able to preserve contemporary literary work?

Kirschenbaum: So much of writing now, and that includes literary writing, that includes novels and poetry that will become culturally resonant and important—all of this happens now digitally. And that was something that I was interested in writing about, writing the book. What I found is that there were often very surprising examples of evidence remaining, even from these early days of word processing history.

There’s a kind of paradox at the heart of this. As you know, we’ve all lost files, or had important stuff disappear into the [digital] ether, so there’s all that volatility and fragility we associate with the computer. But it’s also a remarkably resilient medium. And there are some writers who are using the actual track-changes feature or some other kind of versioning system to preserve their own literary manuscripts literally keystroke by keystroke."
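The keystroke-level preservation Kirschenbaum describes reduces to a simple idea: log every edit event and replay a prefix of the log to recover any earlier state of the manuscript. A toy sketch of that idea, not any real word processor's track-changes format:

```python
# A toy keystroke-by-keystroke versioning scheme: every insertion is appended
# to a log, and replaying the first `upto` events rebuilds any earlier state
# of the manuscript. An illustration of the idea, not a real file format.

class VersionedText:
    def __init__(self):
        self.log = []  # (position, inserted_text) events, in order

    def type(self, pos, text):
        self.log.append((pos, text))

    def replay(self, upto=None):
        """Rebuild the manuscript from the first `upto` events."""
        doc = ""
        for pos, text in self.log[:upto]:
            doc = doc[:pos] + text + doc[pos:]
        return doc

ms = VersionedText()
for i, ch in enumerate("draft"):
    ms.type(i, ch)           # five keystrokes
ms.type(5, " one")           # a later revision
assert ms.replay() == "draft one"
assert ms.replay(upto=5) == "draft"   # any earlier state is recoverable
```

This is the "remarkably resilient medium" in miniature: nothing is ever overwritten, so the full compositional history survives alongside the final text.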



"Meyer: You talk a little bit about looking at different paths for word processing after Word. You go into “austerityware,” which is your phrase for software like WriteRoom, which tries to cut down on distractions. Is there any prognosticating you feel like you could do about what’s catching on next?

Kirschenbaum: I do think we’re seeing this interesting return to what instructors of writing for a long time have called free writing, which is just about the uninhibited process of getting stuff out there, doing that sort of initial quick and dirty draft. What’s interesting to me is that there are now particular tools and platforms that are emerging with that precise model of writing in mind.

The one that’s gotten the most attention is the one I write about at the end of the book. At the time I was writing, it was called the Hemingwrite, but now it’s called Freewrite. It’s essentially a very lightweight, very portable keyboard, with a small screen and portable memory. It reminds me of the way a lot of writers talk about their fountain pens—these exquisitely crafted and engineered fine instruments for writing. The Freewrite aspires to bring that same level of craft and deliberation to the fabrication of a purpose-built writing instrument.

So, you know, in a sense, I think we’re going to see more and more of those special-purpose writing platforms. I think writing might move away from the general-purpose computer—we’ll still do lots of writing of all sorts at our regular laptop, but it might be your email, your social media. For dedicated long-form writing, I think there may be more and more alternatives."



Meyer: One thing I love about the book are all the office pictures—the pictures from ’80s offices, especially. There is a sense looking at the images that the desks are retrofitted writers’ desks, rather than the kind of generic surface-with-a-laptop setup that I think a lot of people work at now.

Kirschenbaum: The visual history of all of this is really interesting. One of the hard things was trying to figure out: what is a literary history of word processing, and how do you go about researching it? Maybe by going to the archives, but you also do it by looking at the way in which computers really were represented in the kind of imagery I was looking at earlier. You look at the old office photographs. You see a picture of Amy Tan sitting with a laptop and you try to figure out what kind of laptop it is, and lastly you do it by talking to people. It was the oral histories I did that were the best research for the book.
robinsonmeyer  wordprocessing  software  history  isaacasimov  johnupdike  writing  howewrite  computing  matthewkirschenbaum  lendeighton  ellenorhandley  johnhersey  jerrypournelle  sciencefiction  scifi  thomaspynchon  gorevidal  charlesbukowski  rcrumb  tseliot  lawrencerainey  trackchanges  typing  typewriters  freewrite  writeroom  hamingwrite  evekosofskysedgwick  howwework  howwewrite  amytan 
june 2016 by robertogreco
The Minecraft Generation - The New York Times
"Seth Frey, a postdoctoral fellow in computational social science at Dartmouth College, has studied the behavior of thousands of youths on Minecraft servers, and he argues that their interactions are, essentially, teaching civic literacy. “You’ve got these kids, and they’re creating these worlds, and they think they’re just playing a game, but they have to solve some of the hardest problems facing humanity,” Frey says. “They have to solve the tragedy of the commons.” What’s more, they’re often anonymous teenagers who, studies suggest, are almost 90 percent male (online play attracts far fewer girls and women than single-player mode). That makes them “what I like to think of as possibly the worst human beings around,” Frey adds, only half-jokingly. “So this shouldn’t work. And the fact that this works is astonishing.”

Frey is an admirer of Elinor Ostrom, the Nobel Prize-­winning political economist who analyzed the often-­unexpected ways that everyday people govern themselves and manage resources. He sees a reflection of her work in Minecraft: Running a server becomes a crash course in how to compromise, balance one another’s demands and resolve conflict.

Three years ago, the public library in Darien, Conn., decided to host its own Minecraft server. To play, kids must acquire a library card. More than 900 kids have signed up, according to John Blyberg, the library’s assistant director for innovation and user experience. “The kids are really a community,” he told me. To prevent conflict, the library installed plug-ins that give players a chunk of land in the game that only they can access, unless they explicitly allow someone else to do so. Even so, conflict arises. “I’ll get a call saying, ‘This is Dasher80, and someone has come in and destroyed my house,’ ” Blyberg says. Sometimes library administrators will step in to adjudicate the dispute. But this is increasingly rare, Blyberg says. “Generally, the self-­governing takes over. I’ll log in, and there’ll be 10 or 15 messages, and it’ll start with, ‘So-and-so stole this,’ and each message is more of this,” he says. “And at the end, it’ll be: ‘It’s O.K., we worked it out! Disregard this message!’ ”

Several parents and academics I interviewed think Minecraft servers offer children a crucial “third place” to mature, where they can gather together outside the scrutiny and authority at home and school. Kids have been using social networks like Instagram or Snapchat as a digital third place for some time, but Minecraft imposes different social demands, because kids have to figure out how to respect one another’s virtual space and how to collaborate on real projects.

“We’re increasingly constraining youth’s ability to move through the world around them,” says Barry Joseph, the associate director for digital learning at the American Museum of Natural History. Joseph is in his 40s. When he was young, he and his friends roamed the neighborhood unattended, where they learned to manage themselves socially. Today’s fearful parents often restrict their children’s wanderings, Joseph notes (himself included, he adds). Minecraft serves as a new free-­ranging realm.

Joseph’s son, Akiva, is 9, and before and after school he and his school friend Eliana will meet on a Minecraft server to talk and play. His son, Joseph says, is “at home but still getting to be with a friend using technology, going to a place where they get to use pickaxes and they get to use shovels and they get to do that kind of building. I wonder how much Minecraft is meeting that need — that need that all children have.” In some respects, Minecraft can be as much social network as game.

Just as Minecraft propels kids to master Photoshop or video-­editing, server life often requires kids to acquire complex technical skills. One 13-year-old girl I interviewed, Lea, was a regular on a server called Total Freedom but became annoyed that its administrators weren’t clamping down on griefing. So she asked if she could become an administrator, and the owners said yes.

For a few months, Lea worked as a kind of cop on that beat. A software tool called “command spy” let her observe records of what players had done in the game; she teleported miscreants to a sort of virtual “time out” zone. She was eventually promoted to the next rank — “telnet admin,” which allowed her to log directly into the server via telnet, a command-­line tool often used by professionals to manage servers. Being deeply involved in the social world of Minecraft turned Lea into something rather like a professional systems administrator. “I’m supposed to take charge of anybody who’s breaking the rules,” she told me at the time.

Not everyone has found the online world of Minecraft so hospitable. One afternoon while visiting the offices of Mouse, a nonprofit organization in Manhattan that runs high-tech programs for kids, I spoke with Tori. She’s a quiet, dry-­witted 17-year-old who has been playing Minecraft for two years, mostly in single-­player mode; a recent castle-­building competition with her younger sister prompted some bickering after Tori won. But when she decided to try an online server one day, other players — after discovering she was a girl — spelled out “BITCH” in blocks.

She hasn’t gone back. A group of friends sitting with her in the Mouse offices, all boys, shook their heads in sympathy; they’ve seen this behavior “everywhere,” one said. I have been unable to find solid statistics on how frequently harassment happens in Minecraft. In the broader world of online games, though, there is more evidence: An academic study of online players of Halo, a shoot-’em-up game, found that women were harassed twice as often as men, and in an unscientific poll of 874 self-­described online gamers, 63 percent of women reported “sex-­based taunting, harassment or threats.” Parents are sometimes more fretful than the players; a few told me they didn’t let their daughters play online. Not all girls experience harassment in Minecraft, of course — Lea, for one, told me it has never happened to her — and it is easy to play online without disclosing your gender, age or name. In-game avatars can even be animals.

How long will Minecraft’s popularity endure? It depends very much on Microsoft’s stewardship of the game. Company executives have thus far kept a reasonably light hand on the game; they have left major decisions about the game’s development to Mojang and let the team remain in Sweden. But you can imagine how the game’s rich grass-roots culture might fray. Microsoft could, for example, try to broaden the game’s appeal by making it more user-­friendly — which might attenuate its rich tradition of information-­sharing among fans, who enjoy the opacity and mystery. Or a future update could tilt the game in a direction kids don’t like. (The introduction of a new style of combat this spring led to lively debate on forums — some enjoyed the new layer of strategy; others thought it made Minecraft too much like a typical hack-and-slash game.) Or an altogether new game could emerge, out-­Minecrafting Minecraft.

But for now, its grip is strong. And some are trying to strengthen it further by making it more accessible to lower-­income children. Mimi Ito has found that the kids who acquire real-world skills from the game — learning logic, administering servers, making YouTube channels — tend to be upper middle class. Their parents and after-­school programs help them shift from playing with virtual blocks to, say, writing code. So educators have begun trying to do something similar, bringing Minecraft into the classroom to create lessons on everything from math to history. Many libraries are installing Minecraft on their computers."
2016  clivethompson  education  videogames  games  minecraft  digitalculture  gaming  mimiito  robinsloan  coding  computationalthinking  stem  programming  commandline  ianbogost  walterbenjamin  children  learning  resilience  colinfanning  toys  lego  wood  friedrichfroebel  johnlocke  rebeccamir  mariamontessori  montessori  carltheodorsorensen  guilds  mentoring  mentorship  sloyd  denmark  construction  building  woodcrafting  woodcraft  adventureplaygrounds  material  logic  basic  mojang  microsoft  markuspersson  notch  modding  photoshop  texturepacks  elinorostrom  collaboration  sethfrey  civics  youtube  networkedlearning  digitalliteracy  hacking  computers  screentime  creativity  howwelearn  computing  froebel 
april 2016 by robertogreco
A Neural Network Playground
"Tinker With a Neural Network Right Here in Your Browser.
Don’t Worry, You Can’t Break It. We Promise.

Um, What Is a Neural Network?

It’s a technique for building a computer program that learns from data. It is based very loosely on how we think the human brain works. First, a collection of software “neurons” are created and connected together, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure. For a more detailed introduction to neural networks, Michael Nielsen’s Neural Networks and Deep Learning is a good place to start. For a more technical overview, try Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
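The learning loop described above (strengthen the connections that lead to success, weaken those that lead to failure) can be sketched with a single software neuron. This is an illustrative toy, a perceptron learning logical OR, not the playground's actual code; the function name and dataset are invented for the example:

```python
# A minimal sketch of one software "neuron" whose connection weights
# are strengthened or weakened based on its error on each example.

def train_neuron(data, epochs=1000, lr=0.1):
    """Learn weights for a single neuron: y = step(w1*x1 + w2*x2 + b)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1.0 if (w1 * x1 + w2 * x2 + b) > 0 else 0.0
            err = target - pred      # positive err -> strengthen connections
            w1 += lr * err * x1      # adjust each connection in proportion
            w2 += lr * err * x2      # to the input that fed it
            b += lr * err
    return w1, w2, b

# Toy problem: learn logical OR from four labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_neuron(data)
for (x1, x2), target in data:
    pred = 1.0 if (w1 * x1 + w2 * x2 + b) > 0 else 0.0
    print((x1, x2), int(pred))  # matches the OR targets
```

The playground's networks do the same thing at larger scale, with smoother activations and gradient-based updates instead of this simple perceptron rule.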

This Is Cool, Can I Repurpose It?

Please do! We’ve open sourced it on GitHub with the hope that it can make neural networks a little more accessible and easier to learn. You’re free to use it in any way that follows our Apache License. And if you have any suggestions for additions or changes, please let us know.

We’ve also provided some controls below to enable you to tailor the playground to a specific topic or lesson. Just choose which features you’d like to be visible below, then save this link, or refresh the page.

Show test data
Discretize output
Play button
Learning rate
Activation
Regularization
Regularization rate
Problem type
Which dataset
Ratio train data
Noise level
Batch size
# of hidden layers

What Do All the Colors Mean?

Orange and blue are used throughout the visualization in slightly different ways, but in general orange shows negative values while blue shows positive values.

The data points (represented by small circles) are initially colored orange or blue, which correspond to positive one and negative one.

In the hidden layers, the lines are colored by the weights of the connections between neurons. Blue shows a positive weight, which means the network is using that output of the neuron as given. An orange line shows that the network is assigning a negative weight.

In the output layer, the dots are colored orange or blue depending on their original values. The background color shows what the network is predicting for a particular area. The intensity of the color shows how confident that prediction is.
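To make the positive-weight ("blue") versus negative-weight ("orange") distinction concrete, here is a toy forward pass through one neuron; the numbers and function name are invented for the example, and tanh is used because it is the playground's default activation:

```python
# Illustrative sketch (not the playground's code): how a positive
# versus negative weight changes what a neuron passes forward.
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed through tanh."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(total)

features = [0.8, 0.3]
positive = neuron_output(features, [1.5, 0.5], 0.0)    # uses inputs "as given"
negative = neuron_output(features, [-1.5, -0.5], 0.0)  # inverts their effect
print(positive, negative)  # the two outputs are mirror images
```

Because tanh is an odd function, flipping every weight's sign exactly negates the neuron's output, which is why the visualization can encode weight sign as color alone.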

Credits

This was created by Daniel Smilkov and Shan Carter. This is a continuation of many people’s previous work — most notably Andrej Karpathy’s convnet.js and Chris Olah’s articles about neural networks. Many thanks also to D. Sculley for help with the original idea and to Fernanda Viégas and Martin Wattenberg and the rest of the Big Picture and Google Brain teams for feedback and guidance."
neuralnetworks  data  computing  deeplearning  ai  danielsmilkov  shancarter 
april 2016 by robertogreco
From AI to IA: How AI and architecture created interactivity - YouTube
"The architecture of digital systems isn't just a metaphor. It developed out of a 50-year collaborative relationship between architects and designers, on one side, and technologists in AI, cybernetics, and computer science, on the other. In this talk at the O'Reilly Design Conference in 2016, Molly Steenson traces that history of interaction, tying it to contemporary lessons aimed at designing for a complex world."
mollysteenson  2016  ai  artificialintelligence  douglasenglebart  symbiosis  augmentation  christopheralexander  nicholasnegroponte  richardsaulwurman  architecture  physical  digital  mitmedialab  history  mitarchitecturemachinegroup  technology  compsci  computerscience  cybernetics  interaction  structures  computing  design  complexity  frederickbrooks  computers  interactivity  activity  metaphor  marvinminsky  heuristics  problemsolving  kent  wardcunningham  gangoffour  objectorientedprogramming  apatternlanguage  wikis  agilesoftwaredevelopment  software  patterns  users  digitalspace  interactiondesign  terrywinograd  xeroxparc  petermccolough  medialab 
february 2016 by robertogreco
The Internet Isn't Available in Most Languages - The Atlantic
"Tweet, tuít, or giolc? These were the three iterations of a Gaelic version of the word “tweet” that Twitter’s Irish translators debated in 2012. The agonizing choice between an Anglicized spelling, a Gaelic spelling, or the use of the Gaelic word for “tweeting like a bird” stalled the project for an entire year. Finally, a small group of translators made an executive decision to use the Anglicized spelling of “tweet” with Irish grammar. As of April 2015, Gaelic Twitter is online.

Indigenous and under-resourced cultures face a number of obstacles when establishing their languages on the Internet. English, along with a few other languages like Spanish and French, dominates the web. People who speak these languages often take for granted access to social-media sites with agreed-upon vocabularies, built-in translation services, and basic grammar and spell-checkers.

For Gaelic, a minority language spoken by only two to three percent of the Irish population, it can be difficult to access these digital services. And even languages with millions of speakers can lack the resources needed to make the Internet relevant to daily life.

In September of this year, the Broadband Commission for Digital Development, an organization established five years ago to monitor the growth and use of the Internet around the world, released its 2015 report on the state of broadband. The report argues that representation of the world's languages online remains one of the major challenges in expanding the Internet to reach the four billion people who don’t yet have access.

At the moment, the Internet only has webpages in about five percent of the world's languages. Even national languages like Hindi and Swahili are used on only 0.01 percent of the 10 million most popular websites. The majority of the world’s languages lack an online presence that is actually useful.

Ethnologue, a directory of the world’s living languages, has determined that 1,519 out of the 7,100 languages spoken today are in danger of extinction. For these threatened languages, social-networking sites like Facebook, Twitter, and Instagram, which rely primarily on user-generated content, as well as other digital platforms like Google and Wikipedia, have a chance to contribute to their preservation. While the best way to keep a language alive is to speak it, using one’s native language online could help.

The computational linguistics professor Kevin Scannell devotes his time to developing the technical infrastructure—often using open-source software—that can work for multiple languages. He’s worked with more than 40 languages around the world, his efforts part of a larger struggle to promote under-resourced languages. “[The languages] are not part of the world of the Internet or computing,” he says. “We’re trying to change that mindset by providing the tools for people to use.”

One such under-resourced language is Chichewa, a Bantu language spoken by 12 million people, many of whom are in the country of Malawi. According to Edmond Kachale, a programmer who began developing a basic word processor for the language in 2005 and has been working on translating Google search into Chichewa for the last five years, his language doesn’t have sufficient content online. This makes it difficult for its speakers to compete in a digital, globalized world. “Unless a language improves its visibility in the digital world,” he says, “it is heading for extinction.”

In Malawi, over 60 percent of the population lacks Internet access; but Kachale says that “even if there would be free Internet nation-wide, chances are that [Chichewa speakers] may not use it at all because of the language barrier.” The 2015 Broadband Report bears Kachale’s point out. Using the benchmark of 100,000 Wikipedia pages in any given language, it found that only 53 percent of the world’s population has access to sufficient content in their native language to make use of the Internet relevant.

People who can’t use the Internet risk falling behind economically because they can’t take advantage of e-commerce. In Malawi, Facebook has become a key platform for Internet businesses, even though the site has not yet been translated into Chichewa. Instead, users tack-on a work-around browser plug-in, a quick-fix for languages that don’t have official translations for big social-media sites.

In 2014, Facebook added 20 new languages to its site and launched several more this year, bringing it to more than 80 languages. The site also opens up languages for community-based translation. This option is currently available for about 50 languages, including Aymara, an indigenous language spoken mainly in Bolivia, Peru, and Chile. Though it has approximately 2 million speakers, UNESCO has designated Aymara as “vulnerable.” Beginning in May of 2014, a group of 20 volunteer translators have been chipping away at the 25,000 words used on the site—and the project is on course to be finished by Christmas.

The project is important because it will encourage young people to use their native language. “We are sure when Aymara is available on Facebook as an official language, it will be a source of motivation for Aymara people,” says Elias Quisepe Chura, who manages the translation effort (it happens primarily online, unsurprisingly via a Facebook page).

Ruben Hilari, another member of the translation team, told the Spanish newspaper El Pais, “Aymara is alive. It does not need to be revitalized. It needs to be strengthened and that is exactly what we are doing. If we do not work for our language and culture today, it will be too late tomorrow to remember who we are, and we will always feel insecure about our identity.”

Despite its reputation as the so-called information superhighway, the Internet is only legible to speakers of a few languages; this limit to the web’s accessibility proves that it can be just as insular and discriminatory as the modern world at large."
internet  languages  language  linguistics  2015  translation  insularity  web  online  gaelic  hindi  swahili  kevinscannell  via:unthinkingly  katherineschwab  edmondkachele  accessibility  enlgish  aymara  rubenhilari  eliasquisepechura  bolivia  perú  chile  indigenous  indigeneity  chichewa  bantu  google  kevinsannell  twitter  facebook  instagram  software  computation  computing  inclusivity 
january 2016 by robertogreco
The Jacob’s Ladder of coding — Medium
"Anecdotes and questions about climbing up and down the ladder of abstraction: Atari, ARM, demoscene, education, creative coding, community, seeking lightness, enlightenment & strange languages"



"With only an hour or two of computer time a week, our learning and progress was largely down to intensive trial & error, daily homework and learning to code and debug with only pencil and paper, whilst trying to be the machine yourself: Playing every step through in our heads (and on paper) over and over until we were confident, the code did as we’d expect, yet, often still failing because of wrong intuitions. Learning this analytical thinking is essential to successful debugging, even today, specifically in languages / environments where no GUI debugger is available. In the late 90s, John Maeda did similar exercises at MIT Media Lab, with students role-playing different parts of a CPU or a whole computer executing a simple process. Later at college, my own CS prof too would often quote Alan Perlis:
“To understand a program you must become both the machine and the program.” — Alan Perlis

Initially we’d only be using the machine largely to just verify our ideas prepared at home (spending the majority of the time typing in/correcting numbers from paper). Through this monastic style of working, we also learned the importance of having the right tools and balance of skills within the group and were responsible to create them ourselves in order to achieve our vision. This important lesson stayed with me throughout (maybe even became) my career so far… Most projects I worked on, especially in the past 15 years, almost exclusively relied on custom-made tooling, which was as much part of the final outcome as the main deliverable to clients. Often times it even was the main deliverable. On the other hand, I’ve also had to learn the hard way that being a largely self-sufficient generalist often is undesired in the modern workplace, which frequently still encourages narrow expertise above all else…

After a few months of convincing my parents to invest all of their saved up and invaluable West-german money to purchase a piece of “Power Without the Price” (a much beloved Atari 800XL) a year before the Wall came down in Berlin, I finally gained daily access to a computer, but was still in a similar situation as before: No more hard west money left to buy a tape nor disk drive from the Intershop, I wasn’t able to save any work (apart from creating paper copies) and so the Atari was largely kept switched on until November 10, 1989, the day after the Berlin Wall was opened and I could buy an XC-12 tape recorder. I too had to choose whether to go the usual route of working with the built-in BASIC language or stick with what I’d learned/taught myself so far, Assembly… In hindsight, am glad I chose the latter, since it proved to be far more useful and transportable knowledge, even today!"
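The "be the machine" exercise described in the excerpt above, tracing every step on paper and writing down the machine's state after each instruction, can be sketched in miniature. The tiny instruction set here is invented for illustration (it is not real Atari 800XL assembly), but the discipline it models is the same:

```python
# A sketch of hand-tracing a program: execute a tiny made-up
# instruction set on a one-register machine, recording the state
# after each step, exactly as you would in a paper margin.

def run(program):
    """Execute a list of (op, arg) pairs; return final value and trace."""
    acc, trace = 0, []
    for op, arg in program:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "SUB":
            acc -= arg
        trace.append((op, arg, acc))  # the state you'd write down by hand
    return acc, trace

total, trace = run([("LOAD", 5), ("ADD", 3), ("SUB", 2)])
for step in trace:
    print(step)
print(total)  # 6
```

Working through such a trace by hand, before ever touching the machine, is the analytical habit the author credits for his later debugging skill.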



"Lesson learned: Language skills, natural and coded ones, are gateways, opening paths not just for more expression, but also to paths in life.

As is the case today, so it was back then: People tend to organize around specific technological interests, languages and platforms and then stick with them for a long time, for better or worse. Over the years I’ve been part of many such tool-based communities (chronologically: Asm, C, TurboPascal, Director, JS, Flash, Java, Processing, Clojure) and have somewhat turned into a nomad, not being able to ever find a true home in most of them. This might sound judgemental and negative, but really isn’t meant to, and these travels through the land of languages and toolkits have given me much food for thought. Having slowly climbed up the ladder of abstraction and spent many years both with low & high level languages, has shown me how much each side of the spectrum can inform and learn from the other (and they really should do more so!). It’s an experience I can highly recommend to anyone attempting to better understand these machines some of us are working with for many hours a day and which impact so much of all our lives. So am extremely grateful to all the kind souls & learning encountered on the way!"



"In the vastly larger open source creative computing demographic of today, the by far biggest groups are tight-knit communities around individual frameworks and languages. There is much these platforms have achieved in terms of output, increasing overall code literacy and turning thousands of people from mere computer users into authors. This is a feat not be underestimated and a Good Thing™! Yet my issue with this siloed general state of affairs is that, apart from a few notable exceptions (especially the more recent arrivals), there’s unfortunately a) not much cross-fertilizing with fundamentally different and/or new ideas in computing going on and b) over time only incremental progress is happening, business as usual, rather than a will to continuously challenge core assumptions among these largest communities about how we talk to machines and how we can do so better. I find it truly sad that many of these popular frameworks rely only on the same old imperative programming language family, philosophy and process, which has been pre-dominant and largely unchanged for the past 30+ years, and their communities also happily avoid or actively reject alternative solutions, which might require fundamental changes to their tools, but which actually could be more suitable and/or powerful to their aims and reach. Some of these platforms have become and act as institutions in their own right and as such also tend to espouse an inward looking approach & philosophy to further cement their status (as owners or pillars?) in their field. This often includes a no-skills-neccessary, we-cater-all-problems promise to their new users, with each community re-inventing the same old wheels in their own image along the way. It’s Not-Invented-Here on a community level: A reliance on insular support ecosystems, libraries & tooling is typical, reducing overall code re-use (at least between communities sharing the same underlying language) and increasing fragmentation. 
More often than not these platforms equate simplicity with ease (go watch Rich Hickey taking this argument eloquently apart!). The popular prioritization of no pre-requisite knowledge, super shallow learning curves and quick results eventually becomes the main obstacle to later achieve systemic changes, not just in these tools themselves, but also for (creative) coding as discipline at large. Bloatware emerges. Please do forgive if that all sounds harsh, but I simply do believe we can do better!

Every time I talk with others about this topic, I can’t help but think about Snow Crash’s idea of “Language is a virus”. I sometimes do wonder what makes us modern humans, especially those working with computing technology, so fundamentalist and brand-loyal to these often flawed platforms we happen to use? Is it really that we believe there’s no better way? Are we really always only pressed for time? Are we mostly content with Good Enough? Are we just doing what everyone else seems to be doing? Is it status anxiety, a feeling we have to use X to make a living? Are we afraid of unlearning? Is it that learning tech/coding is (still) too hard, too much of an effort, which can only be justified a few times per lifetime? For people who have been in the game long enough and maybe made a name for themselves in their community, is it pride, sentimentality or fear of becoming a complete beginner again? Is it maybe a sign that the way we teach computing and focus on concrete tools too early in order to obtain quick, unrealistically complex results, rather than fundamental (“boring”) knowledge, which is somewhat flawed? Is it our addiction to largely focus on things we can document/celebrate every minor learning step as an achievement in public? This is no stab at educators — much of this systemic behavior is driven by the sheer explosion of (too often similar) choices, demands made by students and policy makers. But I do think we should ask ourselves these questions more often."

[author's tweet: https://twitter.com/toxi/status/676578816572067840 ]
coding  via:tealtan  2015  abstraction  demoscene  education  creativecoding  math  mathematics  howwelearn  typography  design  dennocoil  alanperlis  johnmaeda  criticalthinking  analyticalthinking  basic  programming  assembly  hexcode  georgedyson  computing  computers  atari  amiga  commodore  sinclair  identity  opensource  insularity  simplicity  ease  language  languages  community  communities  processing  flexibility  unschooling  deschooling  pedagogy  teaching  howweteach  understanding  bottomup  topdown  karstenschmidt 
december 2015 by robertogreco
The Digital Disparities Facing Lower-Income Teenagers - The New York Times
"The study found some overarching themes. Teens and tweens, for instance, generally reported spending much more time watching television than they did on social media.

The study also analyzed the differences in children’s media use based on entertainment prototypes — such as mobile gamers, social networkers and heavy consumers of television and music — and by race, gender, household income and parents’ level of education.

The stark differences in daily activities among teenage and tween subgroups are likely to spur further research into the implications of such divergent media access and use.

“The reason that we need to be concerned about disparities here is that technology and media are now part and parcel of growing up in America,” said Ellen Wartella, the director of the Center on Media and Human Development at Northwestern University in Evanston, Ill. A professor of communication, she has conducted research on children, media and race.

“When there are disparities, even if it’s a question of how smart your phone is, teens and tweens may not have access to what they need — not just for school, but for other parts of their lives as well,” Dr. Wartella said. “They aren’t able to participate in the way that more wealthy teens and tweens are able to.”

The study also found that, while black teenagers and teenagers in lower-income households had fewer computers at home, those who did have access to smartphones and tablets typically spent more time using them each day than their white or higher-income peers."
us  inequality  digitaldivide  2015  teens  youth  socialmedia  media  television  tv  smartphones  laptops  computing  internet  web  online  ellenwartella 
november 2015 by robertogreco
Is It Time to Give Up on Computers in Schools?
"This is a version of the talk I gave at ISTE today on a panel titled "Is It Time to Give Up on Computers in Schools?" with Gary Stager, Will Richardson, Martin Levins, David Thornburg, and Wayne D'Orio. It was pretty damn fun.

Take one step into that massive shit-show called the Expo Hall and it’s hard not to agree: “yes, it is time to give up on computers in schools.”

Perhaps, once upon a time, we could believe ed-tech would change things. But as Seymour Papert noted in The Children’s Machine,
Little by little the subversive features of the computer were eroded away: … the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation.

I think we were naive when we ever thought otherwise.

Sure, there are subversive features, but I think the computers also involve neoliberalism, imperialism, libertarianism, and environmental destruction. They now involve high stakes investment by the global 1% – it’s going to be a $60 billion market by 2018, we’re told. Computers are implicated in the systematic de-funding and dismantling of a public school system and a devaluation of human labor. They involve the consolidation of corporate and governmental power. They involve scientific management. They are designed by white men for white men. They re-inscribe inequality.

And so I think it’s time now to recognize that if we want education that is more just and more equitable and more sustainable, that we need to get the ideologies that are hardwired into computers out of the classroom.

In the early days of educational computing, it was often up to innovative, progressive teachers to put a personal computer in their classroom, even paying for the computer out of their own pocket. These were days of experimentation, and as Seymour teaches us, a re-imagining of what these powerful machines could enable students to do.

And then came the network and, again, the mainframe.

You’ll often hear the Internet hailed as one of the greatest inventions of mankind – something that connects us all and that has, thanks to the World Wide Web, enabled the publishing and sharing of ideas at an unprecedented pace and scale.

What “the network” introduced in educational technology was also a more centralized control of computers. No longer was it up to the individual teacher to have a computer in her classroom. It was up to the district, the Central Office, IT. The sorts of hardware and software that were purchased had to meet those needs – the needs and the desire of the administration, not the needs and the desires of innovative educators, and certainly not the needs and desires of students.

The mainframe never went away. And now, virtualized, we call it “the cloud.”

Computers and mainframes and networks are points of control. They are tools of surveillance. Databases and data are how we are disciplined and punished. Quite to the contrary of Seymour’s hopes that computers will liberate learners, this will be how we are monitored and managed. Teachers. Students. Principals. Citizens. All of us.

If we look at the history of computers, we shouldn’t be that surprised. The computers’ origins are as weapons of war: Alan Turing, Bletchley Park, code-breakers and cryptography. IBM in Germany and its development of machines and databases that it sold to the Nazis in order to efficiently collect the identity and whereabouts of Jews.

The latter should give us great pause as we tout programs and policies that collect massive amounts of data – “big data.” The algorithms that computers facilitate drive more and more of our lives. We live in what law professor Frank Pasquale calls “the black box society.” We are tracked by technology; we are tracked by companies; we are tracked by our employers; we are tracked by the government, and “we have no clear idea of just how far much of this information can travel, how it is used, or its consequences.” When we compel the use of ed-tech, we are doing this to our students.

Our access to information is constrained by these algorithms. Our choices, our students’ choices are constrained by these algorithms – and we do not even recognize it, let alone challenge it.

We have convinced ourselves, for example, that we can trust Google with its mission: “To organize the world’s information and make it universally accessible and useful.” I call “bullshit.”

Google is at the heart of two things that computer-using educators should care deeply and think much more critically about: the collection of massive amounts of our personal data and the control over our access to knowledge.

Neither of these are neutral. Again, these are driven by ideology and by algorithms.

You’ll hear the ed-tech industry gleefully call this “personalization.” More data collection and analysis, they contend, will mean that the software bends to the student. To the contrary, as Seymour pointed out long ago, we find the computer programming the child. If we do not unpack the ideology, if the algorithms are all black-boxed, then “personalization” will be discriminatory. As Tressie McMillan Cottom has argued, “a ‘personalized’ platform can never be democratizing when the platform operates in a society defined by inequalities.”

If we want schools to be democratizing, then we need to stop and consider how computers are likely to entrench the very opposite. Unless we stop them.

In the 1960s, the punchcard – an older piece of “ed-tech” – had become a symbol of our dehumanization by computers and by a system – an educational system – that was inflexible, impersonal. We were being reduced to numbers. We were becoming alienated. These new machines were increasing the efficiency of a system that was setting us up for a life of drudgery and that were sending us off to war. We could not be trusted with our data or with our freedoms or with the machines themselves, we were told, as the punchcards cautioned: “Do not fold, spindle, or mutilate.”

Students fought back.

Let me quote here from Mario Savio, speaking on the stairs of Sproul Hall at UC Berkeley in 1964 – over fifty years ago, yes, but I think still one of the most relevant messages for us as we consider the state and the ideology of education technology:
We’re human beings!

There is a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part; you can’t even passively take part, and you’ve got to put your bodies upon the gears and upon the wheels, upon the levers, upon all the apparatus, and you’ve got to make it stop. And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!

We’ve upgraded from punchcards to iPads. But underneath, a dangerous ideology – a reduction to 1s and 0s – remains. And so we need to stop this ed-tech machine."
edtech  education  audreywatters  bias  mariosavio  politics  schools  learning  tressuemcmillancottom  algorithms  seymourpapert  personalization  data  security  privacy  howwteach  howwelearn  subversion  computers  computing  lms  neoliberalism  imperialism  environment  labor  publicschools  funding  networks  cloud  bigdata  google  history 
july 2015 by robertogreco
What I learned by asking 100 school kids about the future of work
"In May this year I gave a different style of presentation than the ones I normally do, at an Ignite event in San Francisco. As an analyst and someone who gets excited by telling stories about the possibilities of technology I do a fair bit of research and digging around, but I needed something different. Simply regurgitating, meme-fashion, the same facts and numbers that we read every week wasn’t enough.

So I went back to school. Literally.

I approached the head teacher of a local primary school and asked for her help: I needed to find out from the kids what they expect their future to look like when they enter the business world. She graciously agreed and roped in the other teachers to coordinate. Bear in mind we’re talking a vast age range here, from 5 to 11-year-olds, girls and boys. I really didn’t have any expectations, save for feedback like ‘flying cars’ and ‘moon based offices’, like a cross between the Jetsons and Star Trek.

What I got back was so grounded and well thought out that it’s made me challenge how we approach our own thinking about the future.

I, robot
Kids love robots but there wasn’t a hint of Optimus Prime anywhere. They wanted helpers in the office, assistants to help them achieve their work in a more productive way. They expect things like virtual assistants that we are learning to live with in Cortana and Google to be completely woven into the fabric of business, ambiently aware of our needs and not explicitly called into action. They understood that robots have a purpose and they should be part of the process, not extraneous to it.

What, no PC and Pa$$w0rd5?
There was no mention of the humble PC. In fact, if it has a surface, kids expect to be able to interact with it, be it a table, wall, or window. Everything was fair game. Virtual reality and holography were key to how kids today expect to conduct business tomorrow. Not only that, the notion of passworded security didn’t even feature. Everything was passively tied to a user’s biometrics – fingerprint, facial or voice recognition; security and privacy were again an ambient process that wasn’t explicitly invoked.

Children value the idea of privacy long before they understand the full implications of it.

I don’t want e-mail
What child does? These were no exception. They valued multi-video collaboration and mobile working above the traditional methods we use today. Kids collaborate using Google Hangouts and Skype to complete their homework assignments — at the age of 11. Yet in an office environment we still find it rare to conduct business this way. Kids won’t find it rare when they enter the business world; they’ll expect it as a minimum.

Change the emotion of work
Perhaps the best conclusion from the entries was that children expect work to have an emotional connection, not to be a hard, grey environment they spend the vast majority of their lives in. The whole office is expected to be crowdshaped according to the moods of the workers, in real time. Colours, visuals, smells, sounds.

It’s not a bad idea, and it beats the ubiquitous bean bag and pinball machine afterthought some companies subscribe to.

Another brick in the wall?
This became the title of the presentation, which you can find on my Slideshare account; you can also view the Ignite talk from the MemSQL HQ. After reading 100+ golden nuggets of inspiration, four things became clear:

1. We are ignoring a key generation in understanding what they want us to build for the future, and not everything they suggest is far-fetched. Millennials are the wrong people to be talking to if we want to stay ahead of the game.

2. We are guilty of not taking the business and IT world into the classroom earlier. We surround ourselves in stats and scores to affirm our position around STEM education, genders in classrooms, and wait for the policymakers to change things. We should be the ones to change things.

3. We need more -eers. There has been an overt focus on developers. Indeed most curriculums are looking into computer science and programming to be part of the education system because of the shortfall in skills predicted. But we need to think broader than this. We need more engineers, imagineers, creationeers. People who can create, build and program. If we truly are entering an age where 50 billion devices will connect and talk across the Internet then who is going to build and maintain them all? A developer can’t, but an engineer can.

4. It was the girls who gave the most detailed feedback in the entries I received. Stop creating pie charts about girls leaving STEM subjects and just talk to them.

Kids want to learn about business, IT, and STEM subjects faster than we are prepared to keep up with because we’re so preoccupied about creating a future we want to see, but will never inhabit by the time it’s built.

So, my advice. This year go back to school. Search out the golden nuggets that are hidden in the classrooms across your countries. Talk to the real generation we should be building a future for.

You might learn something."
2015  theopriestley  children  wrok  future  via:willrichardson  education  email  robots  automation  work  labor  fulfillment  collaboration  videoconferencing  computing  technology 
july 2015 by robertogreco
Is Translation an Art or a Math Problem? - NYTimes.com
"One Enlightenment aspiration that the science-fiction industry has long taken for granted, as a necessary intergalactic conceit, is the universal translator. In a 1967 episode of “Star Trek,” Mr. Spock assembles such a device from spare parts lying around the ship. An elongated chrome cylinder with blinking red-and-green indicator lights, it resembles a retracted light saber; Captain Kirk explains how it works with an off-the-cuff disquisition on the principles of Chomsky’s “universal grammar,” and they walk outside to the desert-island planet of Gamma Canaris N, where they’re being held hostage by an alien. The alien, whom they call The Companion, materializes as a fraction of sparkling cloud. It looks like an orange Christmas tree made of vaporized mortadella. Kirk grips the translator and addresses their kidnapper in a slow, patronizing, put-down-the-gun tone. The all-powerful Companion is astonished.

“My thoughts,” she says with some confusion, “you can hear them.”

The exchange emphasizes the utopian ambition that has long motivated universal translation. The Companion might be an ion fog with coruscating globules of viscera, a cluster of chunky meat-parts suspended in aspic, but once Kirk has established communication, the first thing he does is teach her to understand love. It is a dream that harks back to Genesis, of a common tongue that perfectly maps thought to world. In Scripture, this allowed for a humanity so well coordinated, so alike in its understanding, that all the world’s subcontractors could agree on a time to build a tower to the heavens. Since Babel, though, even the smallest construction projects are plagued by terrible delays.

Translation is possible, and yet we are still bedeviled by conflict. This fallen state of affairs is often attributed to the translators, who must not be doing a properly faithful job. The most succinct expression of this suspicion is “traduttore, traditore,” a common Italian saying that’s really an argument masked as a proverb. It means, literally, “translator, traitor,” but even though that is semantically on target, it doesn’t match the syllabic harmoniousness of the original, and thus proves the impossibility it asserts.

Translation promises unity but entails betrayal. In his wonderful survey of the history and practice of translation, “Is That a Fish in Your Ear?” the translator David Bellos explains that the very idea of “infidelity” has roots in the Ottoman Empire. The sultans and the members of their court refused to learn the languages of the infidels, so the task of expediting communication with Europe devolved upon a hereditary caste of translators, the Phanariots. They were Greeks with Venetian citizenship residing in Istanbul. European diplomats never liked working with them, because their loyalty was not to the intent of the foreign original but to the sultan’s preference. (Ottoman Turkish apparently had no idiom about not killing the messenger, so their work was a matter of life or death.) We retain this lingering association of translation with treachery."



"One computational linguist said, with a knowing leer, that there is a reason we have more than 20 translations in English of “Don Quixote.” It must be because nobody ever gets it right. If the translators can’t even make up their own minds about what it means to be “faithful” or “accurate,” what’s the point of worrying too much about it? Let’s just get rid of the whole antiquated fidelity concept. All the Sancho Panzas, all the human translators and all the computational linguists are in the same leaky boat, but the machinists are bailing out the water while the humans embroider monograms on the sails.

But like many engineers, the computational linguists are so committed to the power and craftsmanship of their means that they tend to lose perspective on whose ends they are advancing. The problem with human translators, from the time of the Phanariots, is that there is always the possibility that they might be serving the ends of their bosses rather than the intent of the text itself. But at least a human translator asks the very questions — What purpose is this text designed to serve? What aims are encoded in this language? — that a machine regards as entirely beside the point.

The problem is that all texts have some purpose in mind, and what a good human translator does is pay attention to how the means serve the end — how the “style” exists in relationship to “the gist.” The oddity is that belief in the existence of an isolated “gist” often obscures the interests at the heart of translation. Toward the end of the marathon, I asked a participant why he chose to put his computer-science background to the service of translation. He mentioned, as many of them did, a desire to develop tools that would be helpful in earthquakes or war. Beyond that, he said, he hoped to help ameliorate the time lag in the proliferation of international news. I asked him what he meant.

“There was, for example, a huge delay with the Germanwings crash.”

It wasn’t the example I was expecting. “But what was that delay, like 10 or 15 minutes?”

He cocked his head. “That’s a huge delay if you’re a trader.”

I didn’t say anything informational in words, but my body or face must have communicated a response the engineer mistranslated as ignorance. “It’s called cross-lingual arbitrage. If there’s a mine collapse in Spanish, you want to make a trade as quickly as possible."
via:tealtan  translation  language  languages  words  davidbellos  technology  2015  engineers  computing  gideonlewis-kraus 
june 2015 by robertogreco
Welcome to Project Jacquard - YouTube
"Project Jacquard is a new system for weaving technology into fabric, transforming everyday objects, like clothes, into interactive surfaces. Project Jacquard will allow designers and developers to build connected, touch-sensitive textiles into their own products. This is just the beginning, and we're very excited to see what people will do with it."
textiles  computing  touch  projectjacquard 
may 2015 by robertogreco
Bat, Bean, Beam: The broken book
"The book weighs only 170 grams but has a potentially very large – although not infinite – number of pages. It is made of plastic and rubber, and a translucent sheet at the front that acts like a window for reading its contents.

The book is portable, durable and robust, but not robust enough that you should sit on it. Which unfortunately is what I did with mine. It bent under my weight and something inside made a crunching sound. When I looked again, the black case of plastic and rubber looked intact but I could tell that the book had been damaged. The bottom half of the page I was reading when I put the book down was badly smudged, as if the text had been drawn in pencil and someone had hastily rubbed it with an eraser. Otherwise, the book was fine. I could still turn the pages and view the top half of each one.

Given the very low energy consumption and lack of significant moving parts, I could preserve the book in this state for quite a long time, there to uselessly collect the top half of a few dozen books and many more articles and essays.

What I chose to do instead was open the book and look inside. This proved a surprisingly difficult task, as the back rubber panel of my damaged Amazon Kindle was held in place by eight very tight clips and took a lot of prying. I wasn’t just driven by curiosity: seeing as I possess an older keyboard model with the screen still intact, I thought I could carry out a little transplant, on the off chance that parts were compatible. I found websites dedicated to replacing a screen on those older models, but nothing for my relatively more recent Kindle 5.

Once I finally removed the back cover, the book looked like this.

[…]

Those marks are a concrete reminder that there is something very particular about these book machines.

Words can be rearranged on a computer screen at will, but they remain virtual, and when I turn the screen off they vanish as if they had never existed. To bring them into the analogue world of inert objects, I need to print them on paper, and then they behave in every way like the old technology. Electronic books straddle those two worlds, typesetting at each turn the ordinary page of a book, only on a special plastic instead of paper. And if the book machine breaks, as it could do at any moment (and eventually will, since the battery cannot be replaced), that last page will become permanent, as if out of your whole library you had chosen to print that one alone.

I enjoyed tinkering with my broken book, although I am not sure what I learned from the experience. It seems likely to me, as it does to many historians and scholars, that the form of the technologies in which our words are written and read affects our psychology as writers and readers, therefore the character that textuality takes in any given epoch. It’s just too early to say exactly what those effects will be for ours. All the same I occasionally worry that books without physical dimensions will entail a loss; that their ghost materiality will make them mean less. As I peer within the layers of the screen of my dead Kindle I am reminded that this is not quite so, and that aspects of that history survive – for history is always the hardest to die."
kindle  giovannitiso  2015  electronics  eink  ebooks  publishing  digital  technology  computers  screens  computing  displays 
may 2015 by robertogreco
Eyeo 2014 - Leah Buechley on Vimeo
"Thinking About Making – An examination of what we mean by making (MAKEing) these days. What gets made? Who makes? Why does making matter?"



[non-inclusive covers of Make Magazine and composition of Google employment]

“Meet the new boss, same as the old boss”

"I'm really tired of setting up structures where we tell young women and young brown and black kids that they should aspire to be like rich white guys."

[RTd these back then, but never watched the video. Thanks, Sara, for bringing it back up.

https://twitter.com/arikan/status/477546169329938432
https://twitter.com/arikan/status/477549826498764801 ]

[Talk with some of the same content from Leah Buechley (and a lot of defensive comments from the crowd that Buechley addresses well):
http://edstream.stanford.edu/Video/Play/883b61dd951d4d3f90abeec65eead2911d
https://www.edsurge.com/n/2013-10-29-make-ing-more-diverse-makers ]
leahbuechley  making  makermovement  critique  equality  gender  race  2014  via:ablerism  privilege  wealth  glvo  openstudioproject  lcproject  democratization  inequality  makemagazine  money  age  education  electronics  robots  robotics  rockets  technology  compsci  computerscience  computing  computers  canon  language  work  inclusivity  funding  google  intel  macarthurfoundation  opportunity  power  influence  movements  engineering  lowriders  pottery  craft  culture  universality  marketing  inclusion 
may 2015 by robertogreco
Kardashian Krypt - Chrome Web Store
"Covertly send messages to friends, family, paramours & more by hiding messages in pictures of Kim Kardashian!!!!!

Leverage Kim Kardashian's visual omnipresence thru KARDASHIAN KRYPT, a steganography Chrome extension that hides your messages in pictures of Kim Kardashian.

Easy to use, optional passwords for XTRA protection!!"

[See also:
http://fffff.at/kardashian-krypt/
http://motherboard.vice.com/read/finally-a-way-to-send-secret-messages-inside-pictures-of-kim-kardashian

and

http://fffff.at/kanyefy-your-dock/
http://www.avclub.com/article/heres-how-kanye-fy-your-apple-dock-206030 ]
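The extension above is an instance of image steganography. As a generic illustration of the least-significant-bit (LSB) technique such tools typically use — this is not Kardashian Krypt's actual code, and the function names and raw-byte "image" are invented for the sketch — each bit of the message overwrites the lowest bit of one pixel byte, altering the picture imperceptibly:

```python
def hide(pixels: bytes, message: bytes) -> bytes:
    """Embed each bit of `message` in the low bit of successive pixel bytes."""
    # Unpack the message into bits, least-significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the low bit, then set it
    return bytes(out)

def reveal(pixels: bytes, length: int) -> bytes:
    """Recover a `length`-byte message from the low bits of the pixels."""
    bits = [b & 1 for b in pixels[:length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )
```

Recovering the message only requires knowing its length; real tools embed a length header or terminator alongside the payload (and, like the extension's optional passwords, may encrypt the message before hiding it).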
maddyvarner  encryption  chrome  extensions  kimkardashian  kanyewest  computing  computers  data  imagery  mac  osx 
may 2015 by robertogreco
The Most Important Thing on the Internet Is the Screenshot | WIRED
"Screenshots can also be almost forensic, a way to prove to others that you're really seeing the crazy stuff you're seeing. The first viral hit of the screenshot age was the often-filthy autocorrect errors in SMS. Now screenshots hold people accountable for their terrible online words. When Australian videogame reviewer Alanah Pearce was getting harassed online, she discovered that many of her trolls were young boys. She tracked down their mothers and sent a screenshot to one (who then demanded her son handwrite a letter of apology). DC writers eagerly pounce on politicians' social media faux pas, preserving them before they can vanish down the memory hole—part justice, part gotcha.

Even more arrestingly, though, screenshots let you see other people's screenworlds, increasingly where we all do our best thinking. They invite a useful voyeurism. Venture capitalist Chris Dixon tweeted a link to an article on how “Nikola Tesla predicted the iPhone” and got 109 retweets; when he tweeted a readable screenshot of the piece, it got over 4,200. Indeed, one of the more delightful aspects of screenshot culture is how often people share text instead of just the clickbaity headline. Developers have strained for years to devise technologies for “collaborative reading.” Now it's happening organically.

We're going to need better apps to help us share, sort, and make sense of this new flood. Screenshots are more semantically diverse than typical snapshots, and we already struggle to manage our photo backlog. Rita J. King, codirector of the Science House consultancy, has thousands of screenshots from her online ramblings (pictures of bacteria, charts explaining probability). Rummaging through them reminds her of ideas she's forgotten and triggers new ones. “It's like a scrapbook, or a fossil record in digital silt,” King says. A lifetime of scraps, glimpsed through the screen."
clivethompson  screenshots  internet  online  communication  perspective  pov  chrisdixon  2015  evernote  joannemcneil  photography  digital  imagery  computing  mobile  phones  smartphones 
march 2015 by robertogreco
Mapping the Sneakernet – The New Inquiry
"Digital media travels hand to hand, phone to phone across vast cartographies invisible to Big Data"



"Indeed, the song was just one of many media files I saw on people’s phones: There were Chinese kung fu movies, Nigerian comedies, and Ugandan pop music. They were physically transferred, phone to phone, Bluetooth to Bluetooth, USB stick to USB stick, over hundreds of miles by an informal sneakernet of entertainment media downloaded from the Internet or burned from DVDs, bringing media that’s popular in video halls—basically, small theaters for watching DVDs—to their own villages and huts.

In geographic distribution charts of Carly Rae Jepsen’s virality, you’d be hard pressed to find impressions from this part of the world. Nor is this sneakernet practice unique to the region. On the other end of continent, in Mali, music researcher Christopher Kirkley has documented a music trade using Bluetooth transfers that is similar to what I saw in northern Uganda. These forms of data transfer and access, though quite common, are invisible to traditional measures of connectivity and Big Data research methods. Like millions around the world with direct internet connections, young people in “unconnected” regions are participating in the great viral products of the Internet, consuming mass media files and generating and transferring their own media.

Indeed, the practice of sneakernets is global, with political consequences in countries that try to curtail Internet access. In China, I saw many activists trading media files via USB sticks to avoid stringent censorship and surveillance. As Cuba opens its borders to the world, some might be surprised that citizens have long been able to watch the latest hits from United States, as this Guardian article notes. Sneakernets also apparently extend into North Korea, where strict government policy means only a small elite have access to any sort of connectivity. According to news reports, Chinese bootleggers and South Korean democracy activists regularly smuggle media on USB sticks and DVDs across the border, which may be contributing to increasing defections, as North Korean citizens come to see how the outside world lives.

Blum imagines the Internet as a series of rivers of data crisscrossing the globe. I find it a lovely visual image whose metaphor should be extended further. Like water, the Internet is vast, familiar and seemingly ubiquitous but with extremes of unequal access. Some people have clean, unfettered and flowing data from invisible but reliable sources. Many more experience polluted and flaky sources, and they have to combine patience and filters to get the right set of data they need. Others must hike dozens of miles of paved and dirt roads to access the Internet like water from a well, ferrying it back in fits and spurts when the opportunity arises. And yet more get trickles of data here and there from friends and family, in the form of printouts, a song played on a phone’s speaker, an interesting status update from Facebook relayed orally, a radio station that features stories from the Internet.

Like water from a river, data from the Internet can be scooped up and irrigated and splashed around in novel ways. Whether it’s north of the Nile in Uganda or south of Market St. in the Bay Area, policies and strategies for connecting the “unconnected” should take into account the vast spectrum of ways that people find and access data. Packets of information can be distributed via SMS and mobile 3G but also pieces of paper, USB sticks and Bluetooth. Solar-powered computer kiosks in rural areas can have simple capabilities for connecting to mobile phones’ SD cards for upload and download. Technology training courses can start with a more nuanced base level of understanding, rather than assuming zero knowledge of the basics of computing and network transfer. These are broad strokes, of course; the specifics of motivation and methods are complex and need to be studied carefully in any given instance. But the very channels that ferry entertainment media can also ferry health care information, educational material and anything else in compact enough form.

There are many maps for the world’s internet tubes and the electric wires that power them, but, like any map, they reflect an inherent bias, in this case toward a single user, binary view of connectivity. This view in turn limits our understanding of just how broad an impact the Internet has had on the world, with social, political and cultural implications that have yet to be fully explored. One critical addition to understanding the internet’s global impact is mapping the many sneakernets that crisscross the “unconnected” parts of the world. The next billion, we might find, are already navigating new cities with Google Maps, trading Korean soaps and Nigerian comedies, and rocking out to the latest hits from Carly Rae Jepsen."
access  africa  internet  online  connectivity  2015  anxiaomina  bigdata  digital  maps  mapping  cartography  bias  sneakernets  p2p  peer2peer  uganda  music  data  bluetooth  mobile  phones  technology  computing  networks  northkorea  christopherkirkley  sms  communication  usb  andrewblum  sneakernet 
march 2015 by robertogreco
The Humane Representation of Thought on Vimeo
"Closing keynote at the UIST and SPLASH conferences, October 2014.
Preface: http://worrydream.com/TheHumaneRepresentationOfThought/note.html

References to baby-steps towards some of the concepts mentioned:

Dynamic reality (physical responsiveness):
- The primary work here is Hiroshi Ishii's "Radical Atoms": http://tangible.media.mit.edu/project/inform/
- but also relevant are the "Soft Robotics" projects at Harvard: http://softroboticstoolkit.com
- and at Otherlab: http://youtube.com/watch?v=gyMowPAJwqo
- and some of the more avant-garde corners of material science and 3D printing

Dynamic conversations and presentations:
- Ken Perlin's "Chalktalk" changes daily; here's a recent demo: http://bit.ly/1x5eCOX

Context-sensitive reading material:
- http://worrydream.com/MagicInk/

"Explore-the-model" reading material:
- http://worrydream.com/ExplorableExplanations/
- http://worrydream.com/LadderOfAbstraction/
- http://ncase.me/polygons/
- http://redblobgames.com/pathfinding/a-star/introduction.html
- http://earthprimer.com/

Evidence-backed models:
- http://worrydream.com/TenBrighterIdeas/

Direct-manipulation dynamic authoring:
- http://worrydream.com/StopDrawingDeadFish/
- http://worrydream.com/DrawingDynamicVisualizationsTalk/
- http://tobyschachman.com/Shadershop/

Modes of understanding:
- Jerome Bruner: http://amazon.com/dp/0674897013
- Howard Gardner: http://amazon.com/dp/0465024335
- Kieran Egan: http://amazon.com/dp/0226190390

Embodied thinking:
- Edwin Hutchins: http://amazon.com/dp/0262581469
- Andy Clark: http://amazon.com/dp/0262531569
- George Lakoff: http://amazon.com/dp/0465037712
- JJ Gibson: http://amazon.com/dp/0898599598
- among others: http://en.wikipedia.org/wiki/Embodied_cognition

I don't know what this is all about:
- http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/
- http://worrydream.com/ABriefRantOnTheFutureOfInteractionDesign/responses.html

---

Abstract:

New representations of thought — written language, mathematical notation, information graphics, etc — have been responsible for some of the most significant leaps in the progress of civilization, by expanding humanity’s collectively-thinkable territory.

But at debilitating cost. These representations, having been invented for static media such as paper, tap into a small subset of human capabilities and neglect the rest. Knowledge work means sitting at a desk, interpreting and manipulating symbols. The human body is reduced to an eye staring at tiny rectangles and fingers on a pen or keyboard.

Like any severely unbalanced way of living, this is crippling to mind and body. But it is also enormously wasteful of the vast human potential. Human beings naturally have many powerful modes of thinking and understanding.

Most are incompatible with static media. In a culture that has contorted itself around the limitations of marks on paper, these modes are undeveloped, unrecognized, or scorned.

We are now seeing the start of a dynamic medium. To a large extent, people today are using this medium merely to emulate and extend static representations from the era of paper, and to further constrain the ways in which the human body can interact with external representations of thought.

But the dynamic medium offers the opportunity to deliberately invent a humane and empowering form of knowledge work. We can design dynamic representations which draw on the entire range of human capabilities — all senses, all forms of movement, all forms of understanding — instead of straining a few and atrophying the rest.

This talk suggests how each of the human activities in which thought is externalized (conversing, presenting, reading, writing, etc) can be redesigned around such representations.

---

Art by David Hellman.
Bret Victor -- http://worrydream.com "

[Some notes from Boris Anthony:

"Those of you who know my "book hack", Bret talks about exactly what motivates my explorations starting at 20:45 in https://vimeo.com/115154289 "
https://twitter.com/Bopuc/status/574339495274876928

"From a different angle, btwn 20:00-29:00 Bret explains how "IoT" is totally changing everything
https://vimeo.com/115154289
@timoreilly @moia"
https://twitter.com/Bopuc/status/574341875836043265 ]
bretvictor  towatch  interactiondesign  davidhellman  hiroshiishii  softrobotics  robots  robotics  kenperlin  jeromebruner  howardgardner  kieranegan  edwinhutchins  andyclark  jjgibson  embodiedcognition  cognition  writing  math  mathematics  infographic  visualization  communication  graphics  graphicdesign  design  representation  humans  understanding  howwelearn  howwethink  media  digital  dynamism  movement  conversation  presentation  reading  howweread  howwewrite  chalktalk  otherlab  3dprinting  3d  materials  physical  tangibility  depth  learning  canon  ui  informationdesign  infographics  maps  mapping  data  thinking  thoughts  numbers  algebra  arithmetic  notation  williamplayfair  cartography  gestures  placevalue  periodictable  michaelfaraday  jamesclerkmaxell  ideas  print  printing  leibniz  humanism  humanerepresentation  icons  visual  aural  kinesthetic  spatial  tactile  symbols  iot  internetofthings  programming  computers  screens  computation  computing  coding  modeling  exploration  via:robertogreco  reasoning  rhetoric  gerrysussman  environments  scale  virtualization 
march 2015 by robertogreco
whitney trettien on Twitter: "It continues to upset me how often I come across a digital humanities syllabus with all-but-0 women writers/thinkers/makers/educators."
“It continues to upset me how often I come across a digital humanities syllabus with all-but-0 women writers/thinkers/makers/educators.”

“@whitneytrettien Thank you Whitney. I finally understand why this word "maker" is so important to people.”
https://twitter.com/CaptDavidRyan/status/567768889506934784

“@whitneytrettien By using "Maker" instead of builder, mechanic, tinkerer, fabricator, etc...”
https://twitter.com/CaptDavidRyan/status/567770540010512384

“@whitneytrettien One implies the sort of meta-awareness of the activity (and accompanying prestige) that we associate with...”
https://twitter.com/CaptDavidRyan/status/567770713278402560

“@whitneytrettien that we associate with writers, artist, educators.”
https://twitter.com/CaptDavidRyan/status/567770804697440256

“@whitneytrettien I've always known that "Maker" was a fantastically class-conscious term, but could never put my finger on why exactly.”
https://twitter.com/CaptDavidRyan/status/567771011358023681

“@KellyPDillon @whitneytrettien @evalantsoght I wonder lately how to best combine these stories with courses on methods, too, you know?”
https://twitter.com/djp2025/status/567771031763693568

“@KellyPDillon @whitneytrettien @evalantsoght I mean, is the answer a separate course on Women in Comp? Or a module on comp hist in DH class?”
https://twitter.com/djp2025/status/567771246801469440

“@djp2025 @KellyPDillon @evalantsoght Historicize the methods and the making within/against other practices? "my mother was a computer," etc.”
https://twitter.com/whitneytrettien/status/567772121632632832

“@whitneytrettien @KellyPDillon @evalantsoght Indeed, and exactly.”
https://twitter.com/djp2025/status/567773158800101379

“@whitneytrettien Guilty, apart from the literary texts I teach.”
https://twitter.com/briancroxall/status/567768467655651328

“@briancroxall Which may be more problematic, no? Reinscription of the male gaze to dissect women writers. Not accusing, just musing.”
https://twitter.com/whitneytrettien/status/567768920455913472

“@whitneytrettien It could be. My application of “theory” to our texts is pretty loose. +”
https://twitter.com/briancroxall/status/567769131811086337

“@whitneytrettien It makes me wonder as well whether the idea of distant reading is a gendered gaze.”
https://twitter.com/briancroxall/status/567769298207535104

“@briancroxall Me too -- definitely something I've been thinking about recently.”
https://twitter.com/whitneytrettien/status/567769736176758784

“@briancroxall @whitneytrettien Can I interest you in an article on precisely that subject...”
https://twitter.com/ncecire/status/567769693927522304

“@briancroxall @whitneytrettien (forthcoming...) pic.twitter.com/wTVpR8L7AP ”
https://twitter.com/ncecire/status/567771149954973699

“@ncecire Excellent -- when/where is it out? I look forward to reading it. @briancroxall”
https://twitter.com/whitneytrettien/status/567771469862961153

“@whitneytrettien Oh, ha, would you look at that! Institutional repository at work. http://sro.sussex.ac.uk/52909/1/11_82.1cecire.pdf … @briancroxall”
https://twitter.com/ncecire/status/567772313060671488

“looks fantastic @ncecire's "Ways of Not Reading Gertrude Stein." ELH 82 (forthcoming 2015) http://sro.sussex.ac.uk/52909/1/11_82.1cecire.pdf ”**
https://twitter.com/pfyfe/status/567807396900315136

**Article now at:
http://sro.sussex.ac.uk/52909/
http://muse.jhu.edu/login?auth=0&type=summary&url=/journals/elh/v082/82.1.cecire.html
gender  makers  making  class  whitneytrettien  davidryan  2015  digitalhumanities  briancroxall  danielpowell  feminism  scholarship  academia  malegaze  genderedgaze  craft  thinking  education  tinkering  fabrication  mechanics  building  meta-awareness  art  writing  method  computation  computing  practice  nataliacecire 
february 2015 by robertogreco
Outside the Skinner Box
"There are two commonly repeated tropes about educational technology impeding progress and clouding our judgment. The first such myth is that technology is neutral. This is untrue. All technology was designed to influence behavior; the fact that a handful of people can stretch a technology beyond its normal trajectory does not change this fundamental truth.

It is not uncommon for a school committed to progressive learner-centered education to undermine its mission by investing in a well-intentioned school-to-home communication package that allows Dad to sit at his office desk and day-trade his eight-year-old when the expectation of continuous numerical reporting is offered by such a system. Similarly, I have encountered many independent schools committed to whole language development that then contradict their missions by using phonics software on iPads for no other reason than, “There’s an app for that.”

In schools, all hardware and software bestow agency on one of three parties: the system, the teacher, or the learner. Typically, two of these actors lose their power as the technology benefits the third. Ask a group of colleagues to create a three-column table and brainstorm the hardware or software in your school and who is granted agency by each. Management software, school-wide grade-book programs, integrated learning systems, school-to-home communication packages, massive open online courses (MOOCs), and other cost-cutting technologies grant maximum benefit to the system. Interactive whiteboards, worksheet generators, projectors, whole-class simulations, plagiarism software, and so on, benefit the teacher. Personal laptops, programming languages, creativity software, cameras, MIDI keyboards, microcontrollers, fabrication equipment, and personal web space primarily benefit (bestow agency to) the learner.

The second oft-recited myth is that technology changes constantly. If only this were the case in schools. Regrettably, much of what schools do with technology is exactly the same, or less than, what they did 25 years ago. Wordles, note taking, looking stuff up, word-processing essays, and making PowerPoint presentations on topics students don’t care about for audiences they’ll never encounter represent the state-of-the-art in far too many classrooms. We can do better.

I enjoyed the great fortune of leading professional development at the world’s first laptop schools nearly a quarter century ago. Those Australian schools never saw laptops as an experiment or pilot project. For them, laptops represented a way to rescue kids explicitly from a failing hierarchical bureaucracy. Every student learned to program from every teacher as a means to encounter powerful ideas, express oneself, and change the nature of the educational experience.

When teachers saw what was possible through the eyes and the screens of their children, they demanded rapid changes to scheduling, assessment, classroom furniture, and even school architecture. They blurred the artificial boundaries between subject areas, shared expertise, challenged peers, and transformed many schools to benefit the children they served. Those early “laptop teachers” often viewed themselves in new and powerful ways. An amazing number of them went on to become school principals, Ph.D.s, policy makers, and entrepreneurs. A school like Methodist Ladies’ College in Melbourne, Australia, changed the world with its existing teaching staff through a coherent vision articulated clearly by a bold, charismatic leader, David Loader, who focused on benefiting the largest number of stakeholders in any school community: the students."



"A Bold Vision for the Future of Computers in Schools

The future of schools is not found in a shopping list of devices and programs, no matter how interesting or revolutionary the technology may be. In order for schools to seize the power of computers as intellectual laboratories and vehicles for self-expression, the following traits need to be in place.

Awareness

Educators, parents, and policy makers need to understand that, currently, their investment in technology is not maximizing its promise to amplify the human potential of each student. Alternative models must be made available.

Governance

Too many schools conflate instructional and noninstructional technology. Such an inability to reconcile often-competing priorities harms the educational enterprise of a school. One role is of the plumber and the other of a philosopher; both are important functions, but you would never consciously surrender the setting of graduation standards to your maintenance department. Why, then, is educational policy so greatly impacted by IT personnel?

Vision

Schools need a bolder concept of what computing can mean in the creative and intellectual development of young people. Such a vision must be consistent with the educational ­ideals of a school. In far too many cases, technology is used in ways contrary to the stated mission of the school. At no point should technology be used as a substitute for competent educators or to narrow educational experiences. The vision should not be rigid, but needs to embrace the serendipitous discoveries and emerging technologies that expand the power of our goals.

Consistent leadership

Once a vision of educational technology use is established, school leadership needs to model that approach, enact rituals and practices designed to reinforce it, and lend a coherent voice leading the entire community in a fashion consistent with its vision to improve the lives of young people.

Great leaders recognize the forces that water down innovation and enact safeguards to minimize such inertia.

Professional development for professionals

You cannot be expected to teach 21st-century learners if you have not learned in this century. Professional development strategies need to focus on creating the sorts of rich constructive learning experiences schools desire for students, not on using computers to perform clerical tasks. We must refrain from purchasing “teacher-proof” curricula or technology and then acting surprised when teachers fail to embrace it. PD needs to stop rewarding helplessness and embrace the competence of educators.

High Expectations and Big Dreams

When we abandon our prejudices and superstitions in order to create the conditions in which anything is possible, teachers and children alike will exceed our expectations.

Some people are excited by using technology to teach what we have always wanted kids to know, perhaps with greater efficiency, efficacy, or comprehension. I am not interested in using computers to improve education by 0.02 percent. Incrementalism is the enemy of progress. My work is driven by the actualization of young people learning and doing in ways unimaginable just a few years ago.

This is not a fantasy; it’s happening in schools today. Here are a few vignettes from my own work.

Learning by Doing"
2015  garystager  computing  schools  education  technology  makers  makermovement  seymourpapert  edtech  physicalcomputing  governance  awareness  vision  leadership  nais  learningbydoing  learning  constructionism 
january 2015 by robertogreco
prosthetic knowledge — Intel Compute Stick Announced today - a fully...
"Intel Compute Stick

Announced today - a fully working computer the size of a USB stick which just plugs into an HDMI port of a display:

The Intel® Compute Stick is a new generation compute-on-a-stick device that’s ready to go out of the box and offers the performance, quality, and value you expect from Intel. Pre-installed with Windows 8.1* or Linux, get a complete experience on an ultra-small, power-efficient device that is just four inches long, yet packs the power and reliability of a quad-core Intel® Atom™ processor, with built-in wireless connectivity, on-board storage, and a micro SD card slot for additional storage. It’s everything you love about your desktop computer in a device that fits in the palm of your hand.

Computers are cheaper and smaller now - whilst it doesn’t appear to feature any dedicated graphics card for media capabilities, I’m sure there could be useful applications for tech arts (removing the need of a laptop)

More Here [http://www.intel.com/content/www/us/en/compute-stick/intel-compute-stick.html ]"

[See also: http://www.engadget.com/2015/01/07/intel-compute-stick/

"Your Chromecast may be able to play Netflix, but can it play Crysis? Intel's HDMI Compute Stick probably can't either, but the tiny device does have enough power to run Windows 8.1 apps on your TV. Intel has rather impressively crammed in a quad-core Atom CPU, 32GB of storage and 2GB of RAM, along with a USB port, WiFi and Bluetooth 4.0 support and a mini-USB connector for power (HDMI power will come later). "But why?" you might ask. Intel sees it as a low-priced computer or (pricey) media stick, or even a thin-client device for companies. To up the crazy factor, it may eventually launch a much zippier Core M version. The Windows version will run $149, and if that seems a bit much, a 1GB RAM/8GB memory Linux version is priced at $89. Both will arrive in March."]
intel  computers  hotswapping  windows8  computing  2015  thinclients 
january 2015 by robertogreco
Convivial Tools in an Age of Surveillance
"What would convivial ed-tech look like?

The answer can’t simply be “like the Web” as the Web is not some sort of safe and open and reliable and accessible and durable place. The answer can’t simply be “like the Web” as though the move from institutions to networks magically scrubs away the accumulation of history and power. The answer can’t simply be “like the Web” as though posting resources, reference services, peer-matching, and skill exchanges — what Illich identified as the core of his “learning webs” — are sufficient tools in the service of equity, freedom, justice, or hell, learning.

“Like the Web” is perhaps a good place to start, don’t get me wrong, particularly if this means students are in control of their own online spaces — its content, its data, its availability, its publicness. “Like the Web” is convivial, or close to it, if students are in control of their privacy, their agency, their networks, their learning. We all need to own our learning — and the analog and the digital representations or exhaust from that. Convivial tools do not reduce that to a transaction — reduce our learning to a transaction, reduce our social interactions to a transaction.

I'm not sure the phrase "safe space" is quite the right one to build alternate, progressive education technologies around, although I do think convivial tools do have to be “safe” insofar as we recognize the importance of each other’s health and well-being. Safe spaces where vulnerability isn’t a weakness for others to exploit. Safe spaces where we are free to explore, but not to the detriment of those around us. As Illich writes, "A convivial society would be the result of social arrangements that guarantee for each member the most ample and free access to the tools of the community and limit this freedom only in favor of another member’s equal freedom.”

We can’t really privilege “safe” as the crux of “convivial” if we want to push our own boundaries when it comes to curiosity, exploration, and learning. There is risk associated with learning. There’s fear and failure (although I do hate how those are being fetishized in a lot of education discussions these days, I should note.)

Perhaps what we need to build are more compassionate spaces, so that education technology isn’t in the service of surveillance, standardization, assessment, control.

Perhaps we need more brave spaces. Or at least many educators need to be braver in open, public spaces -- not brave to promote their own "brands" but brave in standing with their students. Not "protecting them” from education technology or from the open Web but not leaving them alone, and not opening them to exploitation.

Perhaps what we need to build are more consensus-building not consensus-demanding tools. Mike Caulfield gets at this in a recent keynote about “federated education.” He argues that "Wiki, as it currently stands, is a consensus *engine*. And while that’s great in the later stages of an idea, it can be deadly in those first stages.” Caulfield relates the story of the Wikipedia entry on Kate Middleton’s wedding dress, which, 16 minutes after it was created, "someone – and in this case it probably matters that is was a dude – came and marked the page for deletion as trivial, or as they put it 'A non-notable article incapable of being expanded beyond a stub.’” Debate ensues on the entry’s “talk” page, until finally Jimmy Wales steps in with his vote: a “strong keep,” adding "I hope someone will create lots of articles about lots of famous dresses. I believe that our systemic bias caused by being a predominantly male geek community is worth some reflection in this context.”

Mike Caulfield has recently been exploring a different sort of wiki, also by Ward Cunningham. This one — called the Smallest Federated Wiki — doesn’t demand consensus like Wikipedia does. Not off the bat. Instead, entries — and this can be any sort of text or image or video, it doesn’t have to “look like” an encyclopedia — live on federated servers. Instead of everyone collaborating in one space on one server like a “traditional” wiki, the work is distributed. It can be copied and forked. Ideas can be shared and linked; it can be co-developed and co-edited. But there isn’t one “vote” or one official entry that is necessarily canonical.

Rather than centralized control, conviviality. This distinction between Wikipedia and Smallest Federated Wiki echoes too what Illich argued: that we need to be able to identify when our technologies become manipulative. We need "to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool systems that optimize the balance of life, thereby maximizing liberty for all."

Of course, we need to recognize, those of us that work in ed-tech and adopt ed-tech and talk about ed-tech and tech writ large, that convivial tools and a convivial society must go hand-in-hand. There isn’t any sort of technological fix to make education better. It’s a political problem, that is, not a technological one. We cannot come up with technologies that address systematic inequalities — those created by and reinscribed by education— unless we are willing to confront those inequalities head on. Those radical education writers of the Sixties and Seventies offered powerful diagnoses about what was wrong with schooling. The progressive education technologists of the Sixties and Seventies imagined ways in which ed-tech could work in the service of dismantling some of the drudgery and exploitation.

But where are we now? Instead we find ourselves with technologies working to make that exploitation and centralization of power even more entrenched. There must be alternatives — both within and without technology, both within and without institutions. Those of us who talk and write and teach ed-tech need to be pursuing those things, and not promoting consumption and furthering institutional and industrial control. In Illich’s words: "The crisis I have described confronts people with a choice between convivial tools and being crushed by machines.""
toolforconviviality  ivanillich  audreywatters  edtech  technology  education  2014  seymourpapert  logo  alankay  dynabook  mikecaufield  wardcunningham  web  internet  online  schools  teaching  progressive  wikipedia  smallestfederatedwiki  wikis  society  politics  policy  decentralization  surveillance  doxxing  gamergate  drm  startups  venturecapital  bigdata  neilpostman  paulofreire  paulgoodman  datapalooza  knewton  computers  computing  mindstorms  control  readwrite  everettreimer  1960s  1970s  jonathankozol  disruption  revolution  consensus  safety  bravery  courage  equity  freedom  justice  learning 
november 2014 by robertogreco
The Sixth Stage of Grief is Retro-Computing — The Message — Medium
"Imagine having, in your confused adolescence, the friendship of an older, avuncular man who is into computers, a world-traveling photographer who would occasionally head out to, like, videotape the Dalai Lama for a few weeks, then come back and listen to every word you said while you sat on his porch. A generous, kind person who spoke openly about love and faith and treated people with respect."



"A year after the Amiga showed up—I was 13—my life started to go backwards. Not forever, just for a while. My dad left, money was tight. My clothes were the ones my dad left behind, old blouse-like Oxfords in the days of Hobie Cat surfwear. I was already big and weird, and now I was something else. I think my slide perplexed my peers; if anything they bullied me less. I heard them murmuring as I wandered down the hall.

I was a ghost and I had haunts: I vanished into the computer. I had that box of BBS floppies. One after another I’d insert them into the computer and examine every file, thousands of files all told. That was how I pieced together the world. Second-hand books and BBS disks and trips to the library. I felt very alone but I’ve since learned that it was a normal American childhood, one millions of people experienced.

Often—how often I don’t remember—I’d go over to Tom’s. I’d share my techniques for rotating text in Deluxe Paint, show him what I’d gleaned from my disks. He always had a few spare computers around for generating title sequences in videos, and later for editing, and he’d let me practice with his videocameras. And he would listen to me.

Like I said: Avuncular. He wasn’t a father figure. Or a mother figure. He was just a kind ear when I needed as many kind ears as I could find. I don’t remember what I said; I just remember being heard. That’s the secret to building a network. People want to be heard. God, life, history, science, books, computers. The regular conversations of anxious kids. His students would show up, impossibly sophisticated 19-year-old men and women, and I’d listen to them talk as the sun went down. For years. A world passed over that porch and I got to watch and participate even though I was still a boy.

I constantly apologized for being there, for being so young and probably annoying, and people would just laugh at me. But no one put me in my place. People touched me, hugged me, told me about books to read and movies to watch. I was not a ghost.

When I graduated from high school I went by to sit on the porch and Tom gave me a little brown teddy bear. You need to remember, he said, to be a kid. To stay in touch with that part of yourself.

I did not do this."



"Technology is What We Share

Technology is what we share. I don’t mean “we share the experience of technology.” I mean: By my lights, people very often share technologies with each other when they talk. Strategies. Ideas for living our lives. We do it all the time. Parenting email lists share strategies about breastfeeding and bedtime. Quotes from the Dalai Lama. We talk neckties, etiquette, and Minecraft, and tell stories that give us guidance as to how to live. A tremendous part of daily life regards the exchange of technologies. We are good at it. It’s so simple as to be invisible. Can I borrow your scissors? Do you want tickets? I know guacamole is extra. The world of technology isn’t separate from regular life. It’s made to seem that way because of, well…capitalism. Tribal dynamics. Territoriality. Because there is a need to sell technology, to package it, to recoup the terrible investment. So it becomes this thing that is separate from culture. A product.

I went looking for the teddy bear that Tom had given me, the reminder to be a child sometimes, and found it atop a bookshelf. When I pulled it down I was surprised to find that it was in a tiny diaper.

I stood there, ridiculous, a 40-year-old man with a diapered 22-year-old teddy bear in my hand. It stared back at me with root-beer eyes.

This is what I remembered right then: That before my wife got pregnant we had been trying for kids for years without success. We had considered giving up.

That was when I said to my wife: If we do not have children, we will move somewhere where there is a porch. The children who need love will find the porch. They will know how to find it. We will be as much parents as we want to be.

And when she got pregnant with twins we needed the right-sized doll to rehearse diapering. I went and found that bear in an old box.

I was handed that toy, sitting on Tom’s porch, in 1992. A person offering another person a piece of advice. Life passed through that object as well, through the teddy bear as much as through the operating systems of yore.

Now that I have children I can see how tuned they are to the world. Living crystals tuned to all manner of frequencies. And how urgently they need to be heard. They look up and they say, look at me. And I put my phone away.

And when they go to bed, protesting and screaming, I go to mess with my computers, my old weird imaginary emulated computers. System after system. I open up these time capsules and look at the thousands of old applications, millions of dollars of software, but now it can be downloaded in a few minutes and takes up a tiny portion of a hard drive. It’s all comically antiquated.

When you read histories of technology, whether of successes or failures, you sense the yearning of people who want to get back into those rooms for a minute, back to solving the old problems. How should a window open? How should the mouse look? What will people want to do, when we give them these machines? Who wouldn’t want to go back 20 years—to drive again into the office, to sit before the whiteboard in a beanbag chair, in a place of warmth and clarity, and give it another try?

Such a strange way to say goodbye. So here I am. Imaginary disks whirring and screens blinking as I visit my old haunts. Wandering through lost computer worlds for an hour or two, taking screenshots like a tourist. Shutting one virtual machine down with a sigh, then starting up another one. But while these machines run, I am a kid. A boy on a porch, back among his friends."
paulford  memory  memories  childhood  neoteny  play  wonder  sharing  obituaries  technology  history  sqeak  amiga  textcraft  plan9  smalltalk-80  smalltalk  mac  1980s  1990s  1970s  xerox  xeroxalto  texteditors  wordprocessors  software  emulators  emulations  2014  computers  computing  adolescence  listening  parenting  adults  children  mentors  macwrite  howwelearn  relationships  canon  caring  love  amigaworkbench  commodore  aegisanimator  jimkent  vic-20  commodore64  1985  andywarhol  debbieharry  1987  networks  porches  kindness  humility  lisp  windows3.1  microsoft  microsoftpaint  capitalism  next  openstep  1997  1992  stevejobs  objectivec  belllabs  xeroxparc  inria  doom  macos9  interfacebuilder 
november 2014 by robertogreco
The Dads of Tech - The Baffler
"The master’s tools will never dismantle the master’s house,” Audre Lorde famously said, but let Clay Shirky mansplain. It “always struck me as a strange observation—even the metaphor isn’t true,” the tech consultant and bestselling author said at the New Yorker Festival last autumn in a debate with the novelist Jonathan Franzen. “Get ahold of the master’s hammer,” and you can dismantle anything. Just consider all the people “flipping on the ‘I’m gay’ light on Facebook” to signal their support for marriage equality—there, Shirky declared, is a prime example of the master’s tools put to good use.

“Shirky invented the Internet and Franzen wants to shut it down,” panel moderator Henry Finder mused with an air of sophisticated hyperbole. Finder said he was merely paraphrasing a festival attendee he’d overheard outside—and joked that for once in his New Yorker editing career, he didn’t need fact-checkers to determine whether the story was true. He then announced with a wink that it was “maybe a little true.” Heh.

Shirky studied fine art in school, worked as a lighting designer for theater and dance companies; he was a partner at investment firm The Accelerator Group before turning to tech punditry. Now he teaches at NYU and publishes gung-ho cyberliberation tracts such as Here Comes Everybody and Cognitive Surplus while plying a consulting sideline for a diverse corps of well-paying clients such as Nokia, the BBC, and the U.S. Navy—as well as high-profile speaking gigs like the New Yorker forum, which was convened under the stupefyingly dualistic heading “Is Technology Good for Culture?”

And that’s tech punditry for you: simplification with an undercurrent of sexism. There are plenty of woman academics and researchers who study technology and social change, but we are a long way from the forefront of stage-managed gobbledygook. Instead of getting regaled with nods and winks for “inventing the Internet,” women in the tech world typically have to overcome the bigoted suspicions of an intensively male geek culture—when, that is, they don’t face outright harassment in the course of pursuing industry careers."



"No wonder, then, that investors ignore coders from marginalized communities who aspire to meet real needs. With an Internet so simple even your Dad can understand it as our guiding model, the myriad challenges that attend the digital transformation, from rampant sexism, racism, and homophobia to the decline of journalism, are impossible to apprehend, let alone address. How else could a white dude who didn’t know that a “bustle” is a butt-enhancing device from the late nineteenth century raise $6.5 million to start a women’s content site under that name? Or look at investors racing to fund the latest fad: “explainer” journalism, a format that epitomizes our current predicament. Explainer journalism is an Internet simple enough for Dad to understand made manifest. Nate Silver’s FiveThirtyEight, the New York Times’ The Upshot, and Ezra Klein’s Vox (which boasts a “Leadership Team” of seventeen men and three women) all champion a numbers-driven model that does not allow for qualification or uncertainty. No doubt, quantification can aid insight, but statistics shouldn’t be synonymous with a naive, didactic faith that numbers don’t lie or that everything worth knowing can be rendered in a series of quickly clickable virtual notecards. Plenty of news reports cry out for further explanation, because the world is complex and journalists often get things wrong, but like Internet punditry before it, these explainer outlets don’t explain, they simplify."



"Most of all, the dominance of the Dad’s-eye-view of the world shores up the Internet’s underlying economic operating system. This also means a de facto free pass for corporate surveillance, along with an increasing concentration of wealth and power in the coffers of a handful of advertising-dependent, privacy-violating info-monopolies and the men who run them (namely Google and Facebook, though Amazon and Apple are also addicted to sucking up our personal data). Study after study shows that women are more sensitive to the subject of privacy than men, from a Pew poll that found that young girls are more prone than boys are to disabling location tracking on their devices to another that showed that while women are equally enthusiastic about technology in general, they’re also more concerned about the implications of wearable technologies. A more complicated Internet would incorporate these legitimate apprehensions instead of demanding “openness” and “transparency” from everyone. (It would also, we dare to hope, recognize that the vacuous sloganeering on behalf of openness only makes us more easily surveilled by government and big business.) But, of course, imposing privacy protections would involve regulation and impede profit—two bête noires of tech dudes who are quite sure that Internet freedom is synonymous with the free market.

The master’s house might have a new shape—it may be sprawling and diffuse, and occupy what is euphemistically referred to as the “cloud”—but it also has become corporatized and commercialized, redolent of hierarchies of yore, and it needs to be dismantled. Unfortunately, in the digital age, like the predigital one, men don’t want to take it apart."
astrataylor  joannemcneil  2014  sexism  technology  culture  siliconvalley  dads  nodads  patriarchy  paternalism  gender  emotionallabor  history  computing  programming  complexity  simplification  nuance  diversity  journalism  clayshirky  polarization  exclusion  marcandreessen  ellenchisa  julieannhorvath  github  careers  audrelorde  punditry  canon  inequality 
november 2014 by robertogreco
Raspberry Pi Compute Module: new product! | Raspberry Pi
"The compute module contains the guts of a Raspberry Pi (the BCM2835 processor and 512Mbyte of RAM) as well as a 4Gbyte eMMC Flash device (which is the equivalent of the SD card in the Pi). This is all integrated on to a small 67.6x30mm board which fits into a standard DDR2 SODIMM connector (the same type of connector as used for laptop memory*). The Flash memory is connected directly to the processor on the board, but the remaining processor interfaces are available to the user via the connector pins. You get the full flexibility of the BCM2835 SoC (which means that many more GPIOs and interfaces are available as compared to the Raspberry Pi), and designing the module into a custom system should be relatively straightforward as we’ve put all the tricky bits onto the module itself.

So what you are seeing here is a Raspberry Pi shrunk down to fit on a SODIMM with onboard memory, whose connectors you can customise for your own needs.

The Compute Module is primarily designed for those who are going to create their own PCB. However, we are also launching something called the Compute Module IO Board to help designers get started.

The Compute Module IO Board is a simple, open-source breakout board that you can plug a Compute Module into. It provides the necessary power to the module, and gives you the ability to program the module’s Flash memory, access the processor interfaces in a slightly more friendly fashion (pin headers and flexi connectors, much like the Pi) and provides the necessary HDMI and USB connectors so that you have an entire system that can boot Raspbian (or the OS of your choice). This board provides both a starting template for those who want to design with the Compute Module, and a quick way to start experimenting with the hardware and building and testing a system before going to the expense of fabricating a custom board.

Initially, the Compute Module and IO Board will be available to buy together as the Raspberry Pi Compute Module Development Kit.

These kits will be available from RS and element14 some time in June. Shortly after that the Compute Module will be available to buy separately, with a unit cost of around $30 in batches of 100; you will also be able to buy them individually, but the price will be slightly higher. The Raspberry Pi Foundation is a charity, and as with everything we make here, all profits are pushed straight back into educating kids in computing."
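Those extra GPIOs are driven the same way as on a stock Pi once Raspbian is booted. As a rough sketch (not from the post), here is how a userspace program might set one of those pins high through the legacy Linux sysfs GPIO interface; the pin number 18 and the default `/sys/class/gpio` path are illustrative assumptions, and real use needs the correct BCM pin number and appropriate permissions:

```python
# Hypothetical sketch: toggling a GPIO via the legacy Linux sysfs
# interface. Pin numbers and paths are illustrative; a real Compute
# Module design needs the correct BCM GPIO number and root access.
import os

def set_gpio(pin, value, sysfs="/sys/class/gpio"):
    """Export a GPIO pin if needed, configure it as an output, and write 0 or 1."""
    pin_dir = os.path.join(sysfs, f"gpio{pin}")
    if not os.path.isdir(pin_dir):
        # Writing the pin number to "export" asks the kernel to expose it
        with open(os.path.join(sysfs, "export"), "w") as f:
            f.write(str(pin))
    with open(os.path.join(pin_dir, "direction"), "w") as f:
        f.write("out")
    with open(os.path.join(pin_dir, "value"), "w") as f:
        f.write(str(value))
```

Parameterizing the sysfs root also makes the function testable off-device by pointing it at a scratch directory.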

[See also: http://www.fastcompany.com/3033850/most-creative-people/whats-next-for-raspberry-pi-the-35-computer-powering-hardware-innovatio ]
raspberrypi  diy  microcontrollers  via:alexismadrigal  computing  internetofthings  iot 
august 2014 by robertogreco
ScratchJr on the App Store on iTunes
"With ScratchJr, young children (ages 5-7) learn important new skills as they program their own interactive stories and games.

By snapping together graphical programming blocks, children can make characters move, jump, dance, and sing. In the process, children learn to solve problems, design projects, and express themselves creatively on the computer. They also use math and language in a meaningful and motivating context, supporting the development of early-childhood numeracy and literacy. With ScratchJr, children don’t just learn to code, they code to learn.

ScratchJr was inspired by the popular Scratch programming language (http://scratch.mit.edu), used by millions of people (ages 8 and up) around the world. The ScratchJr interface and programming language were redesigned to make them appropriate for younger children’s cognitive, personal, social, and emotional development.

ScratchJr is a collaboration between the Lifelong Kindergarten research group at the MIT Media Lab, the Developmental Technologies research group at Tufts University, and the Playful Invention Company. The ScratchJr project has received generous financial support from the National Science Foundation (NSF DRL-1118664), Code-to-Learn Foundation, LEGO Foundation, and British Telecommunications.

If you enjoy using this free app, please consider making a donation to the Code-to-Learn Foundation (www.codetolearn.org), a nonprofit organization that provides ongoing support for ScratchJr. We appreciate donations of all sizes, large and small."

[See also: http://www.scratchjr.org/
http://newsoffice.mit.edu/2014/scratchjr-coding-kindergarten ]
children  programming  scratch  scratchjr  2014  ios  ios7  application  ipad  coding  computationalthinking  thinking  computing 
july 2014 by robertogreco
Why the Landline Telephone Was the Perfect Tool - Suzanne Fischer - The Atlantic
"Illich's achievement was a reframing of human relationships to systems and society, in everyday, accessible language. He advocated for the reintegration of community decisionmaking and personal autonomy into all the systems that had become oppressive: school, work, law, religion, technology, medicine, economics. His ideas were influential for 1970s technologists and the appropriate technology movement -- can they be useful today?

In 1971, Illich published what is still his most famous book, Deschooling Society. He argued that the commodification and specialization of learning had created a harmful education system that had become an end in itself. In other words, "the right to learn is curtailed by the obligation to attend school." For Illich, language often pointed to how toxic ideas had poisoned the ways we relate to each other. "I want to learn," he said, had been transmuted by industrial capitalism into "I want to get an education," transforming a basic human need for learning into something transactional and coercive. He proposed a restructuring of schooling, replacing the manipulative system of qualifications with self-determined, community-supported, hands-on learning. One of his suggestions was for "learning webs," where a computer could help match up learners and those who had knowledge to share. This skillshare model was popular in many radical communities.

With Tools for Conviviality (1973), Illich extended his analysis of education to a broader critique of the technologies of Western capitalism. The major inflection point in the history of technology, he asserts, is when, in the life of each tool or system, the means overtake the ends. "Tools can rule men sooner than they expect; the plow makes man the lord of the garden but also the refugee from the dust bowl." Often this effect is accompanied by the rise in power of a managerial class of experts; Illich saw technocracy as a step toward fascism. Tools for Conviviality points out the ways in which a helpful tool can evolve into a destructive one, and offers suggestions for how communities can escape the trap.

So what makes a tool "convivial?" For Illich, "tools foster conviviality to the extent to which they can be easily used, by anybody, as often or as seldom as desired, for the accomplishment of a purpose chosen by the user." That is, convivial technologies are accessible, flexible, and noncoercive. Many tools are neutral, but some promote conviviality and some choke it off. Hand tools, for Illich, are neutral. Illich offers the telephone as an example of a tool that is "structurally convivial" (remember, this is in the days of the ubiquitous public pay phone): anyone who can afford a coin can use it to say whatever they want. "The telephone lets anybody say what he wants to the person of his choice; he can conduct business, express love, or pick a quarrel. It is impossible for bureaucrats to define what people say to each other on the phone, even though they can interfere with -- or protect -- the privacy of their exchange."

A "manipulatory" tool, on the other hand, blocks off other choices. The automobile and the highway system it spawned are, for Illich, prime examples of this process. Licensure systems that devalue people who have not received them, such as compulsory schooling, are another example. But these kinds of tools, that is, large-scale industrial production, would not be prohibited in a convivial society. "What is fundamental to a convivial society is not the total absence of manipulative institutions and addictive goods and services, but the balance between those tools which create the specific demands they are specialized to satisfy and those complementary, enabling tools which foster self-realization."

To foster convivial tools, Illich proposes a program of research with "two major tasks: to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool systems that optimize the balance of life, thereby maximizing liberty for all." He also suggests that pioneers of a convivial society work through the legal and political systems and reclaim them for justice. Change is possible, Illich argues. There are decision points. We cannot abdicate our right to self-determination, and to decide how far is far enough. "The crisis I have described," says Illich, "confronts people with a choice between convivial tools and being crushed by machines."

Illich's ideas on technology, like his ideas on schooling, were influential among those who spent the 1970s thinking that we might be on the cusp of another world. Some of those utopians included early computer innovators, who saw the culture of sharing, self-determination, and DIY that they lived as something that should be baked into tools.

Computing pioneer Lee Felsenstein has spoken about the direct influence of Tools for Conviviality on his work. For him, Illich's description of radio as a convivial tool in Central America was a model for computer development: "The technology itself was sufficiently inviting and accessible to them that it catalyzed their inherent tendencies to learn. In other words, if you tried to mess around with it, it didn't just burn out right away. The tube might overheat, but it would survive and give you some warning that you had done something wrong. The possible set of interactions, between the person who was trying to discover the secrets of the technology and the technology itself, was quite different from the standard industrial interactive model, which could be summed up as 'If you do the wrong thing, this will break, and God help you.' ... And this showed me the direction to go in. You could do the same thing with computers as far as I was concerned." Felsenstein described the first meeting of the legendary Homebrew Computer Club, where 30 or so people tried to understand the Altair together, as "the moment at which the personal computer became a convivial technology."

In 1978, Valentina Borremans of CIDOC prepared a Reference Guide to Convivial Tools. This guide to resources listed many of the new ideas in 1970s appropriate technology -- food self-sufficiency, earth-friendly home construction, new energy sources. But our contemporary convivial tools are mostly in the realm of communications. At their best, personal computers, the web, mobile technology, the open source movement, and the maker movement are contemporary convivial tools. What other convivial technologies do we use today? What tools do we need to make more convivial? Ivan Illich would exhort us to think carefully about the tools we use and what kind of world they are making."
ivanillich  2012  suzannefischer  technology  technocracy  conviviality  unschooling  deschooling  education  philosophy  history  society  valentinaborremans  leefelsenstein  telephone  landlines  radio  self-determination  diy  grassroots  democracy  computing  computers  internet  web  tools  justice  flexibility  coercion  schools  schooling  openstudioproject  lcproject  learningwebs  credentials  credentialism  learning  howwelearn  commodification  business  capitalism  toolsforconviviality 
july 2014 by robertogreco
Everything Is Broken — The Message — Medium
"It was my exasperated acknowledgement that looking for good software to count on has been a losing battle. Written by people with either no time or no money, most software gets shipped the moment it works well enough to let someone go home and see their family. What we get is mostly terrible.

Software is so bad because it’s so complex, and because it’s trying to talk to other programs on the same computer, or over connections to other computers. Even your computer is kind of more than one computer, boxes within boxes, and each one of those computers is full of little programs trying to coordinate their actions and talk to each other. Computers have gotten incredibly complex, while people have remained the same gray mud with pretensions of godhood.

Your average piece-of-shit Windows desktop is so complex that no one person on Earth really knows what all of it is doing, or how.

Now imagine billions of little unknowable boxes within boxes constantly trying to talk and coordinate tasks at around the same time, sharing bits of data and passing commands around from the smallest little program to something huge, like a browser — that’s the internet. All of that has to happen nearly simultaneously and smoothly, or you throw a hissy fit because the shopping cart forgot about your movie tickets.

We often point out that the phone you mostly play casual games on and keep dropping in the toilet at bars is more powerful than all the computing we used to go to space for decades.

NASA had a huge staff of geniuses to understand and care for their software. Your phone has you.

Plus a system of automatic updates you keep putting off because you’re in the middle of Candy Crush Saga every time it asks.

Because of all this, security is terrible. Besides being riddled with annoying bugs and impossible dialogs, programs often have a special kind of hackable flaw called 0days by the security scene. No one can protect themselves from 0days. It’s their defining feature — 0 is the number of days you’ve had to deal with this form of attack. There are meh, not-so-terrible 0days, there are very bad 0days, and there are catastrophic 0days that hand the keys to the house to whomever strolls by. I promise that right now you are reading this on a device with all three types of 0days. “But, Quinn,” I can hear you say, “If no one knows about them how do you know I have them?” Because even okay software has to work with terrible software. The number of people whose job it is to make software secure can practically fit in a large bar, and I’ve watched them drink. It’s not comforting. It isn’t a matter of if you get owned, only a matter of when.

Look at it this way — every time you get a security update (seems almost daily on my Linux box), whatever is getting updated has been broken, lying there vulnerable, for who-knows-how-long. Sometimes days, sometimes years. Nobody really advertises that part of updates. People say “You should apply this, it’s a critical patch!” and leave off the “…because the developers fucked up so badly your children’s identities are probably being sold to the Estonian Mafia by smack addicted script kiddies right now.”



Recently an anonymous hacker wrote a script that took over embedded Linux devices. These owned computers scanned the whole rest of the internet and created a survey that told us more than we’d ever known about the shape of the internet. The little hacked boxes reported their data back (a full 10 TBs) and quietly deactivated the hack. It was a sweet and useful example of someone who hacked the planet to shit. If that malware had actually been malicious, we would have been so fucked.

This is because all computers are reliably this bad: the ones in hospitals and governments and banks, the ones in your phone, the ones that control light switches and smart meters and air traffic control systems. Industrial computers that maintain infrastructure and manufacturing are even worse. I don’t know all the details, but those who do are the most alcoholic and nihilistic people in computer security. Another friend of mine accidentally shut down a factory with a malformed ping at the beginning of a pen test. For those of you who don’t know, a ping is just about the smallest request you can send to another computer on the network. It took them a day to turn everything back on.

Computer experts like to pretend they use a whole different, more awesome class of software that they understand, that is made of shiny mathematical perfection and whose interfaces happen to have been shat out of the business end of a choleric donkey. This is a lie. The main form of security this offers is through obscurity — so few people can use this software that there’s no point in building tools to attack it. Unless, like the NSA, you want to take over sysadmins."



"When we tell you to apply updates we are not telling you to mend your ship. We are telling you to keep bailing before the water gets to your neck.

To step back a bit from this scene of horror and mayhem, let me say that things are better than they used to be. We have tools that we didn’t in the 1990s, like sandboxing, that keep the idiotically written programs where they can’t do as much harm. (Sandboxing keeps a program in an artificially small part of the computer, cutting it off from all the other little programs, or cleaning up anything it tries to do before anything else sees it.)

Certain whole classes of terrible bugs have been sent the way of smallpox. Security is taken more seriously than ever before, and there’s a network of people responding to malware around the clock. But they can’t really keep up. The ecosystem of these problems is so much bigger than it was even ten years ago that it’s hard to feel like we’re making progress.

People, as well, are broken.

“I trust you…” was my least favorite thing to hear from my sources in Anonymous. Inevitably it was followed by some piece of information they shouldn’t have been telling me. It is the most natural and human thing to share something personal with someone you are learning to trust. But in exasperation I kept trying to remind Anons they were connecting to a computer, relaying though countless servers, switches, routers, cables, wireless links, and finally to my highly targeted computer, before they were connecting to another human being. All of this was happening in the time it takes one person to draw in a deep, committal breath. It’s obvious to say, but bears repeating: humans were not built to think this way.

Everyone fails to use software correctly. Absolutely everyone fucks up. OTR doesn’t encrypt until after the first message, a fact that leading security professionals and hackers subject to 20-country manhunts consistently forget. Managing all the encryption and decryption keys you need to keep your data safe across multiple devices, sites, and accounts is theoretically possible, in the same way performing an appendectomy on yourself is theoretically possible. This one guy did it once in Antarctica, why can’t you?

Every malware expert I know has lost track of what some file is, clicked on it to see, and then realized they’d executed some malware they were supposed to be examining. I know this because I did it once with a PDF I knew had something bad in it. My friends laughed at me, then all quietly confessed they’d done the same thing. If some of the best malware reversers around can’t keep track of their malicious files, what hope do your parents have against that e-card that is allegedly from you?"



"Security and privacy experts harangue the public about metadata and networked sharing, but keeping track of these things is about as natural as doing blood panels on yourself every morning, and about as easy. The risks on a societal level from giving up our privacy are terrible. Yet the consequences of not doing so on an individual basis are immediately crippling. The whole thing is a shitty battle of attrition between what we all want for ourselves and our families and the ways we need community to survive as humans — a Mexican stand off monetized by corporations and monitored by governments.

I live in this stuff, and I’m no better. Once, when I had to step through a process to verify myself to a secretive source, I had to take a series of pictures showing my location and the date. I uploaded them, and was allowed to proceed with my interview. It turns out none of my verification had come through, because I’d failed to let the upload complete before nervously shutting down my computer. “Why did you let me through?” I asked the source. “Because only you would have been that stupid,” my source told me.

Touché.

But if I can’t do this, as a relatively well trained adult who pays attention to these issues all the damn time, what chance do people with real jobs and real lives have?

In the end, it’s culture that’s broken.

A few years ago, I went to several well respected people who work in privacy and security software and asked them a question.

First, I had to explain something:

“Most of the world does not have install privileges on the computer they are using.”
That is, most people using a computer in the world don’t own the computer they are using. Whether it’s in a cafe, or school, or work, for a huge portion of the world, installing a desktop application isn’t a straightforward option. Every week or two, I was being contacted by people desperate for better security and privacy options, and I would try to help them. I’d start, “Download th…” and then we’d stop. The next thing people would tell me was that they couldn’t install software on their computers. Usually this was because an IT department somewhere was limiting their rights as a part of managing a network. These people needed tools that worked with what they had access to, mostly a browser.

So the question I put to hackers… [more]
quinnnorton  privacy  security  software  2014  heartbleed  otr  libpurple  malware  computers  computing  networks  nsa  fbi 
may 2014 by robertogreco
dy/dan » Blog Archive » Adaptive Learning Is An Infinite iPod That Only Plays Neil Diamond
"If all you've ever heard in your life is Neil Diamond's music, you might think we've invented something quite amazing there. Your iPod contains the entire universe of music. If you've heard any other music at all, you might still be impressed by this infinite iPod. Neil wrote a lot of music after all, some of it good. But you'll know we're missing out on quite a lot also.

So it is with the futurists, many of whom have never been in a class where math was anything but watching someone lecture about a procedure and then replicating that procedure twenty times on a piece of paper. That entire universe fits neatly within a computer-adaptive model of learning.

But for math educators who have experienced math as a social process where students conjecture and argue with each other about their conjectures, where one student's messy handwritten work offers another student a revelation about her own work, a process which by definition can't be individualized or self-paced, computer-adaptive mathematics starts to seem rather limited.

Lectures and procedural fluency are an important aspect of a student's mathematics education but they are to the universe of math experiences as Neil Diamond is to all the other amazing artists who aren't Neil Diamond.

If I could somehow convince the futurists to see math the same way, I imagine our conversations would become a lot more productive.

BTW. While I'm here, Justin Reich wrote an extremely thoughtful series of posts on adaptive learning last month that I can't recommend enough:

Blended Learning, But The Data Are Useless
http://blogs.edweek.org/edweek/edtechresearcher/2014/04/blended_learning_but_the_data_are_useless.html

Nudging, Priming, and Motivating in Blended Learning
http://blogs.edweek.org/edweek/edtechresearcher/2014/04/nudging_priming_and_motivating_in_blended_learning.html

Computers Can Assess What Computers Do Best
http://blogs.edweek.org/edweek/edtechresearcher/2014/04/computers_can_assess_what_computers_do_best.html "
danmeyer  edtech  adaptivelearning  education  2014  blendedlearning  lectures  neildiamond  computing  computers  closedsystems  transcontextualization  via:lukeneff  transcontextualism 
may 2014 by robertogreco
Should We Automate Education? | EdTech Magazine
"In 1962, Raymond Callahan published Education and the Cult of Efficiency, a historical account of the influence that “scientific management” (also known as “Taylorism,” after its developer, Frederick Taylor) had on American schools in the early 20th century — that is, the push to run schools more like factories, where the productivity of workers was measured, controlled and refined.

Callahan’s main argument was that the pressures on the education system to adopt Taylorism resulted neither in more refined ways to teach nor in better ways to learn, but rather, in an emphasis on cost cutting. Efficiency, he argued, “amounted to an analysis of the budget. … Decisions on what should be taught were not made on educational, but on financial grounds.”

Fifty years later, we remain obsessed with creating a more “efficient” educational system (although ironically, we object to schools based on that very “factory model”). Indeed, this might be one of the major promises that educational technologies make: to deliver a more efficient way to teach and learn, and a more efficient way to manage schooling.

Deciding What We Want From Education

Adaptive learning — computer-based instruction and assessment that allows each student to move at her or his pace — is perhaps the latest in a series of technologies that promise more ­efficient education. The efficiency here comes, in part, from the focus on the individual — personalization — instead of on an entire classroom of students.

But it’s worth noting that adaptive learning isn’t new. “Intelligent tutoring systems” have been under development for decades now. The term “intelligent tutoring” was coined in the 1980s; research into computer-assisted instruction dates to the 1960s; and programmed instruction predates the computer altogether, with Sidney Pressey’s and B. F. Skinner’s “teaching machines” of the 1920s and 1950s, respectively.

“Education must become more efficient,” Skinner insisted. “To this end, curricula must be revised and simplified, and textbooks and classroom techniques improved.”

Rarely do we ask what exactly “efficiency” in education or ed tech ­entails. Does it mean a reduction in ­errors? Faster learning? Reshaping the curriculum based on market demands? Does it mean cutting labor costs — larger classroom sizes, perhaps, or teachers replaced by machines?

We also often fail to ask why efficiency would be something we would value in education at all. Schools shouldn’t be factories. Students aren’t algorithms.

What happens if we prioritize efficiency in education? By doing so, are we simply upgrading the factory model of schooling with newer technologies? What happens to spontaneity and messiness? What happens to contemplation and curiosity?

There’s danger, I’d argue, in relying on teaching machines — on a push for more automation in education. We forget that we’re teaching humans."
audreywatters  automation  education  edtech  learning  children  humanism  humans  efficiency  2014  1962  raymondcallahan  management  taylorism  factoryschools  schools  industrialeducation  schooling  adaptivelearning  bfskinner  sidneypressey  computers  computing  technology  curiosity  messiness  spontaneity  unschooling  deschooling 
april 2014 by robertogreco
George Dyson: No Time Is There--- The Digital Universe and Why Things Appear To Be Speeding Up - The Long Now
"The digital big bang

When the digital universe began, in 1951 in New Jersey, it was just 5 kilobytes in size. "That's just half a second of MP3 audio now," said Dyson. The place was the Institute for Advanced Study, Princeton. The builder was engineer Julian Bigelow. The instigator was mathematician John von Neumann. The purpose was to design hydrogen bombs.

Bigelow had helped develop signal processing and feedback (cybernetics) with Norbert Wiener. Von Neumann was applying ideas from Alan Turing and Kurt Gödel, along with his own. They were inventing and/or gates, addresses, shift registers, rapid-access memory, stored programs, a serial architecture—all the basics of the modern computer world, all without thought of patents. While recuperating from brain surgery, Stanislaw Ulam invented the Monte Carlo method of analysis as a shortcut to understanding solitaire. Shortly Von Neumann's wife Klári was employing it to model the behavior of neutrons in a fission explosion. By 1953, Nils Barricelli was modeling life itself in the machine—virtual digital beings competed and evolved freely in their 5-kilobyte world.

In the few years they ran that machine, from 1951 to 1957, they worked on the most difficult problems of their time, five main problems that are on very different time scales—26 orders of magnitude in time—from the lifetime of a neutron in a bomb's chain reaction measured in billionths of a second, to the behavior of shock waves on the scale of seconds, to weather prediction on a scale of days, to biological evolution on the scale of centuries, to the evolution of stars and galaxies over billions of years. And our lives, measured in days and years, are right in the middle of the scale of time. I still haven't figured that out."

Julian Bigelow was frustrated that the serial, address-constrained, clock-driven architecture of computers became standard because it is so inefficient. He thought that templates (recognition devices) would work better than addresses. The machine he had built for von Neumann ran on sequences rather than a clock. In 1999 Bigelow told George Dyson, "Sequence is different from time. No time is there." That's why the digital world keeps accelerating in relation to our analog world, which is based on time, and why from the perspective of the computational world, our world keeps slowing down.

The acceleration is reflected in the self-replication of computers, Dyson noted: "By now five or six trillion transistors per second are being added to the digital universe, and they're all connected." Dyson is a kayak builder, emulating the wood-scarce Arctic natives to work with minimum frame inside a skin craft. But in the tropics, where there is a surplus of wood, natives make dugout canoes, formed by removing wood. "We're now surrounded by so much information," Dyson concluded, "we have to become dugout canoe builders. The buzzword of last year was 'big data.' Here's my definition of the situation: Big data is what happened when the cost of storing information became less than the cost of throwing it away."

--Stewart Brand"

[See also: http://blog.longnow.org/02014/04/04/george-dyson-seminar-flashback-no-time-is-there/ ]
data  longnow  georgedyson  computing  history  stewartbrand  2013  ai  artificialintelligence  time  julianbigelow 
april 2014 by robertogreco
Mary Huang :: portfolio
"With computational design there is the opportunity to not only create beautifully intricate forms, but to define a design according to its governing processes and user interactions. This project sought to mediate between the avant-garde and ready-to-wear, between individual users and a designer's vision. Could we use technology to democratize haute couture? Could we let people design their own dress, and still maintain a cohesive, recognizable design?

Computational couture captures this philosophy and applies it toward solving the persistent problem of standardized sizing in ready-to-wear. CONTINUUM is a concept for a web-based fashion label in which designs are user-generated using custom software and made to order to your personal measurements. Its seminal collection is a deconstruction of the classic little black dress. Software allows you to "draw" a dress and converts it into a 3D model, which is turned into a flat pattern that can be cut out of fabric and sewn into the dress. Not only can the physical dress be purchased through the label, but the cutting patterns are downloadable free of charge for those who would rather devote the time to making their own. With design encompassing a continuous user experience, we can inspire changing attitudes and behaviors of mass consumption."

[See also: http://www.continuumfashion.com/Ddress/ ]
processing  fashion  wearable  wearables  triangles  glvo  computing  maryhuang 
december 2013 by robertogreco
In Defense of Messiness: David Weinberger and the iPad Summit - EdTech Researcher - Education Week
[via: http://willrichardson.com/post/67746828029/the-limitations-of-the-ipad ]

"We were very lucky today to have David Weinberger give the opening address at our iPad Summit in Boston yesterday. We've started a tradition at the iPad Summit that our opening keynote speaker should know, basically, nothing about teaching with iPads. We don't want to lead our conversation with technology, we want to lead with big ideas about how the world is changing and how we can prepare people for that changing world.

Dave spoke drawing on research from his most recent book, Too Big To Know: How the Facts are not the Facts, Experts are not Experts, and the Smartest Person in the Room is the Room.

It's hard to summarize a set of complex ideas, but at the core of Dave's argument is the idea that our framing of "knowledge," the metaphysics of knowledge (pause: yes, we start our iPad Summit with discussions of the metaphysics of knowledge), is deeply intertwined with the technology we have used for centuries to collect and organize knowledge: the book. So we think of things that are known as those that are agreed upon and fixed--placed on a page that cannot be changed; we think of them as stopping places--places for chapters to end; we think of them as bounded--literally bounded in the pages of a book; we think of them as organized in a single taxonomy--because each library has to choose a single place for the physical location of each book. The limitations of atoms constrained our metaphysics of knowledge.

We then encoded knowledge into bits, and we began to discover a new metaphysics of knowledge. Knowledge is not bound, but networked. It is not agreed, but debated. It is not ordered, but messy.

A changing shape of knowledge demands that we look seriously at changes in educational practice. For many educators at the iPad Summit, the messiness that David sees as generative in the emerging shape of knowledge reflects the messiness that they see in their classrooms. As Holly Clark said in her presentation, "I used to want my administrators to drop in when my students were quiet, orderly, and working alone. See, we're learning! Now I want them to drop in when we are active, engaged, collaborative, loud, messy, and chaotic. See, we're learning!"

These linkages are exactly what we hope can happen when we start our conversations about teaching with technology by leading with our ambitions for our students rather than leading with the affordances of a device.

I want to engage David a little further on one point. When I invited David to speak, he said "I can come, but I have some real issues with iPads in education." We talked about it some, and I said, "Great, those sound like serious concerns. Air them. Help us confront them."

David warned us again this morning "I have one curmudgeonly old man slide against iPads," and Tom Daccord (EdTechTeacher co-founder) and I both said "Great." The iPad Summit is not an Apple fanboygirl event. At the very beginning, Apple's staff, people like Paul Facteau, were very clear that iPads were never meant to be computer replacements--that some things were much better done on laptops or computers. Any educator using a technology in their classroom should be having an open conversation about the limitations of their tools.

Tom then gave some opening remarks where he said something to the effect of "The iPad is not a repository of apps, but a portable, media creation device." If you talk to most EdTechTeacher staff, we'll tell you that with an iPad, you get a camera, microphone, connection to the Internet, scratchpad, and keyboard--and a few useful apps that let you use those things. (Apparently, there are all kinds of people madly trying to shove "content" on the iPad, but we're not that interested. For the most part, they've done a terrible job.)

Dave took the podium and said in his introductory remarks, "There is one slide that I already regret." He followed up with this blog post, No More Magic Knowledge [http://www.hyperorg.com/blogger/2013/11/14/2b2k-no-more-magic-knowledge/ ]:
I gave a talk at the EdTechTeacher iPad Summit this morning, and felt compelled to throw in an Angry Old Man slide about why iPads annoy me, especially as education devices. Here's my List of Grievances:
• Apple censors apps
• iPads are designed for consumers. [This is false for these educators, however. They are using iPad apps to enable creativity.]
• They are closed systems and thus lock users in
• Apps generally don't link out
That last point was the one that meant the most in the context of the talk, since I was stressing the social obligation we all have to add to the Commons of ideas, data, knowledge, arguments, discussion, etc.
I was sorry I brought the whole thing up, though. None of the points I raised is new, and this particular audience is using iPads in creative ways, to engage students, to let them explore in depth, to create, and to make learning mobile.

I, for one, was not sorry that Dave brought these issues up. There are real issues with our ability as educators to add to the Commons through iPads. It's hard to share what you are doing inside a walled garden. In fact, one of the central motivations for the iPad Summit is to bring educators together to share their ideas and to encourage them to take that extra step to share their practice with the wider world; it pains me to think of all of the wheels being reinvented in the zillions of schools that have bought iPads. We're going to have to hack the garden walls of the iPad to bring our ideas together to the Commons.

The issue of the "closedness" of iPads is also critical. Dave went on to say that one limitation of the iPad is that you can't view source from a browser. (It's not strictly true, but it's a nuisance of a hack--see here or here.) From Dave again:

"Even though very few of us ever do peek beneath the hood -- why would we? -- the fact that we know there's an openable hood changes things. It tells us that what we see on screen, no matter how slick, is the product of human hands. And that is the first lesson I'd like students to learn about knowledge: it often looks like something that's handed to us finished and perfect, but it's always something that we built together. And it's all the cooler because of that."

I'd go further than you can't view source: there is no command line. You can't get under the hood of the operating system, either. You can't unscrew the back. Now don't get me wrong, when you want to make a video, I'm very happy to declare that you won't need to update your codecs in order to get things to compress properly. Simplicity is good in some circumstances. But we are captive to the slickness that Dave describes. Let's talk about that.

A quick tangent: Educators come up to me all the time with concerns that students can't word process on an iPad--I have pretty much zero concern about this. Kids can write papers using Swype on a smartphone with a cracked glass. Just because old people can't type on digitized keyboards doesn't mean kids can't (and you probably haven't been teaching them touch-typing anyway).

I'm not concerned that kids can't learn to write English on an iPad, I'm concerned they can't learn to write Python. If you believe that learning to code is a vital skill for young people, then the iPad is not the device for you. The block programming languages basically don't work. There is no Terminal or Putty or iPython Notebook. To teach kids to code, they need a real computer. (If someone has a robust counter-argument to that assertion, I'm all ears.) We should be very, very clear that if we are putting all of our financial eggs in the iPad basket, there are real opportunities that we are foreclosing.
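For concreteness, the kind of beginner program at stake here—written in a text editor and run at a terminal, not assembled in an app—is a hypothetical example, not something from the post:

```python
# A first program of the sort a student writes at a real terminal:
# define a function, loop over a range, print the results.
def times_table(n: int, upto: int = 10) -> list[str]:
    """Return the times table for n as a list of formatted lines."""
    return [f"{n} x {i} = {n * i}" for i in range(1, upto + 1)]

for line in times_table(7, upto=5):
    print(line)
```

Trivial as it is, running and modifying a file like this requires exactly the things the author says the iPad lacks: a file system you can see, an interpreter you can invoke, a hood you can open.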

Some of the issues that Dave raises we can hack around. Some we can't. The iPad Summit, all technology-based professional development, needs to be a place where we talk about what technology can't do, along with what it can.

Dave's keynote about the power of open systems reminds us that knowledge is networked and messy. Our classrooms, and the technologies we use to support learning in our classrooms, should be the same. To the extent that the technologies we choose are closed and overly-neat, we should be talking about that.

Many thanks again to Dave for a provocative morning, and many thanks to the attendees of the iPad Summit for joining in and enriching the conversation."
justinreich  ipad  2013  ipadsummit  davidweinberger  messiness  learning  contructionism  howthingswork  edtech  computers  computing  coding  python  scratch  knowledge  fluidity  flux  tools  open  closed  walledgardens  cv  teaching  pedagogy  curriculum  tomdaccord  apple  ios  closedness  viewsource  web  internet  commons  paulfacteau  schools  education  mutability  plasticity 
november 2013 by robertogreco
Counting Sheep
"Counting Sheep: NZ Merino in an Internet of Things is a three-year research project (2011-2014) based in the School of Design, Victoria University of Wellington, New Zealand. Led by Dr Anne Galloway, our work explores the role that cultural studies and design research can play in supporting public engagement with the development and use of science and technology.

The Internet of Things is a vision for computing that uses a variety of wireless identification, location, and sensor technologies to collect information about people, places and things - and make it available via the internet. Today's farms generate and collect enormous amounts of data, and we're interested in what people can do with this information - as well as what we might do with related science and technology in the future.
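As a rough sketch of what "collecting farm data and doing something with it" can mean in code—with entirely hypothetical animal IDs, sensor names, and values, none of them from the Counting Sheep project:

```python
from statistics import mean

# Hypothetical sensor readings of the kind an Internet of Things
# setup on a farm might collect: (animal_id, sensor, value) tuples.
readings = [
    ("merino-007", "temperature_c", 39.1),
    ("merino-007", "temperature_c", 39.4),
    ("merino-012", "temperature_c", 38.8),
    ("merino-007", "gps_fix", 1.0),
]

def average_by_animal(rows, sensor):
    """Group readings for one sensor type and average them per animal."""
    grouped = {}
    for animal, kind, value in rows:
        if kind == sensor:
            grouped.setdefault(animal, []).append(value)
    return {animal: mean(vals) for animal, vals in grouped.items()}

print(average_by_animal(readings, "temperature_c"))
```

The interesting questions the project raises sit on top of exactly this kind of aggregation: who sees it, what inferences it licenses, and what futures it makes plausible.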

Over the past two years we've travelled around the country, visiting merino stations, going to A&P shows and shearing competitions, and spending time in offices and labs, talking with breeders, growers, shearers, wool handlers, scientists, industry representatives, government policy makers and others - all so that we could learn as much as possible about NZ merino. Then we took what we learned and we started to imagine possible uses for these technologies in the future production and consumption of merino sheep and products.

This website showcases our fictional scenarios and we want to know what you think!"

[See also: http://www.designculturelab.org/projects/counting-sheep-project-overview/
http://www.designculturelab.org/projects/counting-sheep-research-outputs/ ]
annegalloway  design  research  sheep  animals  merino  newzealand  speculativefiction  internetofthings  technology  science  computing  sensors  spimes  designfiction  countingsheep  boneknitter  permalamb  growyourownlamb  iot 
november 2013 by robertogreco
Identify Yourself
"At its core function, the Internet is a tool for the communication of information, whether factual or fictional. It has allowed us access to knowledge we would have otherwise never known, at a rate that we could have never achieved with printed materials. Each tool that we have developed to spread information has exponentially increased the speed at which it travels, leading to bursts of creativity and collaboration that have accelerated human development and accomplishment. The wired Internet at broadband speeds allows us to consume content so fast that any delay causes us to balk and whine. Wireless Internet made this information network portable and extended our range of knowledge beyond the boundaries of offices and libraries and into the world. Mobile devices have completely transformed our consumption of information, putting tiny computers in our pockets and letting us petition the wishing well of the infoverse.

Many people say this access has made us impatient, and I agree. But I also believe it reveals an innate hunger. We are now so dependent on access to knowledge at these rapid speeds that any lull in our consumption feels like a wasted moment. The currency of the information appears at all levels of society. From seeing new television shows to enjoying free, immediate access to new scientific publications that could impact your life’s work, this rapid transmission model has meaning and changes lives. We have access to information when we are waiting for an oil change and in line for coffee. While we can choose to consume web junk, as many often will, there is also a wealth of human understanding and opinions, academic texts, online courses, and library archives that can be accessed day and night, often for free."



While many seem to experience their Internet lives as a separate space of reality, I have always felt that the two were inextricable. I don’t go on the Internet; I am in the Internet and I am always online. I have extended myself into the machines I carry with me at all times. This space is continually shifting and I veer to adjust, applying myself to new media, continually gathering and recording data about myself, my relationships, my thoughts. I am an immaterial database of memory and hypertext, with invisible links in and out between the Internet and myself.

THE TEXT OBJECT
I would sit for as long as I could and devour information. It was not uncommon for me to devour a book in a single day, limiting all bodily movement except for page-turning, absolutely rapt by whatever I was reading. I was honored to be literate and sure that my dedication to knowledge would lead to great things. I was addicted to the consumption and processing of that information. It frustrated me that I could not read faster and process more. The form of the book provided me structured, linear access to information, with the reward for my attention being a complete and coherent story or idea.

Access to computers and the Internet completely changed the way that I consumed information and organized ideas in my head. I saw information stacked on top of itself in simultaneity, no longer confined to spatiotemporal dimensions of the book. This information was editable, and I could copy, paste, and cut text and images from one place to the next, squirreling away bits that felt important to me. I suddenly understood how much of myself I was finding through digital information."



"There is a system, and there are people within this system. I am only one of them, but I value deeply the opportunities this space grants me, and the wealth contained within it. We must fight to keep the Internet safe and open. Though it has already lost the magical freedom and democracy that existed in the days of the early web, we must continue to put our best minds to work using this extensive network of machines to aid us. Technology gives us so much, and we put so much of ourselves back into it, but we must always remember that we made the web and it will always be tied to us as humans, with our vast range of beauty and ugliness.

I only know my stories, my perspective, but it feels important to take note during this new technical Renaissance, to try and capture the spirit of this shift. I am vastly inspired by the capabilities of my tiny iPhone, my laptop, and all the software contained therein. This feeling is empowerment. The empowerment to learn, to create, and to communicate is something I’ve always felt is at the core of art-making, to be able to translate a complex idea or feeling into some contained or open form. Even the most simple or ethereal works have some form; the body, the image, the object. The file, the machine, the URL, these are all just new vessels for this spirit to be contained.

The files are beautiful, but I move to nominate the Internet as “sublime,” because when I stare into the glass precipice of my screen, I am in awe of the vastness contained within it, the micro and macro, simultaneously hard and technical and soft and human. Most importantly, it feels alive—with constant newness and deepening history, with endless activity and variety. May we keep this spirit intact and continue to explore new vessels into which we can pour ourselves, and reform our identities, shifting into a new world of Internet natives."

[Available as book: http://www.lulu.com/shop/krystal-south/identify-yourself/paperback/product-21189499.html ]
[About page: http://idyrself.com/about.html ]
internet  online  krystalsouth  howweread  howwewrite  atemporality  simultaneity  text  books  internetasliterature  reading  writing  computing  impatience  information  learning  unbook  copypasteculture  mutability  change  sharing  editing  levmanovich  computers  software  technology  sorting  files  taxonomy  instagram  flickr  tagging  folksonomy  facebook  presence  identity  web2.0  language  communication  internetasfavoritebook 
november 2013 by robertogreco