robertogreco + singularity   45

Design Thinking is Kind of Like Syphilis — It’s Contagious and Rots Your Brains
"Miller never bothers to define all the modes, and we will consider them more below. But for now, we should just note that the entire model is based on design consulting: You try to understand the client’s problem, what he or she wants or needs. You sharpen that problem so it’s easier to solve. You think of ways to solve it. You try those solutions out to see if they work. And then once you’ve settled on something, you ask your client for feedback. By the end, you’ve created a “solution,” which is also apparently an “innovation.”

Miller also never bothers to define the liberal arts. The closest he comes is to say they are ways of “thinking that all students should be exposed to because it enhances their understanding of everything else.” Nor does he make clear what he means by the idea that Design Thinking is or could be the new liberal arts. Is it but one new art to be added to the traditional liberal arts, such as grammar, logic, rhetoric, math, music, and science? Or does Miller think, like Hennessy and Kelly, that all of education should be rebuilt around the DTs? Who knows.

Miller is most impressed with Design Thinking’s Empathize Mode. He writes lyrically, “Human-centered design redescribes the classical aim of education as the care and tending of the soul; its focus on empathy follows directly from Rousseau’s stress on compassion as a social virtue.” Beautiful. Interesting.

But what are we really talking about here? The d.school’s An Introduction to Design Thinking PROCESS GUIDE says, “The Empathize Mode is the work you do to understand people, within the context of your design challenge.” We can use language like “empathy” to dress things up, but this is Business 101. Listen to your client; find out what he or she wants or needs.

Miller calls the Empathize Mode “ethnography,” which is deeply uncharitable — and probably offensive — to cultural anthropologists who spend their entire lives learning how to observe other people. Few, if any, anthropologists would sign onto the idea that some amateurs at a d.school “boot camp,” strolling around Stanford and gawking at strangers, constitutes ethnography. The Empathize Mode of Design Thinking is roughly as ethnographic as a marketing focus group or a crew of sleazoid consultants trying to feel out and up their clients’ desires.

What Miller, Kelly, and Hennessy are asking us to imagine is that design consulting is or could be a model for retooling all of education, that it has some method for “producing reliably innovative results in any field.” They believe that we should use Design Thinking to reform education by treating students as customers, or clients, and making sure our customers are getting what they want. And they assert that Design Thinking should be a central part of what students learn, so that graduates come to approach social reality through the model of design consulting. In other words, we should view all of society as if we are in the design consulting business."



In a recent episode of the Design Observer podcast, Jen added further thoughts on Design Thinking. “The marketing of design thinking is completely bullshit. It’s even getting worse and worse now that [Stanford has] three-day boot camps that offer certified programs — as if anyone who enrolled in these programs can become a designer and think like a designer and work like a designer.” She also resists the idea that any single methodology “can deal with any kind of situation — not to mention the very complex society that we’re in today.”

In an informal survey I conducted with individuals who either teach at or were trained at the top art, architecture, and design schools in the USA, most respondents said that they and their colleagues do not use the term Design Thinking. Most of the people pushing the DTs in higher education are at second- and third-tier universities and, ironically, aren’t innovating but rather emulating Stanford. In a few cases, respondents said they did know a colleague or two who was saying “Design Thinking” frequently, but in every case, the individuals were using the DTs either to increase their turf within the university or to extract resources from college administrators who are often willing to throw money at anything that smacks of “innovation.”

Moreover, individuals working in art, architecture, and design schools tend to be quite critical of existing DT programs. Reportedly, some schools are creating Design Thinking tracks for unpromising students who couldn’t hack it in traditional architecture or design programs — DT as “design lite.” The individuals I talked to also had strong reservations about the products coming out of Design Thinking classes. A traditional project in DT classes involves undergraduate students leading “multidisciplinary” or “transdisciplinary” teams drawing on faculty expertise around campus to solve some problem of interest to the students. The students are not experts in anything, however, and the projects often take the form of, as one person put it, “kids trying to save the world.”

One architecture professor I interviewed had been asked to sit in on a Design Thinking course’s critique, a tradition at architecture and design schools where outside experts are brought in to offer (often tough) feedback on student projects. The professor watched a student explain her design: a technology that was meant to connect mothers with their premature babies whom they cannot touch directly. The professor wondered, what is the message about learning that students get from such projects? “I guess the idea is that this work empowers the students to believe they are applying their design skills,” the professor told me. “But I couldn’t critique it as design because there was nothing to it as design. So what’s left? Is good will enough?”

As others put it to me, Design Thinking gives students an unrealistic idea of design and the work that goes into creating positive change. Upending that old dictum “knowledge is power,” Design Thinkers give their students power without knowledge, “creative confidence” without actual capabilities.

It’s also an elitist, Great White Hope vision of change that literally asks students to imagine themselves entering a situation to solve other people’s problems. Among other things, this situation often leads to a significant mismatch between designers’ visions — even after practicing “empathy” — and users’ actual needs. Perhaps the most famous example is the PlayPump, a piece of merry-go-round equipment that would pump water when children used it. Designers envisioned that the PlayPump would provide water to thousands of African communities. Only the kids didn’t show up, in part because there was no local cultural tradition of playing with merry-go-rounds.

Unsurprisingly, Design Thinking-types were enthusiastic about the PlayPump. Tom Hulme, the design director at IDEO’s London office, created a webpage called OpenIDEO, where users could share “open source innovation.” Hulme explained that he found himself asking, “What would IDEO look like on steroids? [We might ask the same question about crack cocaine or PCP.] What would it look like when you invite everybody into everything? I set myself the challenge of . . . radical open-innovation collaboration.” OpenIDEO community users were enthusiastic about the PlayPump — even a year after the system had been debunked, suggesting inviting everyone to everything gets you people who don’t do research. One OpenIDEO user enthused that the PlayPump highlighted how “fun can be combined with real needs.”

Thom Moran, an Assistant Professor of Architecture at the University of Michigan, told me that Design Thinking brought “a whole set of values about what design’s supposed to look like,” including that everything is supposed to be “fun” and “play,” and that the focus is less on “what would work.” Moran went on, “The disappointing part for me is that I really do believe that architecture, art, and design should be thought of as being a part of the liberal arts. They provide a unique skill set for looking at and engaging the world, and being critical of it.” Like others I talked to, Moran doesn’t see this kind of critical thinking in the popular form of Design Thinking, which tends to ignore politics, environmental issues, and global economic problems.

Moran holds up the Swiffer — the sweeper-mop with disposable covers designed by an IDEO-clone design consultancy, Continuum — as a good example of what Design Thinking is all about. “It’s design as marketing,” he said. “It’s about looking for and exploiting a market niche. It’s not really about a new and better world. It’s about exquisitely calibrating a product to a market niche that is underexploited.” The Swiffer involves a slight change in old technologies, and it is wasteful. Others made this same connection between Design Thinking and marketing. One architect said that Design Thinking “really belongs in business schools, where they teach marketing and other forms of moral depravity.”

“That’s what’s most annoying,” Moran went on. “I fundamentally believe in this stuff as a model of education. But it’s business consultants who give TED Talks who are out there selling it. It’s all anti-intellectual. That’s the problem. Architecture and design are profoundly intellectual. But for these people, it’s not a form of critical thought; it’s a form of salesmanship.”

Here’s my one caveat: it could be true that the DTs are a good way to teach design or business. I wouldn’t know. I am not a designer (or business school professor). I am struck, however, by how many designers, including Natasha Jen and Thom Moran, believe that the DTs are nonsense. In the end, I will leave this discussion up to designers. It’s their show. My concern is a different one — namely that… [more]
designthinking  innovation  ideas  2017  design  leevinsel  maintenance  repair  ideation  problemsolving  davidedgerton  willthomas  billburnett  daveevans  stanford  d.school  natashajen  herbertsimon  robertmckim  ideo  singularity  singularityuniversity  d.tech  education  schools  teaching  liberalarts  petermiller  esaleninstitute  newage  hassoplattner  johnhennessey  davidkelly  jimjones  empathy  ethnography  consulting  business  bullshit  marketing  snakeoil  criticism  criticalthinking  highereducation  highered  thomamoran  tedtalks  openideo  playpump  designimperialism  whitesaviors  post-its  transdisciplinary  multidisciplinary  crossdisciplinary  art  architecture  complexity  simplicity  methodology  process  emptiness  universities  colleges  philipmirowski  entrepreneurship  lawrencebusch  elizabethpoppberman  nathanielcomfort  margaretbrindle  peterstearns  christophermckenna  hucksterism  self-promotion  hype  georgeorwell  nathanrosenberg  davidmowery  stevenklepper  davidhounshell  patrickmccray  marianamazzucato  andréspicer  humanitariandesign  themaintainers  ma 
december 2017 by robertogreco
Will Self: Are humans evolving beyond the need to tell stories? | Books | The Guardian
"Neuroscientists who insist technology is changing our brains may have it wrong. What if we are switching from books to digital entertainment because of a change in our need to communicate?"



"A few years ago I gave a lecture in Oxford that was reprinted in the Guardian under the heading: “The novel is dead (this time it’s for real)”. In it I argued that the novel was losing its cultural centrality due to the digitisation of print: we are entering a new era, one with a radically different form of knowledge technology, and while those of us who have what Marshal McLuhan termed “Gutenberg minds” may find it hard to comprehend – such was our sense of the solidity of the literary world – without the necessity for the physical book itself, there’s no clear requirement for the art forms it gave rise to. I never actually argued that the novel was dead, nor that narrative itself was imperilled, yet whenever I discuss these matters with bookish folk they all exclaim: “But we need stories – people will always need stories.” As if that were an end to the matter.

Non-coincidentally, in line with this shift from print to digital there’s been an increase in the number of scientific studies of narrative forms and our cognitive responses to them. There’s a nice symmetry here: just as the technology arrives to convert the actual into the virtual, so other technologies arise, making it possible for us to look inside the brain and see its actual response to the virtual worlds we fabulate and confabulate. In truth, I find much of this research – which marries arty anxiety with techno-assuredness – to be self-serving, reflecting an ability to win the grants available for modish interdisciplinary studies, rather than some new physical paradigm with which to explain highly complex mental phenomena. Really, neuroscience has taken on the sexy mantle once draped round the shoulders of genetics. A few years ago, each day seemed to bring forth a new gene for this or that. Such “discoveries” rested on a very simplistic view of how the DNA of the human genotype is expressed in us poor, individual phenotypes – and I suspect many of the current discoveries, which link alterations in our highly plastic brains to cognitive functions we can observe using sophisticated equipment, will prove to be equally ill-founded.

The neuroscientist Susan Greenfield has been prominent in arguing that our new digital lives are profoundly altering the structure of our brains. This is undoubtedly the case – but then all human activities impact upon the individual brain as they’re happening; this by no means implies a permanent alteration, let alone a heritable one. After all, so far as we can tell the gross neural anatomy of the human has remained unchanged for hundreds of millennia, while the age of bi-directional digital media only properly dates – in my view – from the inception of wireless broadband in the early 2000s, hardly enough time for natural selection to get to work on the adaptive advantages of … tweeting. Nevertheless, pioneering studies have long since shown that licensed London cab drivers, who’ve completed the exhaustive “Knowledge” (which consists of memorising every street and notable building within a six mile radius of Charing Cross), have considerably enlarged posterior hippocampi.

This is the part of the brain concerned with way-finding, but it’s also strongly implicated in memory formation; neuroscientists are now discovering that at the cognitive level all three abilities – memory, location, and narration – are intimately bound up. This, too, is hardly surprising: key for humans, throughout their long pre-history as hunter-gatherers, has been the ability to find food, remember where food is and tell the others about it. It’s strange, of course, to think of Pride and Prejudice or Ulysses as simply elaborations upon our biologically determined inclination to give people directions – but then it’s perhaps stranger still to realise that sustained use of satellite navigation, combined with absorbing all our narrative requirements in pictorial rather than written form, may transform us into miserable and disoriented amnesiacs.

When he lectured on literature in the 1950s, Vladimir Nabokov would draw a map on the blackboard at the beginning of each session, depicting, for example, the floor plan of Austen’s Mansfield Park, or the “two ways” of Proust’s Combray. What Nabokov seems to have understood intuitively is what neuroscience is now proving: reading fiction enables a deeply memorable engagement with our sense of space and place. What the master was perhaps less aware of – because, as yet, this phenomenon was inchoate – was that throughout the 20th century the editing techniques employed in Hollywood films were being increasingly refined. This is the so-called “tyranny of film”: editing methods that compel our attention, rather than leaving us free to absorb the narrative in our own way. Anyone now in middle age will have an intuitive understanding of this: shots are shorter nowadays, and almost all transitions are effected by crosscutting, whereby two ongoing scenes are intercut in order to force upon the viewer the idea of their synchrony. It’s in large part this tyranny that makes contemporary films something of a headache for older viewers, to whom they can seem like a hypnotic swirl of action.

It will come as no surprise to Gutenberg minds to learn that reading is a better means of forming memory than watching films, as is listening to afternoon drama on Radio 4. This is the so-called “visualisation hypothesis” that proposes that people – and children in particular – find it harder not only to remember film as against spoken or written narratives, but also to come up with novel responses to them, because the amount of information they’re given, together with its determinate nature, forecloses imaginative response.

Almost all contemporary parents – and especially those of us who class themselves as “readers” – have engaged in the Great Battle of Screen: attempting to limit our children’s consumption of films, videos, computer games and phone-based social media. We feel intuitively that it can’t be doing our kids any good – they seem mentally distracted as well as physically fidgety: unable to concentrate as they often look from one handheld screen to a second freestanding one, alternating between tweezering some images on a touchscreen and manipulating others using a remote control. Far from admonishing my younger children to “read the classics” – an utterly forlorn hope – I often find myself simply wishing they’d put their phones down long enough to have their attention compelled by the film we’re watching.

If we take seriously the conclusions of these recent neuroscientific studies, one fact is indisputable: whatever the figures for books sales (either in print or digital form), reading for pleasure has been in serious decline for over a decade. That this form of narrative absorption (if you’ll forgive the coinage) is closely correlated with high attainment and wellbeing may tell us nothing about the underlying causation, but the studies do demonstrate that the suite of cognitive aptitudes needed to decipher text and turn it into living, breathing, visible and tangible worlds seem to wither away once we stop turning the pages and start goggling at virtual tales.

Of course, the sidelining of reading narrative (and along with it the semi-retirement of all those narrative forms we love) is small potatoes compared with the loss of our capacity for episodic memory: would we be quite so quick to post those fantastic holiday photographs on Facebook if we knew that in so doing we’d imperil our ability to recall unaided our walk along the perfect crescent of sand, and our first ecstatic kiss? You might’ve thought that as a novelist who depends on fully attuned Gutenberg minds to read his increasingly complex and confusing texts I’d be dismayed by this craven new couch-based world; and, as a novelist, I am.

I began writing my books on a manual typewriter at around the same time wireless broadband became ubiquitous, sensing it was inimical not only to the act of writing, but that of reading as well: a novel should be a self-contained and self-explanatory world (at least, that’s how the form has evolved), and it needs to be created in the same cognitive mode as it’s consumed: the writer hunkering down into his own episodic memories, and using his own canonical knowledge, while imagining all the things he’s describing, rather than Googling them to see what someone else thinks they look like. I also sense the decline in committed reading among the young that these studies claim: true, the number of those who’ve ever been inclined “to get up in the morning in the fullness of youth”, as Nietzsche so eloquently put it, “and open a book” has always been small; but then it’s worth recalling the sting in the tail of his remark: “now that’s what I call vicious”.

And there is something vicious about all that book learning, especially when it had to be done by rote. There’s something vicious as well about the baby boomer generation, which, not content to dominate the cultural landscape, also demands that everyone younger than us survey it in the same way. For the past five years I’ve been working on a trilogy of novels that aim to map the connections between technological change, warfare and human psychopathology, so obviously I’m attempting to respond to the zeitgeist using this increasingly obsolete art form. My view is that we’re deluded if we think new technologies come into existence because of clearly defined human objectives – let alone benevolent ones – and it’s this that should shape our response to them. No, the history of the 20th century – and now the 21st – is replete with examples of technologies that were developed purely in order to facilitate the killing of people at … [more]
willself  communication  digital  writing  howwewrite  entertainment  books  socialmedia  neuroscience  2016  marshallmcluhan  gutenbergminds  print  change  singularity  videogames  gaming  games  poetry  novels  susangreenfield  rote  rotelearning  twitter  knowledge  education  brain  wayfinding  memory  location  narration  navigation  vladimirnabokov  proust  janeausten  film  video  attention  editing  reading  howweread  visualizationhypothesis  visualization  text  imagery  images  cognition  literacy  multiliteracies  memories  nietzsche  booklearning  technology  mobile  phones  mentalillness  ptsd  humans  humanity  digitalmedia  richardbrautigan  narrative  storytelling 
november 2016 by robertogreco
Web Design - The First 100 Years
"Today I hope to persuade you that the same thing that happened to aviation is happening with the Internet. Here we are, fifty years into the computer revolution, at what feels like our moment of greatest progress. The outlines of the future are clear, and oh boy is it futuristic.

But we're running into physical and economic barriers that aren't worth crossing.

We're starting to see that putting everything online has real and troubling social costs.

And the devices we use are becoming 'good enough', to the point where we can focus on making them cheaper, more efficient, and accessible to everyone.

So despite appearances, despite the feeling that things are accelerating and changing faster than ever, I want to make the shocking prediction that the Internet of 2060 is going to look recognizably the same as the Internet today.

Unless we screw it up.

And I want to convince you that this is the best possible news for you as designers, and for us as people."



"So while Moore's Law still technically holds—the number of transistors on a chip keeps increasing—its spirit is broken. Computers don't necessarily get faster with time. In fact, they're getting slower!

This is because we're moving from desktops to laptops, and from laptops to smartphones. Some people are threatening to move us to wristwatches.

In terms of capability, these devices are a step into the past. Compared to their desktop brethren, they have limited memory, weak processors, and barely adequate storage.

And nobody cares, because the advantages of having a portable, lightweight connected device are so great. And for the purposes of taking pictures, making calls, and surfing the internet, they've crossed the threshold of 'good enough'.

What people want from computers now is better displays, better battery life and above all, a better Internet connection.

Something similar happened with storage, where the growth rate was even faster than Moore's Law. I remember the state-of-the-art 1MB hard drive in our computer room in high school. It cost a thousand dollars.

Here's a photo of a multi-megabyte hard drive from the seventies. I like to think that the guy in the picture didn't have to put on the bunny suit, it was just what he liked to wear.

Modern hard drives are a hundred times smaller, with a hundred times the capacity, and they cost a pittance. Seagate recently released an 8TB consumer hard drive.

But again, we've chosen to go backwards by moving to solid state storage, like you find in smartphones and newer laptops. Flash storage sacrifices capacity for speed, efficiency and durability.

Or else we put our data in 'the cloud', which has vast capacity but is orders of magnitude slower.

These are the victories of good enough. This stuff is fast enough.

Intel could probably build a 20 GHz processor, just like Boeing can make a Mach 3 airliner. But they won't. There's a corollary to Moore's law, that every time you double the number of transistors, your production costs go up. Every two years, Intel has to build a completely new factory and production line for this stuff. And the industry is turning away from super high performance, because most people don't need it.

The hardware is still improving, but it's improving along other dimensions, ones where we are already up against hard physical limits and can't use the trick of miniaturization that won us all that exponential growth.

Battery life, for example. The limits on energy density are much more severe than on processor speed. And it's really hard to make progress. So far our advances have come from making processors more efficient, not from any breakthrough in battery chemistry.

Another limit that doesn't grow exponentially is our ability to move information. There's no point in having an 8 TB hard drive if you're trying to fill it over an AT&T network. Data constraints hit us on multiple levels. There are limits on how fast cores can talk to memory, how fast the computer can talk to its peripherals, and above all how quickly computers can talk to the Internet. We can store incredible amounts of information, but we can't really move it around.

So the world of the near future is one of power constrained devices in a bandwidth-constrained environment. It's very different from the recent past, where hardware performance went up like clockwork, with more storage and faster CPUs every year.

And as designers, you should be jumping up and down with relief, because hard constraints are the midwife to good design. The past couple of decades have left us with what I call an exponential hangover.

Our industry is in complete denial that the exponential sleigh ride is over. Please, we'll do anything! Optical computing, quantum computers, whatever it takes. We'll switch from silicon to whatever you want. Just don't take our toys away.

But all this exponential growth has given us terrible habits. One of them is to discount the present.

When things are doubling, the only sane place to be is at the cutting edge. By definition, exponential growth means the thing that comes next will be equal in importance to everything that came before. So if you're not working on the next big thing, you're nothing.



A further symptom of our exponential hangover is bloat. As soon as a system shows signs of performance, developers will add enough abstraction to make it borderline unusable. Software forever remains at the limits of what people will put up with. Developers and designers together create overweight systems in hopes that the hardware will catch up in time and cover their mistakes.

We complained for years that browsers couldn't do layout and javascript consistently. As soon as that got fixed, we got busy writing libraries that reimplemented the browser within itself, only slower.

It's 2014, and consider one hot blogging site, Medium. On a late-model computer it takes me ten seconds for a Medium page (which is literally a formatted text file) to load and render. This experience was faster in the sixties.

The web is full of these abuses, extravagant animations and so on, forever a step ahead of the hardware, waiting for it to catch up.

This exponential hangover leads to a feeling of exponential despair.

What's the point of pouring real effort into something that is going to disappear or transform in just a few months? The restless sense of excitement we feel that something new may be around the corner also brings with it a hopelessness about whatever we are working on now, and a dread that we are missing out on the next big thing.

The other part of our exponential hangover is how we build our businesses. The cult of growth denies the idea that you can build anything useful or helpful unless you're prepared to bring it to so-called "Internet scale". There's no point in opening a lemonade stand unless you're prepared to take on PepsiCo.

I always thought that things should go the other way. Once you remove the barriers of distance, there's room for all sorts of crazy niche products to find a little market online. People can eke out a living that would not be possible in the physical world. Venture capital has its place, as a useful way to fund long-shot projects, but not everything fits in that mold.

The cult of growth has led us to a sterile, centralized web. And having burned through all the easy ideas within our industry, we're convinced that it's our manifest destiny to start disrupting everyone else.

I think it's time to ask ourselves a very designy question: "What is the web actually for?"

I will argue that there are three competing visions of the web right now. The one we settle on will determine whether the idiosyncratic, fun Internet of today can survive.



Vision 1: CONNECT KNOWLEDGE, PEOPLE, AND CATS.

This is the correct vision.



Vision 2: FIX THE WORLD WITH SOFTWARE

This is the prevailing vision in Silicon Valley.



Vision 3: BECOME AS GODS, IMMORTAL CREATURES OF PURE ENERGY LIVING IN A CRYSTALLINE PARADISE OF OUR OWN CONSTRUCTION

This is the insane vision. I'm a little embarrassed to talk about it, because it's so stupid. But circumstances compel me.



There's a William Gibson quote that Tim O'Reilly likes to repeat: "the future is here, it's just not evenly distributed yet."

O'Reilly takes this to mean that if we surround ourselves with the right people, it can give us a sneak peek at coming attractions.

I like to interpret this quote differently, as a call to action. Rather than waiting passively for technology to change the world, let's see how much we can do with what we already have.

Let's reclaim the web from technologists who tell us that the future they've imagined is inevitable, and that our role in it is as consumers.

The Web belongs to us all, and those of us in this room are going to spend the rest of our lives working there. So we need to make it our home.

We live in a world now where not millions but billions of people work in rice fields, textile factories, where children grow up in appalling poverty. Of those billions, how many are the greatest minds of our time? How many deserve better than they get? What if instead of dreaming about changing the world with tomorrow's technology, we used today's technology and let the world change us? Why do we need to obsess on artificial intelligence, when we're wasting so much natural intelligence?


When I talk about a hundred years of web design, I mean it as a challenge. There's no law that says that things are guaranteed to keep getting better.

The web we have right now is beautiful. It shatters the tyranny of distance. It opens the libraries of the world to you. It gives you a way to bear witness to people half a world away, in your own words. It is full of cats. We built it by accident, yet already we're taking it for granted. We should fight to keep it!"
technology  web  webdesign  internet  culture  design  history  aviation  airplanes  planes  2014  constraints  growth  singularity  scale  webdev  siliconvalley  technosolutionism  boeing  intel  microsoft  cloud  raykurzweil  elonmusk  williamgibson  inequality  mooreslaw  timoreilly  software  bloat  progress  present  future  manifestdestiny 
july 2015 by robertogreco
Episode Eighty Six: Solid 2 of 2; Requests - GOV.UK 2018; Next
"Today, reading LinkedIn recommendations as they came in felt like reading eulogies. Apart from me not quite being dead. Not yet, at least. Or, I was dead and I hadn't realised it yet. It doesn't matter, anyway: all the recommendations from people I've enjoyed working with over the past three years just feel, unfortunately, like double-edged knives - ultimately good but only really readable with a twist.

Right now is a bad time, one of those terrible times when it doesn't even really matter that one of my good friends has pulled me aside, insisted that I have something to eat and sat patiently with me in a pizza joint while I stare off into space and mumble. It doesn't matter that he's great and doing these things for me and telling me that this too will pass: I am hearing all of the words that he's saying, the sounds he's making that make all the little bits of air vibrate and hit my ear and undergo some sort of magic transformation as they get understood in my brain. But they don't connect. Understanding is different from feeling. And right now, I'm feeling useless and broken and disconnected and above all, sad. But I can't feel those things. I have meetings to go to. Hustle to hust. Against what felt at times like the relentless optimism of an O'Reilly conference I had to finally hide away for a while, behind a Diet Coke and a slice of cheesecake, because dealing with that much social interaction was just far too draining.

And so I'm hiding again tonight, instead of out with friends, because it's just too hard to smile and pretend that everything's OK when it's demonstrably not."



"Over the past couple of days at Solid it's become almost painfully apparent that the Valley, in broad terms, is suffering from a chronic lack of empathy in terms of how it both sees and deals with the rest of the world, not just in communicating what it's doing and what it's excited about, but also in its acts. Sometimes these are genuine gaffes - mistakes that do not betray a deeper level of consideration, thinking or strategy. Other times, they *are* genuine, and they betray at the very least a naivety as to consequence or second-order impact (and I'm prepared to accept that without at least a certain level of naivety or lack of consideration for impact we'd find it pretty hard as a species to ever take advantage of any technological advance), but let me instead perhaps point to a potential parallel. 

There are a bunch of people worried about what might happen if, or when, we finally get around to a sort of singularity event and we have to deal with a genuine superhuman artificial intelligence that can think (and act) rings around us, never mind improving its ability at a rate greater than O(n). 

One of the reasons to be afraid of such a strong AI was explained by Eliezer Yudkowsky:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

And here's how the rest of the world, I think, can unfairly perceive Silicon Valley: Silicon Valley doesn't care about humans, really. Silicon Valley loves solving problems. It doesn't hate you and it doesn't love you, but you do things that it can use for something else. Right now, those things include things-that-humans-are-good-at, like content generation and pointing at things. Right now, those things include things like getting together and making things. But solving problems is more fun than looking after people, and sometimes solving problems can be rationalised away as looking after people because hey, now that $20bn worth of manufacturing involved in making planes has gone away, people can go do stuff that they want to do, instead of having to make planes!

Would that it were that easy.

So anyway. I'm thinking about the Internet of Things and how no one's done a good job of branding it or explaining it or communicating it to Everyone Else. Because that needs doing.

--

As ever, thanks for the notes. Keep them coming in. If you haven't said hi already, please do and let me know where you heard about my newsletter. And if you like the newsletter, please consider telling some friends about it."
danhon  2014  siliconvalley  ai  empathy  problemsolving  society  californianideology  unemplyment  capitalism  depression  elizeryudkowsky  humans  singularity 
may 2014 by robertogreco
Maciej Ceglowski - Barely succeed! It's easier! - YouTube
"We live in a remarkable time when small teams (or even lone programmers) can successfully compete against internet giants. But while the last few years have seen an explosion of product ideas, there has been far less innovation in how to actually build a business. Silicon Valley is stuck in an outdated 'grow or die' mentality that overvalues risk, while investors dismiss sustainable, interesting projects for being too practical. So who needs investors anyway?

I'll talk about some alternative definitions of success that are more achievable (and more fun!) than the Silicon Valley casino. It turns out that staying small offers some surprising advantages, not just in the day-to-day experience of work, but in marketing and getting customers to love your project. Best of all, there's plenty more room at the bottom.

If your goal is to do meaningful work you love, you may be much closer to realizing your dreams than you think."
via:lukeneff  maciejceglowski  2013  startups  pinboard  culture  atalhualpa  larrywall  perl  coding  slow  small  success  community  communities  diversity  growth  sustainability  venturecapital  technology  tonyrobbins  timferris  raykurzweil  singularity  humanism  laziness  idleness  wealth  motivation  siliconvalley  money  imperialism  corneliusvanderbilt  meaning  incubators  stevejobs  stevewozniak  empirebuilders  makers  fundraising  closedloops  viscouscircles  labor  paulgraham  ycombinator  gender  publishing  hits  recordingindustry  business  lavabit  mistakes  duckduckgo  zootool  instapaper  newsblur  metafilter  minecraft  ravelry  4chan  backblaze  prgmr.com  conscience  growstuff  parentmeetings  lifestylebusinesses  authenticity  googlereader  yahoopipes  voice  longtail  fanfiction  internet  web  online  powerofculture  counterculture  transcontextualism  maciejcegłowski  transcontextualization 
march 2014 by robertogreco
Omniorthogonal: Hostile AI: You’re soaking in it!
"Corporations are at least somewhat constrained by the need to actually provide some service that is useful to people. Exxon provides energy, McDonald’s provides food, etc. The exception to this seems to be the financial industry. These institutions consume vast amounts of wealth and intelligence to essentially no human end. Of all human institutions, these seem the most parasitical and dangerous. Because of their ability to extract wealth, they are also siphoning off great amounts of human energy and intelligence — they have their own parallel universe of high-speed technology, for instance.

The financial system as a whole functions as a hostile AI. It has its own form of intelligence, it has interests that are distant or hostile to human goals. It is quite artificial, and quite intelligent in an alien sort of way. While it is not autonomous in the way we envision killer robots or Skynet, it is effectively autonomous of human control, which makes it just as dangerous."

[via and more: http://mini.quietbabylon.com/post/44276219648/the-singularity-already-happened-we-got-corporations ]
ai  singularity  corporations  corporatism  economics  finance  parasitism  2013 
march 2013 by robertogreco
Douglas Rushkoff's Present Shock: The End Of Time Is Not The End Of The World - Forbes
Narrative Collapse… In remix culture and contemporary activism, he sees the potential for us to seize the narrative frame and use it in new ways to invent innovative story forms and flexible agendas.

Digiphrenia… Knowing when to be in “the now,” and when to insulate yourself from it can help you reclaim control of your time and attention.

Overwinding… The “shock” part of future shock really comes from how much time we have “springloaded” into the present. …But we can also use this fact in more constructive ways to “springload” time into things, like the example Rushkoff cites of the fully functional “pop-up” hospital that Israel sent to Japan after the Tsunami.

Fractalnoia… Computers, operating out of human time, can in fact discern patterns in that noise, but it is up to us humans to put those patterns in the correct context.

Rushkoff suggests that young people have reacted to the loss of storytellers by realizing they have to become the storyteller."
present  future  singularity  apocalypto  context  patternrecognition  computers  computing  storytelling  linearthinking  linearity  narrativecollapse  digiphrenia  overwinding  fractalnoia  time  presentshock  2012  douglasrushkoff  linear 
december 2012 by robertogreco
The Singularity is not coming - Cognitive Social Web - A better web, for a better world.
"I would like to simply argue that scientific progress is in fact linear, and this despite the capitalization of past results into current research (“accelerating returns”), and despite an exponentially increasing population of scientists and engineers working on advancing it (resource explosion). And since I don’t want to argue in the realm of opinion, I am going to propose a simple, convincing mathematical model of the progress of science. Using the same model, I’ll point out that a hypothetical self-improving AI would actually see its own research progress and intelligence stagnate soon enough, rather than explode —unless we provide it with exponentially increasing computing resources, in which case it may do linear progress (or even better, given a fast enough exponential rate of resource increase). … Intelligence is just a skill, more precisely a meta-skill that defines your ability to get new skills. But imagination is a fucking superpower. Do not rely solely on your intelligence and hard work to make an impact on this world, or even luck, it’s not going to work. After all the total quantity of intelligence and hard work available around is millionfold what you can provide —you’re just a drop of water in the ocean. Rather use your imagination, the one thing that makes you a beautiful unique snowflake. Intelligence and hard work should be merely at the service of our imagination. Think outside of the box. Break out. Shake the axioms."
singularity  intelligence  AI  imagination  2012  science  via:Preoccupations 
august 2012 by robertogreco
Joi Ito's Near-Perfect Explanation of the Next 100 Years - Technology Review
"One hundred years from now, the role of science and technology will be about becoming part of nature rather than trying to control it.

So much of science and technology has been about pursuing efficiency, scale and “exponential growth” at the expense of our environment and our resources. We have rewarded those who invent technologies that control our triumph over nature in some way. This is clearly not sustainable.

We must understand that we live in a complex system where everything is interrelated and interdependent and that everything we design impacts a larger system.

My dream is that 100 years from now, we will be learning from nature, integrating with nature and using science and technology to bring nature into our lives to make human beings and our artifacts not only zero impact but a positive impact to the natural system that we live in."
systemsthinking  systems  complexsystems  complexity  environment  growth  scale  sustainability  2012  technology  science  nature  future  biology  singularity  mit  joiito  from delicious
may 2012 by robertogreco
Blue Brain Project - Wikipedia
"The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level.
The aim of the project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (Switzerland), is to study the brain's architectural and functional principles. The project is headed by the Institute's director, Henry Markram. Using a Blue Gene supercomputer running Michael Hines's NEURON software, the simulation does not consist simply of an artificial neural network, but involves a biologically realistic model of neurons.[1][2][not in citation given] It is hoped that it will eventually shed light on the nature of consciousness.[citation needed]

There are a number of sub-projects, including the Cajal Blue Brain, coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories in the UK, US, and Israel."
stumbleduponwhilesearching  reverse-engineering  bluebrainproject  bluebrain  wikipedia  singularity  transhumanism  neuroscience  brain  from delicious
february 2012 by robertogreco
Open the Future: Not Giving Up
"Our technologies are not going to rob us (or relieve us) of our humanity…are part of what makes us human…are the clear expression of our uniquely human minds…both manifest & enable human culture; we co-evolve w/ them, & have done so for hundreds of thousands of years. The technologies of the future will make us neither inhuman nor posthuman, no matter how much they change our sense of place & identity…

Technology is part of who we are. What both critics & cheerleaders of technological evolution miss is something both subtle & important: our technologies will, as they always have, make us who we are—make us human. The definition of Human is no more fixed by our ancestors’ first use of tools, than it is by using a mouse to control a computer. What it means to be Human is flexible, & we change it every day by changing our technology…it is this, more than the demands for abandonment or invocations of a secular nirvana, that will give us enormous challenges in the years to come."
jamaiscascio  technology  billjoy  2011  2000  nihilism  human  humans  humanism  singularity  nicholascarr  rejectionists  sherryturkle  society  democracy  freedom  peterthiel  posthuman  posthumanism  raykurzweil  identity  evolution  change  classideas  civilization  from delicious
june 2011 by robertogreco
Where the F**k Was I? (A Book) | booktwo.org
"Where Selvadurai is interested in the space between two human cultural identities, I suppose I am interested in the space where human and artificial cultures overlap. (“Artificial” is wrong; feels—what? Prejudiced? Colonial? Anthropocentric? Carboncentric?)

There are no digital natives but the devices themselves; no digital immigrants but the devices too. They are a diaspora, tentatively reaching out into the world to understand it and themselves, and across the network to find and touch one another. This mapping is a byproduct, part of the process by which any of us, separate and indistinct so long, find a place in the world."
books  iphone  maps  mobile  data  jamesbridle  shyamselvaduri  kevinslavin  digitalnatives  digital  devices  internet  web  singularity  mapping  place  meaning  meaningmaking  digitalimmigrants  understanding  learning  exploration  networkedlearning  networks  ai  2011  from delicious
june 2011 by robertogreco
What Technology Wants, Kevin Kelly, Book - Barnes & Noble
"A refreshing view of technology as a living force in the world.

This provocative book introduces a brand-new view of technology. It suggests that technology as a whole is not a jumble of wires and metal but a living, evolving organism that has its own unconscious needs and tendencies. Kevin Kelly looks out through the eyes of this global technological system to discover "what it wants." He uses vivid examples from the past to trace technology's long course and then follows a dozen trajectories of technology into the near future to project where technology is headed.

This new theory of technology offers three practical lessons: By listening to what technology wants we can better prepare ourselves and our children for the inevitable technologies to come. By adopting the principles of pro-action and engagement, we can steer technologies into their best roles. And by aligning ourselves with the long-term imperatives of this near-living system, we can capture its full gifts."
books  toread  kevinkelly  technium  technology  society  civilization  engagement  pro-action  singularity  future 
july 2010 by robertogreco
Why Robin Sloan is the Future of Publishing (and Science Fiction) | Wet Asphalt [Gets right to the heart of (a) why I love Robin's brand of science fiction; and (b) how the content is also related to the process of its creation.]
"While Bruce Sterling & Cory Doctorow & Vernor Vinge fantasize about Singularity or augmented reality or 3D printers that can reproduce themselves (which, incidentally, all appeal heavily to juvenile power fantasies), Sloan is writing a fiction that speaks to a world in which we find ourselves not exactly emancipated by technology but simply hyper-connected by it, our identities as people redefined by the media we share, media which we embrace & deeply care about even when it leaves us bewildered, co-opted, & reduced in a thousand ways to algorithms. It isn't "hard" Science Fiction, not by a long shot, but most "hard" SF long ago stopped being able to figure out how to be relevant to most readers (as can be seen by their sales figures), with its greatest practitioners, William Gibson & Neal Stephenson, turning instead to the present day, on the one hand, & history & alternate history, on the other. Sloan, however, has found an entirely different & exciting avenue of attack."
robinsloan  sciencefiction  scifi  writing  publishing  social  socialmedia  kickstarter  via:robinsloan  future  present  quantumcomputing  corydoctorow  singularity  williamgibson  brucesterling  vernorvinge 
june 2010 by robertogreco
Education Futures
"Founded on November 20, 2004, Education Futures explores a New Paradigm in human capital development, fueled by globalization, the rise of innovative knowledge societies, and driven by exponential, accelerating change."
education  educationfutures  mayafrost  johnmoravec  academics  blogging  blogs  elearning  future  futures  classroom  curriculum  futurism  futurology  games  technology  teaching  singularity  learning  knowledge  innovation  globalization  edublogs  gaming  e-learning  edtech  web2.0  tcsnmy  unschooling  deschooling  lcproject  classrooms 
june 2010 by robertogreco
click opera - Supersize mind
"Most of all, though, I feel that Clark and Chalmers' supersizing idea -- the Extended Mind Thesis -- fits my life intuitively. I feel that both technology and media extend my mind, and mingle it with other minds. This is why I do what I do; I like that promiscuity, that cultural reproduction."
culture  devices  momus  bionics  singularity  memory  iphone  technology 
february 2009 by robertogreco
Whole Earth Catalog: Access to Tools and Ideas
"In 1968 Stewart Brand launched an innovative publication called The Whole Earth Catalog.It was groundbreaking, enlightening, and spawned a group of later publications. The collection of that work provided on this site is not complete — and probably never will be — but it is a gift to readers who loved the CATALOG and those who are discovering it for the first time."
1968  wholeearthcatalog  stewartbrand  culture  technology  activism  reference  magazines  tools  environment  green  singularity  history  sustainability 
december 2008 by robertogreco
so heres what (12 December 2002, Interconnected)
"And I wanted to howl like a wolf and grow and smash everything up, and I wanted not to be there, stuck in this Now, and what I did was curl up and lie on the sofa and not speak and not cry until mother said "Are you alright?"
mattwebb  death  life  identity  singularity  definingmoments  memory  childhood  2002 
december 2008 by robertogreco
Kevin Kelly -- The Technium - Another One for the Machine
"Last week...a software program running on borrowed supercomputers...beat a US Go professional...Go has been Turing'd [as well as chess and checkers]. Driving a car has been Turing'd. The list of human cognitive activities that normal humans believe computers can't do is very short; Make art. Create a novel, symphony, movie. Have a conversation. Laugh at a joke. Are there other things people popularly believe computers can't do?"
go  chess  checkers  turing  singularity  future  ai  computing 
august 2008 by robertogreco
Intel: Human and computer intelligence will merge in 40 years
"Most aspects of our lives, in fact, will be very different as we close in on the year 2050. Computing will be less about launching applications and more about living lives in which computers are inextricably woven into our daily activities."
everyware  future  intelligence  singularity  via:preoccupations  metaverse  ubicomp  virtualworlds  ai  computing  intel 
july 2008 by robertogreco
Edge 250 - ENGINEERS' DREAMS By George Dyson
"Data that are associated frequently by search requests are locally replicated—establishing physical proximity, in the real universe, that is manifested computationally as proximity in time. Google was more than a map. Google was becoming something else
georgedyson  sciencefiction  scifi  singularity  google  intelligence  artificial  ai  dreaming  science  programming  fiction  internet  literature 
july 2008 by robertogreco
Infoporn: Tap Into the 12-Million-Teraflop Handheld Megacomputer
"next stage in technological evolution is...the One Machine...hardware is assembled from our myriad devices, its software is written by our collective online behavior...the Machine also includes us. After all, our brains are programming & underpinning it"
computing  wired  cloud  kevinkelly  cloudcomputing  evolution  singularity  science  innovation  infodesign  collectiveintelligence  intelligence  computers  human  networks  mobile  mind  visualization  internet  future  brain  crowdsourcing  ai  data  it  learning2.0  trends  storage 
july 2008 by robertogreco
Blackbeltjones/Work: » If it walks like a singularity, and quacks like a singularity
"What kind of society...likely to get if...hitting peak oil...but it’s possible to process random junk biomass into crude oil for $100 a barrel"..."Fortune500 companies would be better off hiring science-fiction writers than MBA consultants right now."

[Now at: http://magicalnihilism.com/2008/06/17/if-it-walks-like-a-singularity-and-quacks-like-a-singularity/ ]
scifi  sciencefiction  singularity  technology  biomass  peakoil  oil  energy  future  futurism  mattjones 
june 2008 by robertogreco
Paul Bunyan vs. the Singularity - Boing Boing
"I had this wacky idea a few days ago, about writing some Paul-Bunyan kinds of stories from the point of view of a post-Singularity storyteller. I always had a thing for tall tales."
singularity  storytelling  fables  fiction  humor  technology  online  internet  culture 
june 2008 by robertogreco
Paralyzed Man 'Walks' In Second Life | Game | Life from Wired.com
"Despite his condition, a progressive muscle disease that prevents him from using a keyboard or mouse, the new technology allowed him to control a character using the same set of brain impulses normally used to move a person's arms and legs."
paralysis  singularity  sl  secondlife  brain  muscles  neuroscience 
june 2008 by robertogreco
Futurist Ray Kurzweil Pulls Out All the Stops (and Pills) to Live to Witness the Singularity
"Artificial intelligence will render biological humans obsolete, he says, but will not make human consciousness irrelevant. Kurzweil argues the singularity won't destroy us -- it will immortalize us."
raykurzweil  future  health  healthcare  singularity  technology  diet  cosmology  aging 
march 2008 by robertogreco
I, Cringely . The Pulpit . War of the Worlds | PBS
"the younger technical generations are so empowered they are impatient and ready to jettison institutions most of the rest of us tend to think of as essential, central, even immortal. They are ready to dump our schools"
education  future  schools  reform  change  learning  technology  culture  society  certification  homeschool  deschooling  unschooling  generations  e-learning  cringely  knowledge  search  gamechanging  millennials  digitalnatives  via:preoccupations  software  philosophy  sharing  pedagogy  singularity  literacy  elearning  academia  demographics  parenting  schooling  internet  futurism 
march 2008 by robertogreco
Kevin Kelly -- The Technium - Lumpers and Splitters
"In every classification scheme...those who tend to find similarities & lump smaller groups into larger...those who find differences...split larger groups into smaller...Sometimes lumpers prevail or splitters...rare moments of revolution mixer-uppers prev
classification  taxonomy  change  biology  technology  singularity  future  predictions  kevinkelly  species 
january 2008 by robertogreco
Coming Soon to a Theater Near You: The Singularity
"Futurist Ray Kurzweil is writing, directing, producing and acting in his first feature film, The Singularity Is Near: A True Story About The Future"
future  raykurzweil  singularity  scifi  film  billjoy  aubreydegrey 
november 2007 by robertogreco
Kevin Kelly -- The Technium: Dimensions of the One Machine
"100 billion neurons in human brain..Today the Machine has as 5 X transistors than you have neurons in your head...Somewhere between 2020 & 2040 the Machine should exceed 6 billion HB. That is, it will exceed the processing power of humanity."
ai  brain  computers  technology  networks  singularity  future  internet  gamechanging  web  online  technium  kevinkelly  onemachine  human  processing  hardware  software  storage  mooreslaw 
november 2007 by robertogreco
The Outsourced Brain - New York Times
"Now, you may wonder if in the process of outsourcing my thinking I am losing my individuality. Not so. My preferences are more narrow and individualistic than ever. It’s merely my autonomy that I’m losing."
gps  memory  technology  davidbrooks  online  geography  internet  web  singularity  affection 
october 2007 by robertogreco
Homunculus - Wikipedia
"The concept of a homunculus (Latin for "little man", sometimes spelled "homonculus," plural "homunculi") is often used to illustrate the functioning of a system. In the scientific sense of an unknowable prime actor, it can be viewed as an entity or agent
biology  ai  folklore  magic  philosophy  mind  logic  science  history  singularity  homunculus  thought  glvo 
september 2007 by robertogreco
Saffo: journal: All Watched over by Machines of Loving Grace -Richard Brautigan, October 1967 [reformatted here: http://www.saffo.com/journal/entry.php?id=799]
"I like to think (and it has to be!) of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace."
poetry  machines  computers  society  future  cybernetics  nature  singularity 
september 2007 by robertogreco
Our Lives, Controlled From Some Guy’s Couch - New York Times
"if you accept a pretty reasonable assumption of Dr. Bostrom’s, it is almost a mathematical certainty that we are living in someone else’s computer simulation"
computers  future  life  mind  philosophy  religion  singularity  technology  simulations  science  matrix  artificial  virtual  virtuality  theory  evolution  neuroscience  visualization  existence  perception 
august 2007 by robertogreco
Are You Living in a Computer Simulation?
"... at least one of the propositions is true: 1 human species is likely to go extinct before reaching a “posthuman” stage; 2 any posthuman civilization is unlikely to run a significant number of simulations of their evolutionary history 3 we are liv
academia  mind  artificial  simulations  computer  computing  consciousness  theory  technology  philosophy  science  matrix  evolution  neuroscience  visualization  virtuality  singularity  scifi  futurism  existence  religion  perception  debate 
august 2007 by robertogreco
Seed: Rise of Roboethics
"Grappling with the implications of an artificially intelligent culture."
ai  consciousness  brain  robots  robotics  singularity  ethics  future  law  mind  philosophy  technology  culture  japan 
august 2007 by robertogreco
Garnet Hertz - Experiments in Galvanism: Frog with Implanted Webserver
"Experiments in Galvanism is both a reference to the origins of electricity, one of the earliest new media, and, through Galvani's discovery that bioelectric forces exist within living tissue, a nod to what many theorists and practitioners consider to be
robots  animals  frogs  biology  computers  electricity  experiments  technology  singularity  science  art  interactive  biotechnology 
july 2007 by robertogreco
The blade runner generation - Times Online
"Soon, we will transfer information by thought, run faster and further without tiring...What started as a quest to help the disabled will revolutionise the lives of the able-bodied...robotics is the next giant leap for mankind"
singularity  robots  robotics  technology  future 
july 2007 by robertogreco
Nano geekery and otherwise « Speedbird
"Our tools are nothing but frozen maps of our needs, desires, lacks and weaknesses...we’d be far better off recognizing these all-too-human qualities...than lunging after some maximally improbable transcendence."
technology  society  future  nanotechnology  singularity  perspective  skepticism  criticism  enthusiasm  adamgreenfield 
march 2007 by robertogreco
A new breed: Mary Mattingly
"After the fall of post-industrial civilization, humans will transform into comfortably numb spiritually nomads (the "navigators"), they will wear their high-tech home on their backs and be mentally and materially equipped to survive in a landscape reconf
photography  photoshop  survival  technology  future  homes  fashion  gadgets  art  design  singularity 
december 2006 by robertogreco

