robertogreco + artificialintelligence   33

Inhumanism Rising - Benjamin H Bratton - YouTube
[See also:
https://trust.support/watch/inhumanism-rising

“Benjamin H. Bratton considers the role ideologies play in technical systems that operate at scales beyond human perception. Deep time, deep learning, deep ecology and deep states force a redrawing of political divisions. What previously may have been called left and right comes to reflect various positions on what it means to be, and want to be, human. Bratton is a design theorist as much as he is a philosopher. In his work remodelling our operating system, he shows how humans might be the medium, rather than the message, in planetary-scale ways of knowing.

Benjamin H. Bratton's work spans Philosophy, Art, Design and Computer Science. He is Professor of Visual Arts and Director of the Center for Design and Geopolitics at the University of California, San Diego. He is Program Director of the Strelka Institute of Media, Architecture and Design in Moscow. He is also a Professor of Digital Design at The European Graduate School and Visiting Faculty at SCI_Arc (The Southern California Institute of Architecture).

In The Stack: On Software and Sovereignty (MIT Press, 2016. 503 pages) Bratton outlines a new theory for the age of global computation and algorithmic governance. He proposes that different genres of planetary-scale computation – smart grids, cloud platforms, mobile apps, smart cities, the Internet of Things, automation – can be seen not as so many species evolving on their own, but as forming a coherent whole: an accidental megastructure that is both a computational infrastructure and a new governing architecture. The book plots an expansive interdisciplinary design brief for The Stack-to-Come.

His current research project, Theory and Design in the Age of Machine Intelligence, is on the unexpected and uncomfortable design challenges posed by A.I. in various guises: from machine vision to synthetic cognition and sensation, and from the macroeconomics of robotics to everyday geoengineering.”]
benjaminbratton  libertarianism  technology  bitcoin  blockchain  peterthiel  society  technodeterminism  organization  anarchism  anarchy  jamesbridle  2019  power  powerlessness  control  inhumanism  ecology  capitalism  fascism  interdependence  surveillance  economics  data  computation  ai  artificialintelligence  californianideology  ideology  philosophy  occult  deeplearning  deepecology  magic  deepstate  politics  agency  theory  conspiracytheories  jordanpeterson  johnmichaelgreer  anxiety  software  automation  science  psychology  meaning  meaningfulness  apophenia  posthumanism  robotics  privilege  revelation  cities  canon  tools  beatrizcolomina  markwigley  markfisher  design  transhumanism  multispecies  cyborgs  syntheticbiology  intelligence  biology  matter  machines  industry  morethanhuman  literacy  metaphysics  carlschmitt  chantalmouffe  human-centereddesign  human-centered  experience  systems  access  intuition  abstraction  expedience  ideals  users  systemsthinking  aesthetics  accessibility  singularity  primitivism  communism  duty  sovietunion  ussr  luxury  ianhacking 
17 days ago by robertogreco
Silicon Valley Thinks Everyone Feels the Same Six Emotions
"From Alexa to self-driving cars, emotion-detecting technologies are becoming ubiquitous—but they rely on out-of-date science"
emotions  ai  artificialintelligence  2018  psychology  richfirth-godbehere  faces 
january 2019 by robertogreco
Silicon Valley Is Turning Into Its Own Worst Fear
"Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism."



"Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”"



"It’d be tempting to say that fearmongering about superintelligent AI is a deliberate ploy by tech behemoths like Google and Facebook to distract us from what they themselves are doing, which is selling their users’ data to advertisers. If you doubt that’s their goal, ask yourself, why doesn’t Facebook offer a paid version that’s ad free and collects no private information? Most of the apps on your smartphone are available in premium versions that remove the ads; if those developers can manage it, why can’t Facebook? Because Facebook doesn’t want to. Its goal as a company is not to connect you to your friends, it’s to show you ads while making you believe that it’s doing you a favor because the ads are targeted.

So it would make sense if Mark Zuckerberg were issuing the loudest warnings about AI, because pointing to a monster on the horizon would be an effective red herring. But he’s not; he’s actually pretty complacent about AI. The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

Which brings us back to the importance of insight. Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior. Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations. Individuals often reevaluate their priorities after experiencing a personal wake-up call. What we need is for companies to do the same — not to abandon capitalism completely, just to rethink the way they practice it. We need them to behave better than the AIs they fear and demonstrate a capacity for insight."
ai  elonmusk  capitalism  siliconvalley  technology  artificialintelligence  tedchiang  2017  insight  intelligence  regulation  governance  government  johnperrybarlow  1996  autonomy  externalcontrols  corporations  corporatism  fredricjameson  excess  growth  monopolies  technosolutionism  ethics  economics  policy  civilization  libertarianism  aynrand  billgates  markzuckerberg 
december 2017 by robertogreco
Impakt Festival 2017 - Performance: ANAB JAIN. HQ - YouTube
[Embedded here: http://impakt.nl/festival/reports/impakt-festival-2017/impakt-festival-2017-anab-jain/ ]

["'Everything is Beautiful and Nothing Hurts': @anab_jain's expansive keynote @impaktfestival weaves threads through death, transience, uncertainty, growthism, technological determinism, precarity, imagination and truths. Thanks to @jonardern for masterful advice on 'modelling reality', and @tobias_revell and @ndkane for the invitation."
https://www.instagram.com/p/BbctTcRFlFI/ ]
anabjain  2017  superflux  death  aging  transience  time  temporary  abundance  scarcity  future  futurism  prototyping  speculativedesign  predictions  life  living  uncertainty  film  filmmaking  design  speculativefiction  experimentation  counternarratives  designfiction  futuremaking  climatechange  food  homegrowing  smarthomes  iot  internetofthings  capitalism  hope  futures  hopefulness  data  dataviz  datavisualization  visualization  williamplayfair  society  economics  wonder  williamstanleyjevons  explanation  statistics  wiiliambernstein  prosperity  growth  latecapitalism  propertyrights  jamescscott  objectivity  technocrats  democracy  probability  scale  measurement  observation  policy  ai  artificialintelligence  deeplearning  algorithms  technology  control  agency  bias  biases  neoliberalism  communism  present  past  worldview  change  ideas  reality  lucagatti  alextaylor  unknown  possibility  stability  annalowenhaupttsing  imagination  ursulaleguin  truth  storytelling  paradigmshifts  optimism  annegalloway  miyamotomusashi  annatsing 
november 2017 by robertogreco
Zeynep Tufekci: We're building a dystopia just to make people click on ads | TED Talk | TED.com
"We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response."

[See also: "Machine intelligence makes human morals more important"
https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important

"Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics.""]
zeyneptufekci  machinelearning  ai  artificialintelligence  youtube  facebook  google  amazon  ethics  computing  advertising  politics  behavior  technology  web  online  internet  susceptibility  dystopia  sociology  donaldtrump 
october 2017 by robertogreco
Ellen Ullman: Life in Code: "A Personal History of Technology" | Talks at Google - YouTube
"The last twenty years have brought us the rise of the internet, the development of artificial intelligence, the ubiquity of once unimaginably powerful computers, and the thorough transformation of our economy and society. Through it all, Ellen Ullman lived and worked inside that rising culture of technology, and in Life in Code she tells the continuing story of the changes it wrought with a unique, expert perspective.

When Ellen Ullman moved to San Francisco in the early 1970s and went on to become a computer programmer, she was joining a small, idealistic, and almost exclusively male cadre that aspired to genuinely change the world. In 1997 Ullman wrote Close to the Machine, the now classic and still definitive account of life as a coder at the birth of what would be a sweeping technological, cultural, and financial revolution.

Twenty years later, the story Ullman recounts is neither one of unbridled triumph nor a nostalgic denial of progress. It is necessarily the story of digital technology’s loss of innocence as it entered the cultural mainstream, and it is a personal reckoning with all that has changed, and so much that hasn’t. Life in Code is an essential text toward our understanding of the last twenty years—and the next twenty."
ellenullman  bias  algorithms  2017  technology  sexism  racism  age  ageism  society  exclusion  perspective  families  parenting  mothers  programming  coding  humans  humanism  google  larrypage  discrimination  self-drivingcars  machinelearning  ai  artificialintelligence  literacy  reading  howweread  humanities  education  publicschools  schools  publicgood  libertarianism  siliconvalley  generations  future  pessimism  optimism  hardfun  kevinkelly  computing 
october 2017 by robertogreco
Eyes Without a Face — Real Life
"The American painter and sculptor Ellsworth Kelly — remembered mainly for his contributions to minimalism, Color Field, and Hard-edge painting — was also a prodigious birdwatcher. “I’ve always been a colorist, I think,” he said in 2013. “I started when I was very young, being a birdwatcher, fascinated by the bird colors.” In the introduction to his monograph, published by Phaidon shortly before his death in 2015, he writes, “I remember vividly the first time I saw a Redstart, a small black bird with a few very bright red marks … I believe my early interest in nature taught me how to ‘see.’”

Vladimir Nabokov, the world’s most famous lepidopterist, classified, described, and named multiple butterfly species, reproducing their anatomy and characteristics in thousands of drawings and letters. “Few things have I known in the way of emotion or appetite, ambition or achievement, that could surpass in richness and strength the excitement of entomological exploration,” he wrote. Tom Bradley suggests that Nabokov suffered from the same “referential mania” as the afflicted son in his story “Signs and Symbols,” imagining that “everything happening around him is a veiled reference to his personality and existence” (as evidenced by Nabokov’s own “entomological erudition” and the influence of a most major input: “After reading Gogol,” he once wrote, “one’s eyes become Gogolized. One is apt to see bits of his world in the most unexpected places”).

For me, a kind of referential mania of things unnamed began with fabric swatches culled from Alibaba and fine suiting websites, with their wonderfully zoomed images that give you a sense of a particular material’s grain or flow. The sumptuous decadence of velvets and velours that suggest the gloved armatures of state power, and their botanical analogue, mosses and plant lichens. Industrial materials too: the seductive artifice of Gore-Tex and other thermo-regulating meshes, weather-palimpsested blue tarpaulins and piney green garden netting (winningly known as “shade cloth”). What began as an urge to collect colors and textures, to collect moods, quickly expanded into the delicious world of carnivorous plants and bugs — mantises exhibit a particularly pleasing biomimicry — and deep-sea aphotic creatures, which rewardingly incorporate a further dimension of movement. Walls suggest piled textiles, and plastics the murky translucence of jellyfish, and in every bag of steaming city garbage I now smell a corpse flower.

“The most pleasurable thing in the world, for me,” wrote Kelly, “is to see something and then translate how I see it.” I feel the same way, dosed with a healthy fear of cliché or redundancy. Why would you describe a new executive order as violent when you could compare it to the callous brutality of the peacock shrimp obliterating a crab, or call a dress “blue” when it could be cobalt, indigo, cerulean? Or ivory, alabaster, mayonnaise?

We might call this impulse building visual acuity, or simply learning how to see, the seeing that John Berger describes as preceding even words, and then again as completely renewed after he underwent the “minor miracle” of cataract surgery: “Your eyes begin to re-remember first times,” he wrote in the illustrated Cataract, “…details — the exact gray of the sky in a certain direction, the way a knuckle creases when a hand is relaxed, the slope of a green field on the far side of a house, such details reassume a forgotten significance.” We might also consider it as training our own visual recognition algorithms and taking note of visual or affective relationships between images: building up our datasets. For myself, I forget people’s faces with ease but never seem to forget an image I have seen on the internet.

At some level, this training is no different from Facebook’s algorithm learning based on the images we upload. Unlike Google, which relies on humans solving CAPTCHAs to help train its AI, Facebook’s automatic generation of alt tags pays dividends in speed as well as privacy. Still, the accessibility context in which the tags are deployed limits what the machines currently tell us about what they see: Facebook’s researchers are trying to “understand and mitigate the cost of algorithmic failures,” according to the aforementioned white paper, as when, for example, humans were misidentified as gorillas and blind users were led to then comment inappropriately. “To address these issues,” the paper states, “we designed our system to show only object tags with very high confidence.” “People smiling” is less ambiguous and more anodyne than happy people, or people crying.

So there is a gap between what the algorithm sees (analyzes) and says (populates an image’s alt text with). Even though it might only be authorized to tell us that a picture is taken outside, then, it’s fair to assume that computer vision is training itself to distinguish gesture, or the various colors and textures of the slope of a green field. A tag of “sky” today might be “cloudy with a threat of rain” by next year. But machine vision has the potential to do more than merely to confirm what humans see. It is learning to see something different that doesn’t reproduce human biases and uncover emotional timbres that are machinic. On Facebook’s platforms (including Instagram, Messenger, and WhatsApp) alone, over two billion images are shared every day: the monolith’s referential mania looks more like fact than delusion."
2017  rahelaima  algorithms  facebook  ai  artificialintelligence  machinelearning  tagging  machinevision  at  ellsworthkelly  color  tombradley  google  captchas  matthewplummerfernandez  julesolitski  neuralnetworks  eliezeryudkowsky  seeing 
may 2017 by robertogreco
Physiognomy’s New Clothes – Blaise Aguera y Arcas – Medium
"In 1844, a laborer from a small town in southern Italy was put on trial for stealing “five ricottas, a hard cheese, two loaves of bread […] and two kid goats”. The laborer, Giuseppe Villella, was reportedly convicted of being a brigante (bandit), at a time when brigandage — banditry and state insurrection — was seen as endemic. Villella died in prison in Pavia, northern Italy, in 1864.

Villella’s death led to the birth of modern criminology. Nearby lived a scientist and surgeon named Cesare Lombroso, who believed that brigantes were a primitive type of people, prone to crime. Examining Villella’s remains, Lombroso found “evidence” confirming his belief: a depression on the occiput of the skull reminiscent of the skulls of “savages and apes”.

Using precise measurements, Lombroso recorded further physical traits he found indicative of derangement, including an “asymmetric face”. Criminals, Lombroso wrote, were “born criminals”. He held that criminality is inherited, and carries with it inherited physical characteristics that can be measured with instruments like calipers and craniographs [1]. This belief conveniently justified his a priori assumption that southern Italians were racially inferior to northern Italians.

The practice of using people’s outer appearance to infer inner character is called physiognomy. While today it is understood to be pseudoscience, the folk belief that there are inferior “types” of people, identifiable by their facial features and body measurements, has at various times been codified into country-wide law, providing a basis to acquire land, block immigration, justify slavery, and permit genocide. When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism.

Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development. Whether intentional or not, this “laundering” of human prejudice through computer algorithms can make those biases appear to be justified objectively.

A recent case in point is Xiaolin Wu and Xi Zhang’s paper, “Automated Inference on Criminality Using Face Images”, submitted to arXiv (a popular online repository for physics and machine learning researchers) in November 2016. Wu and Zhang’s claim is that machine learning techniques can predict the likelihood that a person is a convicted criminal with nearly 90% accuracy using nothing but a driver’s license-style face photo. Although the paper was not peer-reviewed, its provocative findings generated a range of press coverage. [2]

Many of us in the research community found Wu and Zhang’s analysis deeply problematic, both ethically and scientifically. In one sense, it’s nothing new. However, the use of modern machine learning (which is both powerful and, to many, mysterious) can lend these old claims new credibility.

In an era of pervasive cameras and big data, machine-learned physiognomy can also be applied at unprecedented scale. Given society’s increasing reliance on machine learning for the automation of routine cognitive tasks, it is urgent that developers, critics, and users of artificial intelligence understand both the limits of the technology and the history of physiognomy, a set of practices and beliefs now being dressed in modern clothes. Hence, we are writing both in depth and for a wide audience: not only for researchers, engineers, journalists, and policymakers, but for anyone concerned about making sure AI technologies are a force for good.

We will begin by reviewing how the underlying machine learning technology works, then turn to a discussion of how machine learning can perpetuate human biases."



"Research shows that the photographer’s preconceptions and the context in which the photo is taken are as important as the faces themselves; different images of the same person can lead to widely different impressions. It is relatively easy to find a pair of images of two individuals matched with respect to age, race, and gender, such that one of them looks more trustworthy or more attractive, while in a different pair of images of the same people the other looks more trustworthy or more attractive."



"On a scientific level, machine learning can give us an unprecedented window into nature and human behavior, allowing us to introspect and systematically analyze patterns that used to be in the domain of intuition or folk wisdom. Seen through this lens, Wu and Zhang’s result is consistent with and extends a body of research that reveals some uncomfortable truths about how we tend to judge people.

On a practical level, machine learning technologies will increasingly become a part of all of our lives, and like many powerful tools they can and often will be used for good — including to make judgments based on data faster and fairer.

Machine learning can also be misused, often unintentionally. Such misuse tends to arise from an overly narrow focus on the technical problem, hence:

• Lack of insight into sources of bias in the training data;
• Lack of a careful review of existing research in the area, especially outside the field of machine learning;
• Not considering the various causal relationships that can produce a measured correlation;
• Not thinking through how the machine learning system might actually be used, and what societal effects that might have in practice.

Wu and Zhang’s paper illustrates all of the above traps. This is especially unfortunate given that the correlation they measure — assuming that it remains significant under more rigorous treatment — may actually be an important addition to the already significant body of research revealing pervasive bias in criminal judgment. Deep learning based on superficial features is decidedly not a tool that should be deployed to “accelerate” criminal justice; attempts to do so, like Faception’s, will instead perpetuate injustice."
blaiseaguerayarcas  physiognomy  2017  facerecognition  ai  artificialintelligence  machinelearning  racism  bias  xiaolinwu  xizhang  race  profiling  racialprofiling  giuseppevillella  cesarelombroso  pseudoscience  photography  chrononet  deeplearning  alexkrizhevsky  ilyasutskever  geoffreyhinton  gillevi  talhassner  alexnet  mugshots  objectivity  giambattistadellaporta  francisgalton  samuelnorton  josiahnott  georgegliddon  charlesdarwin  johnhoward  thomasclarkson  williamshakespeare  isaacnewton  ernsthaeckel  scientificracism  jamesweidmann  faception  criminality  lawenforcement  faces  dorothealange  mikeburton  trust  trustworthiness  stephenjaygould  philippafawcett  roberthughes  testosterone  gender  criminalclass  aggression  risk  riskassessment  judgement  brianholtz  shermanalexie  feedbackloops  identity  disability  ableism  disabilities 
may 2017 by robertogreco
Learning Gardens
[See also: https://www.are.na/blog/case%20study/2016/11/16/learning-gardens.html
https://www.are.na/edouard-u/learning-gardens ]

"Learning Gardens is a meta-organization to support grassroots non-institutional learning, exploration, and community-building.

At its simplest, this means we want to help you start and run your own learning group.

At its best, we hope you and your friends achieve nirvana."



"Our Mission

It's difficult to carve out time for focused study. We support learning groups in any discipline to overcome this inertia and build their own lessons, community, and learning styles.
If we succeed in our mission, participating groups should feel empowered and free of institutional shackles.

Community-based learning — free, with friends, using public resources — is simply a more sustainable and distributed form of learning for the 21st century. Peer-oriented and interest-driven study often fosters the best learning anyway.

Learning Gardens is an internet-native organization. As such, we seek to embrace transparency, decentralization, and multiple access points."



"Joining

Joining us largely means joining our slack. Say hello!

If you own or participate in your own learning group, we additionally encourage you to message us for further information.

Organization

We try to use tools that are free, open, and relatively transparent.

Slack to communicate and chat.
Github and Google Drive to build public learning resources.

You're welcome to join and assemble with us on Are.na, which we use to find and collect research materials. In a way, Learning Gardens was born from this network.

We also use Notion and Dropbox internally."



"Our lovely learning groups:

Mondays [http://mondays.nyc/ ]
Mondays is a casual discussion group for creative thinkers from all disciplines. Its simple aim is to encourage knowledge-sharing and self-learning by providing a space for the commingling of ideas, for reflective conversations that might otherwise not be had.

Pixel Lab [http://morgane.com/pixel-lab ]
A community of indie game devs and weird web artists — we're here to learn from each other and provide feedback and support for our digital side projects.

Emulating Intelligence [https://github.com/learning-gardens/_emulating_intelligence ]
EI is a learning group organized around the design, implementation, and implications of artificial intelligence as it is increasingly deployed throughout our lives. We'll weave together the theoretical, the practical, and the social aspects of the field and link it up to current events, anxieties, and discussions. To tie it all together, we'll experiment with tools for integrating AI into our own processes and practices.

Cybernetics Club [https://github.com/learning-gardens/cybernetics-club ]
Cybernetics Club is a learning group organized around the legacy of cybernetics and all the fields it has touched. What is the relevance of cybernetics today? Can it provide us the tools to make sense of the world today? Better yet, can it give us a direction for improving things?

Pedagogy Play Lab [http://ryancan.build/pedagogy-play-lab/ ]
A reading club about play, pedagogy, and learning meeting biweekly starting soon in Williamsburg, Brooklyn.

Millennial Focus Group [http://millennialfocusgroup.info/ ]
monthly irl discussion. 4 reading, collaborating, presenting, critiquing, and hanging vaguely identity-oriented, creatively-inclined, internet-aware, structurally-experimental networked thinking <<<>>> intersectional thinking

Utopia School [http://www.utopiaschool.org/ ]
Utopia School is an ongoing project that shares information about both failed and successful utopian projects and work towards new ones. For us, utopias are those spaces and initiatives that re-imagine the world in some crucial way. The school engages and connects people through urgent conversations, with the goal of exploring, archiving and distributing collective knowledge throughout this multi-city project.

A Pattern Language [https://github.com/learning-gardens/pattern_language ]
Biweekly reading group on A Pattern Language, attempting to reinterpret the book for the current-day."

[See also: "Getting Started with Learning Gardens: An introduction of sorts"
http://learning-gardens.co/2016/08/13/getting_started.html

"Hi, welcome to this place.

If you’re reading this, you’re probably wondering where to start! Try sifting through some links on our site, especially our resources, Github Organization, and Google Drive.

If you’re tired of reading docs and this website in general, we’d highly recommend you join our lively community in real time chat. We’re using Slack for this. It’s great.

When you enter the chat, you’ll be dumped in a channel called #_landing_pad. This channel is muted by default so that any channels you join feel fully voluntary.

We’ve recently started a system where we append any ”Learning Gardens”-related channels with an underscore (_), so it’s easy to tell which channels are meta (e.g. #_help), and which are related to actual learning groups (e.g. #cybernetics).

Everything is up for revision." ]
education  learninggardens  learningnetworks  networks  slack  aldgdp  artschools  learning  howwelearn  sfsh  self-directed  self-directedlearning  empowerment  unschooling  deschooling  decentralization  transparency  accessibility  bookclubs  readinggroups  utopiaschool  apatternlanguage  christopheralexander  pedagogy  pedagogyplaylab  cyberneticsclub  emulatingintelligence  pixellab  games  gaming  videogames  mondays  creativity  multidisciplinary  crossdisciplinary  interdisciplinary  ai  artificialintelligence  distributed  online  web  socialmedia  édouardurcades  artschool 
december 2016 by robertogreco
Werner Herzog comments on I am Werner Herzog, the filmmaker. AMA.
"Q: You’ve covered everything from the prehistoric Chauvet Cave to the impending overthrow of not-so-far-off futuristic artificial intelligence. What about humankind's history/capability terrifies you the most?

A: It's a difficult question, because it encompasses almost all of human history so far. What is interesting about this paleolithic cave is that we see with our own eyes the origins, the beginning of the modern human soul. These people were like us, and what their concept of art was, we do not really comprehend fully. We can only guess.

And of course now today, we are into almost futuristic moments where we create artificial intelligence and we may not even need other human beings anymore as companions. We can have fluffy robots, and we can have assistants who brew the coffee for us and serve us to the bed, and all these things. So we have to be very careful and should understand what basic things, what makes us human, what essentially makes us into what we are. And once we understand that, we can make our educated choices, and we can use our inner filters, our conceptual filters. How far would we use artificial intelligence? How far would we trust, for example into the logic of a self-driving car? Will it crash or not if we don't look after the steering wheel ourselves?

So, we should make a clear choice, what we would like to preserve as human beings, and for that, for these kinds of conceptual answers, I always advise to read books. Read read read read read! And I say that not only to filmmakers, I say that to everyone. People do not read enough, and that's how you create critical thinking, conceptual thinking. You create a way of how to shape your life. Although, it seems to elude us into a pseudo-life, into a synthetic life out there in cyberspace, out there in social media. So it's good that we are using Facebook, but use it wisely."
via:savasavasava  wernerherzog  2016  reading  ai  artificialintelligence  humanity  humans  humanism  criticalthinking  conceptualthinking  thinking  howwethink  howwelearn  socialmedia  cyberspace  redditama 
july 2016 by robertogreco
Master of Go Board Game Is Walloped by Google Computer Program - The New York Times
"Mr. Hassabis said AlphaGo did not try to consider all the possible moves in a match, as a traditional artificial intelligence machine like Deep Blue does. Rather, it narrows its options based on what it has learned from millions of matches played against itself and in 100,000 Go games available online.

Mr. Hassabis said that a central advantage of AlphaGo was that “it will never get tired, and it will not get intimidated either.”

Kim Sung-ryong, a South Korean Go master who provided commentary during Wednesday’s match, said that AlphaGo made a clear mistake early on, but that unlike most human players, it did not lose its “cool.”

“It didn’t play Go as a human does,” he said. “It was a Go match with human emotional elements carved out.”

Mr. Lee said he knew he had lost the match after AlphaGo made a move so unexpected and unconventional that he thought “it was impossible to make such a move.”"
via:tealtan  alphago  ai  artificialintelligence  go  2016  games  deepmind  leesedol 
march 2016 by robertogreco
Chats with Bots | BBH Labs
"AI bots are everywhere. Or at least, chatter about chatbots is everywhere. The slick new Quartz app wants to msg you the news. Forbes launched their own official Telegram newsbot yesterday. Will 2016 be the year of the bot, the year we start chatting and stop worrying about whether the person(a) at the other end of the chat is human or not?

At Labs we like to get stuck in and get our hands dirty. Metaphorically. So we fired up Telegram, added some bots to our contact list, and started chatting. And here’s the resulting chat, screengrabbed for your edification."
bots  api  telegram  quartz  interface  ai  artificialintelligence  2016  jeremyettinghausen 
march 2016 by robertogreco
The Future of Chat Isn’t AI — Medium
"So if not AI, then what? What will bots let you do that was never possible before?

We think the answer is actually quite simple: For the first time ever, bots will let you instantly interact with the world around you. This is best illustrated through something that I experienced recently.

During last year’s baseball playoffs, I went to a Blue Jays game at the Rogers Centre. I was running late, so I went straight to my seat to catch as much of the game as I could. But when I got there, I realized I was the only one of my friends without a beer. So, with no beer guy in sight, I turned back to go get a beer. After 10 minutes of waiting in line, I finally got back to my seat. I had missed two home runs.

But good news! In the future, this will never have to happen again. The stadium is developing an app that will let you order from your seat. So next time, I won’t have to miss a beat — I’ll just order through the app. It will be great. Or will it?

Imagine I had sat down and found that there was a sticker on the back of the chair in front of me that said, “Want a beer? Download our app!” Sounds great! I’d unlock my phone, go to the App Store, search for the app, put in my password, wait for it to download, create an account, enter my credit card details, figure out where in the app I actually order from, figure out how to input how many beers I want and of what type, enter my seat number, and then finally my beer would be on its way.

Actually, I would have been better off just waiting in line.

And yet there are so many of these types of apps: apps to order train tickets at stations; apps to order food at restaurants; and apps to order movie tickets at theatres. Everyone wants you to just “download our app!” And yet, after spending millions of dollars developing them, how many people actually use them? My guess: not a lot.

But imagine the stadium one more time, except now instead of spending millions to develop an app, the stadium had spent thousands to develop a simple, text-based bot. I’d sit down and see a similar sticker: “Want a beer? Chat with us!” with a chat code beside it. I’d unlock my phone, open my chat app, and scan the code. Instantly, I’d be chatting with the stadium bot, and it’d ask me how many beers I wanted: “1, 2, 3, or 4.” It’d ask me what type: “Bud, Coors, or Corona.” And then it’d ask me how I wanted to pay: Credit card already on file (**** 0345), or a new card.

Chat app > Scan > 2 > Bud > **** 0345. Done."
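[The ordering flow Livingston describes is essentially a tiny conversational state machine: three fixed prompts, three constrained answers. A minimal sketch in Python — the class name, menu options, and prompt wording are all illustrative assumptions, not a real Kik or Telegram API:

```python
class BeerBot:
    """Tiny state machine mimicking the stadium chat-ordering flow:
    quantity -> brand -> payment -> done."""

    QUANTITIES = ["1", "2", "3", "4"]
    BRANDS = ["Bud", "Coors", "Corona"]

    def __init__(self):
        self.state = "quantity"
        self.order = {}

    def prompt(self):
        # What the bot says next, given where we are in the flow.
        if self.state == "quantity":
            return "How many beers? (1, 2, 3, or 4)"
        if self.state == "brand":
            return "What type? (Bud, Coors, or Corona)"
        if self.state == "payment":
            return "Pay with card on file (**** 0345) or a new card?"
        return "Order placed: {qty} x {brand}, paid with {pay}.".format(**self.order)

    def handle(self, text):
        # Advance the state machine only on a valid answer; otherwise re-prompt.
        text = text.strip()
        if self.state == "quantity" and text in self.QUANTITIES:
            self.order["qty"] = text
            self.state = "brand"
        elif self.state == "brand" and text in self.BRANDS:
            self.order["brand"] = text
            self.state = "payment"
        elif self.state == "payment":
            self.order["pay"] = text
            self.state = "done"
        return self.prompt()

bot = BeerBot()
bot.handle("2")
bot.handle("Bud")
result = bot.handle("card on file")
print(result)
```

The point of the sketch is how little is needed: no app install, no account creation, just three constrained exchanges over whatever chat transport the platform already provides.]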



"To be clear, this is just the beginning of the bots era, and there are many developments to come. The leaders in this space — Kik, WeChat, Line, Facebook, Slack, and Telegram — all have their own ideas about how this is all going to play out. But one thing I think we can all agree on is that chat is going to be the world’s next great operating system: a Bot OS (or, as we like to call it, BOS).

These developments open up new and giant opportunities for consumers, developers, and businesses. Chat apps will come to be thought of as the new browsers; bots will be the new websites. This is the beginning of a new internet."
chat  ai  artificialintelligence  2016  tedlivingston  kik  slack  telegram  facebook  ui  ux  interface  api  wechat  bots  qrcodes 
march 2016 by robertogreco
How to Think About Bots | Motherboard
"Who is responsible for the output and actions of bots, both ethically and legally? How does semi-autonomy create ethical constraints that limit the maker of a bot?"



"Given the public and social role they increasingly play—and whatever responsibility their creators assume—the actions of bots, whether implicitly or explicitly, have political outcomes. The last several years have seen a rise in bots being used to spread political propaganda, stymie activism and bolster social media follower lists of public figures. Activists can use bots to mobilize people around social and political causes. People working for a variety of groups and causes use bots to inject automated discourse on platforms like Twitter and Reddit. Over the last few years both government employees and opposition activists in Mexico have used bots in attempts to sway public opinion. Where do we draw the line between propaganda, public relations and smart communication?

Platforms, governments and citizens must step in and consider the purpose, and future, of bot technology before manipulative anonymity becomes a hallmark of the social bot."
bots  robots  ethics  ai  artificialintelligence  twitter  bot-ifesto  programming  coding  automation  samuelwoolley  danahboyd  meredithbroussard  madeleineelish  lainnafader  timhwang  alexislloyd  giladlotan  luisdanielpalacios  allisonparrish  giladrosner  saiphsavage  smanthashorey  socialbots  oliviataters  politics  policy 
march 2016 by robertogreco
Education Outrage: Now it is Facebook's turn to be stupid about AI
"What could Facebook be thinking here? We read stories to our children for many reasons. These are read because they have been around a long time, which is not a great reason. The reason to read frightening stories to children has never been clear to me. The only value I saw in doing this sort of thing as a parent was to begin a discussion with the child about the story which might lead somewhere interesting. Now my particular children had been living in the real world at the time so they had some way to relate to the story because of their own fears, or because of experiences they might have had.

Facebook’s AI will be able to relate to these stories by matching words it has seen before. Oh good. It will not learn anything from the stories because it cannot learn anything from any story. Learning from stories means mapping your experiences (your own stories) to the new story and finding some commonalities and some differences. It also entails discussing those commonalties and differences with someone who is willing to have that conversation with you. In order to do that you have to be able to construct sentences on your own and be able to interpret your own experiences through conversations with your friends and family.

Facebook’s “AI” will not be doing this because it can’t. It has had no experiences. Apparently its experience is loading lots of text and counting patterns. Too bad there isn’t a children’s story about that.

Facebook hasn’t a clue about AI, but it will continue to spend money and accomplish nothing until AI is declared to have failed again."
rogerschank  2016  facebook  ai  artificialintelligence  algorithms  via:audreywatters  context  experience  understanding  stories  storytelling 
february 2016 by robertogreco
From AI to IA: How AI and architecture created interactivity - YouTube
"The architecture of digital systems isn't just a metaphor. It developed out of a 50-year collaborative relationship between architects and designers, on one side, and technologists in AI, cybernetics, and computer science, on the other. In this talk at the O'Reilly Design Conference in 2016, Molly Steenson traces that history of interaction, tying it to contemporary lessons aimed at designing for a complex world."
mollysteenson  2016  ai  artificialintelligence  douglasenglebart  symbiosis  augmentation  christopheralexander  nicholasnegroponte  richardsaulwurman  architecture  physical  digital  mitmedialab  history  mitarchitecturemachinegroup  technology  compsci  computerscience  cybernetics  interaction  structures  computing  design  complexity  frederickbrooks  computers  interactivity  activity  metaphor  marvinminsky  heuristics  problemsolving  kent  wardcunningham  gangoffour  objectorientedprogramming  apatternlanguage  wikis  agilesoftwaredevelopment  software  patterns  users  digitalspace  interactiondesign  terrywinograd  xeroxparc  petermccolough  medialab 
february 2016 by robertogreco
Why Do I Have to Call This App ‘Julie’? - The New York Times
"Technologies speak with recorded feminine voices because women “weren’t normally there to be heard,” Helen Hester, a media studies lecturer at Middlesex University, told me. A woman’s voice stood out. For example, an automated recording of a woman’s voice used in cockpit navigation becomes a beacon, a voice in stark contrast with that of everyone else, when all the pilots on board are men.

Ms. Hester lives in London, where the spectral sound of robotic women is piping from nearly every corner. Enter the Underground and you hear a disembodied woman announcing “the next station is Mornington Crescent” and the train’s signature canned message, “please mind the gap between the train and the platform.”

A similar voice — emotionless, timeless, with an accent difficult to place — emits from clocks and traffic lights, and inside elevators and supermarkets. The “coldness, the forthrightness of the voice” is what Ms. Hester finds striking. What human speaks with such emotionless authority? And, as Ms. Hester points out: “It’s not real authority. There’s a maternal edge to all of it. It is personal guidance rather than definite directions.”

And, she says, these voices can even play into people’s expectations of male authority because they aren’t actual women. People hear a woman’s voice, realize it is robotic, and “imagine a male programmer” did the actual work.

No one seems to market tech products in the image of the most famous virtual assistant in film history. Hal from “2001: A Space Odyssey” was so brilliant and manly that it attempted to kill off the crew of the spacecraft it was built to manage. Instead, people build what I call “Stepford apps.” These are the Internet’s answer to those old sci-fi robots in dresses mopping floors with manufactured enthusiasm."
ai  artificialintelligence  gender  joannemcneil  voices  siri  cortana  alexa  2015  sexism  apple  amazon  microsoft 
december 2015 by robertogreco
I spent a weekend at Google talking with nerds about charity. I came away … worried. - Vox
"To be fair, the AI folks weren't the only game in town. Another group emphasized "meta-charity," or giving to and working for effective altruist groups. The idea is that more good can be done if effective altruists try to expand the movement and get more people on board than if they focus on first-order projects like fighting poverty.

This is obviously true to an extent. There's a reason that charities buy ads. But ultimately you have to stop being meta. As Jeff Kaufman — a developer in Cambridge who's famous among effective altruists for, along with his wife Julia Wise, donating half their household's income to effective charities — argued in a talk about why global poverty should be a major focus, if you take meta-charity too far, you get a movement that's really good at expanding itself but not necessarily good at actually helping people.

And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it's hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.

The self-congratulatory tone of the event didn't help matters either. I physically recoiled during the introductory session when Kerry Vaughan, one of the event's organizers, declared, "I really do believe that effective altruism could be the last social movement we ever need." In the annals of sentences that could only be said with a straight face by white men, that one might take the cake.

Effective altruism is a useful framework for thinking through how to do good through one's career, or through political advocacy, or through charitable giving. It is not a replacement for movements through which marginalized peoples seek their own liberation. If EA is to have any hope of getting more buy-in from women and people of color, it has to at least acknowledge that."
charity  philanthropy  ethics  2015  altruism  dylanmatthews  google  siliconvalley  ai  artificialintelligence 
november 2015 by robertogreco
Facebook, communication, and personhood - Text Patterns - The New Atlantis
"William Davies tells us about Mark Zuckerberg's hope to create an “ultimate communication technology,” and explains how Zuckerberg's hopes arise from a deep dissatisfaction with and mistrust of the ways humans have always communicated with one another. Nick Carr follows up with a thoughtful supplement:
If language is bound up in living, if it is an expression of both sense and sensibility, then computers, being non-living, having no sensibility, will have a very difficult time mastering “natural-language processing” beyond a certain rudimentary level. The best solution, if you have a need to get computers to “understand” human communication, may be to avoid the problem altogether. Instead of figuring out how to get computers to understand natural language, you get people to speak artificial language, the language of computers. A good way to start is to encourage people to express themselves not through messy assemblages of fuzzily defined words but through neat, formal symbols — emoticons or emoji, for instance. When we speak with emoji, we’re speaking a language that machines can understand.

People like Mark Zuckerberg have always been uncomfortable with natural language. Now, they can do something about it.

I think we should be very concerned about this move by Facebook. In these contexts, I often think of a shrewd and troubling comment by Jaron Lanier: “The Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you?” In this sense, the degradation of personhood is one of Facebook's explicit goals, and Facebook will increasingly require its users to cooperate in lowering their standards of intelligence and personhood."
williamdavies  markzuckerberg  communication  technology  2015  facebook  alanjacobs  jaronlanier  turingtest  ai  artificialintelligence  personhood  dehumanization  machines 
september 2015 by robertogreco
Teaching Machines and Turing Machines: The History of the Future of Labor and Learning
"In all things, all tasks, all jobs, women are expected to perform affective labor – caring, listening, smiling, reassuring, comforting, supporting. This work is not valued; often it is unpaid. But affective labor has become a core part of the teaching profession – even though it is, no doubt, “inefficient.” It is what we expect – stereotypically, perhaps – teachers to do. (We can debate, I think, if it’s what we reward professors for doing. We can interrogate too whether all students receive care and support; some get “no excuses,” depending on race and class.)

What happens to affective teaching labor when it runs up against robots, against automation? Even the tasks that education technology purports to now be able to automate – teaching, testing, grading – are shot through with emotion when done by humans, or at least when done by a person who’s supposed to have a caring, supportive relationship with their students. Grading essays isn’t necessarily burdensome because it’s menial, for example; grading essays is burdensome because it is affective labor; it is emotionally and intellectually exhausting.

This is part of our conundrum: teaching labor is affective not simply intellectual. Affective labor is not valued. Intellectual labor is valued in research. At both the K12 and college level, teaching of content is often seen as menial, routine, and as such replaceable by machine. Intelligent machines will soon handle the task of cultivating human intellect, or so we’re told.

Of course, we should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to “intelligent tutoring systems,” “adaptive learning systems,” or whatever the latest description may be? What sorts of signals are we sending students?

And what sorts of signals are the machines gathering in turn? What are they learning to do?
Often, of course, we do not know the answer to those last two questions, as the code and the algorithms in education technologies (most technologies, truth be told) are hidden from us. We are becoming, as law professor Frank Pasquale argues, a “black box society.” And the irony is hardly lost on me that one of the promises of massive collection of student data under the guise of education technology and learning analytics is to crack open the “black box” of the human brain.

We still know so little about how the brain works, and yet, we’ve adopted a number of metaphors from our understanding of that organ to explain how computers operate: memory, language, intelligence. Of course, our notion of intelligence – its measurability – has its own history, one wrapped up in eugenics and, of course, testing (and teaching) machines. Machines now both frame and are framed by this question of intelligence, with little reflection on the intellectual and ideological baggage that we carry forward and hard-code into them."



"We’re told by some automation proponents that instead of a future of work, we will find ourselves with a future of leisure. Once the robots replace us, we will have immense personal freedom, so they say – the freedom to pursue “unproductive” tasks, the freedom to do nothing at all even, except I imagine, to continue to buy things.
On one hand that means that we must address questions of unemployment. What will we do without work? How will we make ends meet? How will this affect identity, intellectual development?

Yet despite predictions about the end of work, we are all working more. As games theorist Ian Bogost and others have observed, we seem to be in a period of hyper-employment, where we find ourselves not only working numerous jobs, but working all the time on and for technology platforms. There is no escaping email, no escaping social media. Professionally, personally – no matter what you say in your Twitter bio that your Tweets do not represent the opinions of your employer – we are always working. Computers and AI do not (yet) mark the end of work. Indeed, they may mark the opposite: we are overworked by and for machines (for, to be clear, their corporate owners).

Often, we volunteer to do this work. We are not paid for our status updates on Twitter. We are not compensated for our check-in’s in Foursquare. We don’t get kick-backs for leaving a review on Yelp. We don’t get royalties from our photos on Flickr.

We ask our students to do this volunteer labor too. They are not compensated for the data and content that they generate that is used in turn to feed the algorithms that run TurnItIn, Blackboard, Knewton, Pearson, Google, and the like. Free labor fuels our technologies: Forum moderation on Reddit – done by volunteers. Translation of the courses on Coursera and of the videos on Khan Academy – done by volunteers. The content on pretty much every “Web 2.0” platform – done by volunteers.

We are working all the time; we are working for free.

It’s being framed, as of late, as the “gig economy,” the “freelance economy,” the “sharing economy” – but mostly it’s the service economy that now comes with an app and that’s creeping into our personal not just professional lives thanks to billions of dollars in venture capital. Work is still precarious. It is low-prestige. It remains unpaid or underpaid. It is short-term. It is feminized.

We all do affective labor now, cultivating and caring for our networks. We respond to the machines, the latest version of ELIZA, typing and chatting away hoping that someone or something responds, that someone or something cares. It’s a performance of care, disguising what is the extraction of our personal data."



"Personalization. Automation. Management. The algorithms will be crafted, based on our data, ostensibly to suit us individually, more likely to suit power structures in turn that are increasingly opaque.

Programmatically, the world’s interfaces will be crafted for each of us, individually, alone. As such, I fear, we will lose our capacity to experience collectivity and resist together. I do not know what the future of unions looks like – pretty grim, I fear; but I do know that we must enhance collective action in order to resist a future of technological exploitation, dehumanization, and economic precarity. We must fight at the level of infrastructure – political infrastructure, social infrastructure, and yes technical infrastructure.

It isn’t simply that we need to resist “robots taking our jobs,” but we need to challenge the ideologies, the systems that loath collectivity, care, and creativity, and that champion some sort of Randian individual. And I think the three strands at this event – networks, identity, and praxis – can and should be leveraged to precisely those ends.

A future of teaching humans not teaching machines depends on how we respond, how we design a critical ethos for ed-tech, one that recognizes, for example, the very gendered questions at the heart of the Turing Machine’s imagined capabilities, a parlor game that tricks us into believing that machines can actually love, learn, or care."
2015  audreywatters  education  technology  academia  labor  work  emotionallabor  affect  edtech  history  highered  highereducation  teaching  schools  automation  bfskinner  behaviorism  sexism  howweteach  alanturing  turingtest  frankpasquale  eliza  ai  artificialintelligence  robots  sharingeconomy  power  control  economics  exploitation  edwardthorndike  thomasedison  bobdylan  socialmedia  ianbogost  unemployment  employment  freelancing  gigeconomy  serviceeconomy  caring  care  love  loving  learning  praxis  identity  networks  privacy  algorithms  freedom  danagoldstein  adjuncts  unions  herbertsimon  kevinkelly  arthurcclarke  sebastianthrun  ellenlagemann  sidneypressey  matthewyglesias  karelčapek  productivity  efficiency  bots  chatbots  sherryturkle 
august 2015 by robertogreco
Matt Jones: Jumping to the End -- Practical Design Fiction on Vimeo
[Matt says (http://magicalnihilism.com/2015/03/06/my-ixd15-conference-talk-jumping-to-the-end/ ):

"This talk summarizes a lot of the approaches that we used in the studio at BERG, and some of those that have carried on in my work with the gang at Google Creative Lab in NYC.

Unfortunately, I can’t show a lot of that work in public, so many of the examples are from BERG days…

Many thanks to Catherine Nygaard and Ben Fullerton for inviting me (and especially to Catherine for putting up with me clowning around behind here while she was introducing me…)"]

[At ~35:00:
“[(Copy)Writers] are the fastest designers in the world. They are amazing… They are just amazing at that kind of boiling down of incredibly abstract concepts into tiny packages of cognition, language. Working with writers has been my favorite thing of the last two years.”
mattjones  berg  berglondon  google  googlecreativelab  interactiondesign  scifi  sciencefiction  designfiction  futurism  speculativefiction  julianbleecker  howwework  1970s  comics  marvel  marvelcomics  2001aspaceodyssey  fiction  speculation  technology  history  umbertoeco  design  wernerherzog  dansaffer  storytelling  stories  microinteractions  signaturemoments  worldbuilding  stanleykubrick  details  grain  grammars  computervision  ai  artificialintelligence  ui  personofinterest  culture  popculture  surveillance  networks  productdesign  canon  communication  johnthackara  macroscopes  howethink  thinking  context  patternsensing  systemsthinking  systems  mattrolandson  objects  buckminsterfuller  normanfoster  brianarthur  advertising  experiencedesign  ux  copywriting  writing  film  filmmaking  prototyping  posters  video  howwewrite  cognition  language  ara  openstudioproject  transdisciplinary  crossdisciplinary  interdisciplinary  sketching  time  change  seams  seamlessness 
march 2015 by robertogreco
The real robot economy and the bus ticket inspector | Science | The Guardian
"Hidden in these everyday, mundane interactions are different moral or ethical questions about the future of AI: if a job is affected but not taken over by a robot, how and when does the new system interact with a consumer? Is it ok to turn human social intelligence – managing a difficult customer – into a commodity? Is it ok that a decision lies with a handheld device, while the human is just a mouthpiece?

What does this mean for the second wave robot economy?

Mike Osborne and Carl Benedikt Frey from Oxford University have studied the risk of automation in the US economy, concluding that 47 per cent of jobs in the current workforce are at high risk of computerisation. They come to this conclusion by looking for jobs that can’t be automated; the 47 per cent is what’s left over. In their model, there are three bottlenecks that prevent automation:
…occupations that involve complex perception and manipulation tasks, creative intelligence tasks, and social intelligence tasks are unlikely to be substituted by computer capital over the next decade or two.


These are bottlenecks which technological advances will find it hard to overcome. The authors predict that the next decade will see steps forward in the algorithms that automate cognitive tasks, including cutting edge techniques like machine learning, artificial intelligence and mobile robotics.

This second wave of the robot economy follows a first wave that automated manufacturing and repetitive manual tasks. So many of the desk jobs that our parents and grandparents would have done, like typing and manual data entry, are now becoming obsolete. And according to Osborne and Frey, some of the jobs that are most at risk of automation, were formerly present in droves at many city offices. This includes the likes of accountants, legal clerks and book keepers - dying breeds, and casualties of the robot economy. But Osborne and Frey think that tasks like navigating complex environments, creative thinking and social influence and persuasion will not be automated as part of these advances.

Some of my colleagues are interested in the second kind of task – creativity. They are working with Osborne and Frey to understand how resistant the creative economy is to automation: how many jobs in the creative economy involve truly creative tasks (if that’s not tautologous). Preliminary results look pretty good for creative occupations. 87 per cent are at low or no risk of automation.

Maybe service occupations where persuasion and influence are important will be saved too. The bus ticket inspector requires exactly the kind of social intelligence that Osborne and Frey argue a machine cannot replicate. But this doesn’t take into account the subtleties I witnessed on the top deck of the 76. It may not be job titles or wages that are most affected by the day-to-day of a robot economy. Automation of parts of a job, or of the context that someone works in, means that jobs not taken by machines are fundamentally changed in other ways. We may become slaves to hardwired decision-making systems.

To avoid this, we need to design human-machine jobs with the humans who will be part of them. I met Carla Brodley, a computer scientist from Northeastern University in the US, a few months ago. She applies advanced computing techniques to medical imaging, diagnosis and neuroscience. Brodley has publicly argued that the most interesting problems for machine learning come from real world uses of these computational techniques. She says the tough bit of her job is knowing when and how to bring the expert - doctor, radiologist, scientist - into the design of the algorithm. But she is adamant that the success of her work depends entirely on this kind of user-led computational design. We need to find a Brodley for the bus ticket inspector."

[via: "'The real robot economy and the bus ticket inspector' @pesska on why we need user-led computational design."
https://twitter.com/Superflux/status/567745423163789312 ]
automation  robots  2015  design  jessicabland  computationaldesign  technology  london  mikeosborne  carlbenediktfrey  computerization  economics  services  socialintelligence  ai  artificialintelligence 
february 2015 by robertogreco
Eyeo 2014 - Claire Evans on Vimeo
"Science Fiction & The Synthesized Sound – Turn on the radio in the year 3000, and what will you hear? When we make first contact with an alien race, will we—as in "Close Encounters of the Third Kind"—communicate through melody? If the future has a sound, what can it possibly be? If science fiction has so far failed to produce convincing future music, it won’t be for lack of trying. It’s just that the problem of future-proofing music is complex, likely impossible. The music of 1,000 years from now will not be composed by, or even for, human ears. It may be strident, seemingly random, mathematical; like the “Musica Universalis” of the ancients, it might not be audible at all. It might be the symphony of pure data. It used to take a needle, a laser, or a magnet to reproduce sound. Now all it takes is code. The age of posthuman art is near; music, like mathematics, may be a universal language—but if we’re too proud to learn its new dialects, we’ll find ourselves silent and friendless in a foreign future."
claireevans  sciencefiction  scifi  music  future  sound  audio  communication  aesthetics  robertscholes  williamgibson  code  composition  2014  johncage  film  history  ai  artificialintelligence  machines  universality  appreciation  language  turingtest 
february 2015 by robertogreco
Valley Of The Meatpuppets on Huffduffer
"The Valley of the Meatpuppets is an ethereal space where people, agents, thingbots, action heroes and big dogs coexist. In this new habitat, we are forming complex relationships with nebulous surveillance systems, machine intelligences and architectures of control, confronting questions about our freedom and capacity to act under invisible constraints."
anabjain  2014  dconstruct  dconstruct2014  bigdog  surveillance  machineintelligence  ai  artificialintelligence  technology  design  systesmthinking  individualism  privacy  future  wearable  wearables  nsa  complexity  googleglass  intenetofthings  control 
september 2014 by robertogreco
Deep Belief by Jetpac - teach your phone to recognize any object on the App Store on iTunes
"Teach your iPhone to see! Teach it to recognize any object using the Jetpac Deep Belief framework running on the phone.

See the future - this is the latest in Object Recognition technology, on a phone for the first time.

The app helps you to teach the phone to recognize an object by taking a short video of that object, and then teach it what is not the object, by taking a short video of everything around, except that object. Then you can scan your surroundings with your phone camera, and it will detect when you are pointing at the object which you taught it to recognize.

We trained our Deep Belief convolutional neural network on a million photos, and like a brain, it learned concepts of textures, shapes and patterns, and combined those to recognize objects. It includes an easily-trainable top layer so you can recognize the objects that you are interested in.

If you want to build custom object recognition into your own iOS app, you can download our Deep Belief SDK framework. It's an implementation of the Krizhevsky convolutional neural network architecture for object recognition in images, running in under 300ms on an iPhone 5S, and available under an open BSD License."

[via: https://medium.com/message/the-fire-phone-at-the-farmers-market-34f51c2ba885 petewarden ]

[See also: http://petewarden.com/2014/04/08/how-to-add-a-brain-to-your-smart-phone/ ]
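The recipe the app description walks through (a network pretrained on a million photos, plus an "easily-trainable top layer" fit to your own positive and negative video frames) is transfer learning. A minimal sketch, assuming a stand-in feature extractor in place of the real convolutional net; the function names and toy data here are illustrative, not the Jetpac SDK's API:

```python
import math
import random

random.seed(0)

def cnn_features(frame):
    """Stand-in for the frozen pretrained network: maps a frame (a flat
    list of pixel values) to a feature vector. Hypothetical; the real
    framework runs a Krizhevsky-style convolutional net."""
    return [math.tanh(p) for p in frame[:64]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_top_layer(pos_frames, neg_frames, lr=0.5, epochs=300):
    """Fit a logistic-regression 'top layer' on frozen features:
    positives are frames of the object, negatives are everything else."""
    data = [(cnn_features(f), 1.0) for f in pos_frames] + \
           [(cnn_features(f), 0.0) for f in neg_frames]
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def detect(frame, w, b, threshold=0.5):
    """True when the top layer thinks the taught object is in view."""
    x = cnn_features(frame)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > threshold

# Toy stand-in data: "object" frames are bright, background frames dark.
def bright():
    return [random.uniform(0.5, 1.0) for _ in range(64)]

def dark():
    return [random.uniform(-1.0, 0.0) for _ in range(64)]

w, b = train_top_layer([bright() for _ in range(20)],
                       [dark() for _ in range(20)])
```

The point of the real framework is that the feature extractor has already seen a million photos, which is why a few seconds of video suffices to fit the small top layer on a phone.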
applications  ios  ios7  iphone  ipad  objects  objectrecognition  identification  objectidentification  mobile  phones  2014  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
The Fire Phone at the farmers market — The Message — Medium
"With the exception of a few paintings, all of Amazon’s demo “items” were commercial products: things with ISBNs, bar codes, and/or spectral signatures. Things with price tags.

We did not see the Fire Phone recognize a eucalyptus tree.

There is reason to suspect the Fire Phone cannot identify a goldfinch.

And I do not think the Fire Phone can tell me which of these “items” is kale.

This last one is the most troubling, because a system that greets a bag of frozen vegetables with a bar code like an old friend but draws a blank on a basket of fresh greens at the farmers market—that’s not just technical. That’s political.

But here’s the thing: The kale is coming.

There’s an iPhone app called Deep Belief, a tech demo from programmer Pete Warden. It’s free."



"If Amazon’s Fire Phone could tell kale from Swiss chard, if it could recognize trees and birds, I think its polarity would flip entirely, and it would become a powerful ally of humanistic values. As it stands, Firefly adds itself to the forces expanding the commercial sphere, encroaching on public space, insisting that anything interesting must have a price tag. But of course, that’s Amazon: They’re in The Goldfinch detection business, not the goldfinch detection business.

If we ever do get a Firefly for all the things without price tags, we’ll probably get it from Google, a company that’s already working hard on computer vision optimized for public space. It’s lovely to imagine one of Google’s self-driving cars roaming around, looking everywhere at once, diligently noting street signs and stop lights… and noting also the trees standing alongside those streets and the birds perched alongside those lights.

Lovely, but not likely.

Maybe the National Park Service needs to get good at this.

At this point, the really deeply humanistic critics are thinking: “Give me a break. You need an app for this? Buy a bird book. Learn the names of trees.” Okay, fine. But, you know what? I have passed so much flora and fauna in my journeys around this fecund neighborhood of mine and wondered: What is that? If I had a humanistic Firefly to tell me, I’d know their names by now."
amazon  technology  robinsloan  objects  objectrecognition  identification  objectidentification  firefly  mobile  phones  2014  jeffbezos  consumption  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
George Dyson: No Time Is There – The Digital Universe and Why Things Appear To Be Speeding Up - The Long Now
"The digital big bang

When the digital universe began, in 1951 in New Jersey, it was just 5 kilobytes in size. "That's just half a second of MP3 audio now," said Dyson. The place was the Institute for Advanced Study, Princeton. The builder was engineer Julian Bigelow. The instigator was mathematician John von Neumann. The purpose was to design hydrogen bombs.

Bigelow had helped develop signal processing and feedback (cybernetics) with Norbert Wiener. Von Neumann was applying ideas from Alan Turing and Kurt Gödel, along with his own. They were inventing AND/OR gates, addresses, shift registers, rapid-access memory, stored programs, a serial architecture—all the basics of the modern computer world, all without thought of patents. While recuperating from brain surgery, Stanislaw Ulam invented the Monte Carlo method of analysis as a shortcut to understanding solitaire. Soon von Neumann's wife Klári was employing it to model the behavior of neutrons in a fission explosion. By 1953, Nils Barricelli was modeling life itself in the machine—virtual digital beings competed and evolved freely in their 5-kilobyte world.

In the few years they ran that machine, from 1951 to 1957, they worked on the most difficult problems of their time, five main problems that are on very different time scales—26 orders of magnitude in time—from the lifetime of a neutron in a bomb's chain reaction measured in billionths of a second, to the behavior of shock waves on the scale of seconds, to weather prediction on a scale of days, to biological evolution on the scale of centuries, to the evolution of stars and galaxies over billions of years. And our lives, measured in days and years, are right in the middle of the scale of time. "I still haven't figured that out."
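Ulam's Monte Carlo idea is simple enough to sketch: when a probability is too hard to compute exactly, estimate it by running many random trials and counting successes. A toy version in Python (the shuffled-deck "solitaire" stand-in is mine, chosen only because its exact answer, 4/52, is easy to check against):

```python
import random

def monte_carlo(trials, experiment, seed=0):
    """Estimate Pr[experiment succeeds] by repeated random sampling --
    Ulam's shortcut: simulate when exact enumeration is intractable."""
    rng = random.Random(seed)
    hits = sum(experiment(rng) for _ in range(trials))
    return hits / trials

def top_card_is_ace(rng):
    """One toy trial: shuffle a 52-card deck, check the top card."""
    deck = list(range(52))  # cards 0..3 stand for the four aces
    rng.shuffle(deck)
    return deck[0] < 4

estimate = monte_carlo(100_000, top_card_is_ace)
# the exact answer is 4/52 ~= 0.0769; the estimate converges toward it
```

The same pattern, with a neutron's random walk in place of a card shuffle, is what Klári von Neumann ran for the fission calculations.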

Julian Bigelow was frustrated that the serial, address-constrained, clock-driven architecture of computers became standard because it is so inefficient. He thought that templates (recognition devices) would work better than addresses. The machine he had built for von Neumann ran on sequences rather than a clock. In 1999 Bigelow told George Dyson, "Sequence is different from time. No time is there." That's why the digital world keeps accelerating in relation to our analog world, which is based on time, and why from the perspective of the computational world, our world keeps slowing down.

The acceleration is reflected in the self-replication of computers, Dyson noted: "By now five or six trillion transistors per second are being added to the digital universe, and they're all connected." Dyson is a kayak builder, emulating the wood-scarce Arctic natives to work with minimum frame inside a skin craft. But in the tropics, where there is a surplus of wood, natives make dugout canoes, formed by removing wood. "We're now surrounded by so much information," Dyson concluded, "we have to become dugout canoe builders. The buzzword of last year was 'big data.' Here's my definition of the situation: Big data is what happened when the cost of storing information became less than the cost of throwing it away."

--Stewart Brand"

[See also: http://blog.longnow.org/02014/04/04/george-dyson-seminar-flashback-no-time-is-there/ ]
data  longnow  georgedyson  computing  history  stewartbrand  2013  ai  artificialintelligence  time  julianbigelow 
april 2014 by robertogreco
Patent US8156160 - Poet personalities - Google Patents
"A method of generating a poet personality including reading poems, each of the poems containing text, generating analysis models, each of the analysis models representing one of the poems, and storing the analysis models in a personality data structure. The personality data structure further includes weights, each of the weights associated with each of the analysis models. The weights include integer values."



BACKGROUND
This invention relates to generating poetry from a computer.

A computer may be used to generate text, such as poetry, to an output device and/or storage device. The displayed text may be in response to a user input or via an automatic composition process. Devices for generating poetry via a computer have been proposed which involve set slot grammars in which certain parts of speech, that are provided in a list, are selected for certain slots.

SUMMARY
In an aspect, the invention features a method of generating a poet personality including reading poems, each of the poems containing text, generating analysis models, each of the analysis models representing one of the poems, and storing the analysis models in a personality data structure. The personality data structure further includes weights, each of the weights associated with each of the analysis models. The weights include integer values.

In another aspect, the invention features a poet's assistant method including loading a word processing program, receiving a word in the word processing program provided by a user, displaying poet windows in response to receiving the word and processing the word in each of the windows. The poet windows may include combinations of a finish word window, a finish line window and a finish poem window. Processing the word in the finish word window includes loading an analysis model, locating the word in the analysis model and generating a proposed word in conjunction with the author analysis model.

The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.



"1. A computer-implemented method of generating a poet personality comprising:
analyzing by one or more computers a plurality of poems, each of the poems containing a plurality of words;

generating by the one or more computers a plurality of analysis models, each of said analysis models representing one of said plurality of poems, by

marking by the one or more computers words in the poems with rhyme numbers with words that rhyme with each other having the same rhyme number;

generating by the one or more computers a data structure that specifies n-grams found in the text, with each analysis model having a set of weights, bigram, trigram and quadgram exponents; and

storing the plurality of analysis models in a personality data structure including a set of parameters that control poetry generation using the personality data structure."
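The claim describes a concrete pipeline: number rhyming words so rhymes share an identifier, count n-grams per poem, and bundle the resulting per-poem models, integer weights and exponents into a "personality" structure. A rough sketch of that data structure in Python; the last-two-letters rhyme test and every name below are illustrative guesses, not the patented method:

```python
from collections import Counter

def rhyme_numbers(words):
    """Assign each word a rhyme number; words judged to rhyme share one.
    (Crude stand-in test: same last two letters. The patent does not
    disclose its rhyme-detection rule here.)"""
    groups, numbers = {}, []
    for word in words:
        key = word[-2:].lower()
        groups.setdefault(key, len(groups))
        numbers.append(groups[key])
    return numbers

def analysis_model(poem):
    """Build one per-poem analysis model: rhyme marks plus n-gram
    counts, with placeholder weight/exponents as in the claim."""
    words = poem.lower().split()
    return {
        "rhyme_numbers": rhyme_numbers(words),
        "bigrams": Counter(zip(words, words[1:])),
        "trigrams": Counter(zip(words, words[1:], words[2:])),
        "weight": 1,  # integer weight per model, per the claim
        "exponents": {"bigram": 1, "trigram": 1, "quadgram": 1},
    }

def poet_personality(poems):
    """The 'personality data structure': one analysis model per poem
    plus parameters controlling generation (illustrative)."""
    return {
        "models": [analysis_model(p) for p in poems],
        "parameters": {"max_lines": 4},
    }

personality = poet_personality([
    "roses are red violets are blue",
    "the cat sat on the mat",
])
```

Generation (the part the claims only gesture at) would then sample words whose n-gram counts and rhyme numbers score well against these models, with the weights and exponents tuning how strongly each poem's style pulls.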
poetry  poets  patents  raykurzweil  google  johnkeklak  1999  via:soulellis  artificialintelligence  ai 
march 2014 by robertogreco
Q&A: Hacker Historian George Dyson Sits Down With Wired's Kevin Kelly | Wired Magazine | Wired.com
"In some creation myths, life arises out of the earth; in others, life falls out of the sky. The creation myth of the digital universe entails both metaphors. The hardware came out of the mud of World War II, and the code fell out of abstract mathematical concepts. Computation needs both physical stuff and a logical soul to bring it to life…"

"…When I first visited Google…I thought, my God, this is not Turing’s mansion—this is Turing’s cathedral. Cathedrals were built over hundreds of years by thousands of nameless people, each one carving a little corner somewhere or adding one little stone. That’s how I feel about the whole computational universe. Everybody is putting these small stones in place, incrementally creating this cathedral that no one could even imagine doing on their own."
artificialintelligence  ai  software  nuclearbombs  stanulam  hackers  hacking  alanturing  coding  klarivanneumann  nilsbarricelli  MANIAC  digitaluniverse  biology  digitalorganisms  computers  computing  freemandyson  johnvanneumann  interviews  creation  kevinkelly  turing'smansion  turing'scathedral  turing  wired  history  georgedyson 
february 2012 by robertogreco
Gardens and Zoos – Blog – BERG
"So, much simpler systems than people or pets can find places in our lives as companions. Legible motives, limited behaviours and agency can elicit response, empathy and engagement from us.

We think this is rich territory for design as the things around us start to acquire means of context-awareness, computation and connectivity.

As we move from making inert tools – that we are unequivocally the users of – to companions, with behaviours that animate them – we wonder whether we should go straight from this…

Ultimately we’re interested in the potential for new forms of companion species that extend us. A favourite project for us is Natalie Jeremijenko’s “Feral Robotic Dogs” – a fantastic example of legibility, seamful-ness and BASAAP…

We need to engage with the complexity and make it open up to us.

To make evident, seamful surfaces through which we can engage with puppy-smart things."
williamsburroughs  chrisheathcote  nataliejeremijenko  companionship  simplicity  context-awareness  artificialintelligence  ai  behavior  empathy  2012  interactiondesign  interaction  internetofthings  basaap  robots  future  berglondon  berg  mattjones  design  spimes  iot  from delicious
january 2012 by robertogreco

