robertogreco + machinelearning   15

James Bridle on New Dark Age: Technology and the End of the Future - YouTube
"As the world around us increases in technological complexity, our understanding of it diminishes. Underlying this trend is a single idea: the belief that our existence is understandable through computation, and more data is enough to help us build a better world.

In his brilliant new work, leading artist and writer James Bridle surveys the history of art, technology, and information systems, and reveals the dark clouds that gather over our dreams of the digital sublime."
quantification  computationalthinking  systems  modeling  bigdata  data  jamesbridle  2018  technology  software  systemsthinking  bias  ai  artificialintelligent  objectivity  inequality  equality  enlightenment  science  complexity  democracy  information  unschooling  deschooling  art  computation  computing  machinelearning  internet  email  web  online  colonialism  decolonization  infrastructure  power  imperialism  deportation  migration  chemtrails  folkliterature  storytelling  conspiracytheories  narrative  populism  politics  confusion  simplification  globalization  global  process  facts  problemsolving  violence  trust  authority  control  newdarkage  darkage  understanding  thinking  howwethink  collapse 
september 2018 by robertogreco
Zeynep Tufekci: We're building a dystopia just to make people click on ads | TED Talk | TED.com
"We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response."

[See also: "Machine intelligence makes human morals more important"
https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important

"Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics.""]
zeyneptufekci  machinelearning  ai  artificialintelligence  youtube  facebook  google  amazon  ethics  computing  advertising  politics  behavior  technology  web  online  internet  susceptibility  dystopia  sociology  donaldtrump 
october 2017 by robertogreco
Ellen Ullman: Life in Code: "A Personal History of Technology" | Talks at Google - YouTube
"The last twenty years have brought us the rise of the internet, the development of artificial intelligence, the ubiquity of once unimaginably powerful computers, and the thorough transformation of our economy and society. Through it all, Ellen Ullman lived and worked inside that rising culture of technology, and in Life in Code she tells the continuing story of the changes it wrought with a unique, expert perspective.

When Ellen Ullman moved to San Francisco in the early 1970s and went on to become a computer programmer, she was joining a small, idealistic, and almost exclusively male cadre that aspired to genuinely change the world. In 1997 Ullman wrote Close to the Machine, the now classic and still definitive account of life as a coder at the birth of what would be a sweeping technological, cultural, and financial revolution.

Twenty years later, the story Ullman recounts is neither one of unbridled triumph nor a nostalgic denial of progress. It is necessarily the story of digital technology’s loss of innocence as it entered the cultural mainstream, and it is a personal reckoning with all that has changed, and so much that hasn’t. Life in Code is an essential text toward our understanding of the last twenty years—and the next twenty."
ellenullman  bias  algorithms  2017  technology  sexism  racism  age  ageism  society  exclusion  perspective  families  parenting  mothers  programming  coding  humans  humanism  google  larrypage  discrimination  self-drivingcars  machinelearning  ai  artificialintelligence  literacy  reading  howweread  humanities  education  publicschools  schools  publicgood  libertarianism  siliconvalley  generations  future  pessimism  optimism  hardfun  kevinkelly  computing 
october 2017 by robertogreco
Idle Words
"The real story in this mess is not the threat that algorithms pose to Amazon shoppers, but the threat that algorithms pose to journalism. By forcing reporters to optimize every story for clicks, not giving them time to check or contextualize their reporting, and requiring them to race to publish follow-on articles on every topic, the clickbait economics of online media encourage carelessness and drama. This is particularly true for technical topics outside the reporter’s area of expertise.

And reporters have no choice but to chase clicks. Because Google and Facebook have a duopoly on online advertising, the only measure of success in publishing is whether a story goes viral on social media. Authors are evaluated by how individual stories perform online, and face constant pressure to make them more arresting. Highly technical pieces are farmed out to junior freelancers working under strict time limits. Corrections, if they happen at all, are inserted quietly through ‘ninja edits’ after the fact.

There is no real penalty for making mistakes, but there is enormous pressure to frame stories in whatever way maximizes page views. Once those stories get picked up by rival news outlets, they become ineradicable. The sheer weight of copycat coverage creates the impression of legitimacy. As the old adage has it, a lie can get halfway around the world while the truth is pulling its boots on.

Earlier this year, when the Guardian published an equally ignorant (and far more harmful) scare piece about a popular secure messenger app, it took a group of security experts six months of cajoling and pressure to shame the site into amending its coverage. And the Guardian is a prestige publication, with an independent public editor. Not every story can get such editorial scrutiny on appeal, or attract the sympathetic attention of Teen Vogue.

The very machine learning systems that Channel 4’s article purports to expose are eroding online journalism’s ability to do its job.

Moral panics like this one are not just harmful to musket owners and model rocket builders. They distract and discredit journalists, making it harder to perform the essential function of serving as a check on the powerful.

The real story of machine learning is not how it promotes home bomb-making, but that it's being deployed at scale with minimal ethical oversight, in the service of a business model that relies entirely on psychological manipulation and mass surveillance. The capacity to manipulate people at scale is being sold to the highest bidder, and has infected every aspect of civic life, including democratic elections and journalism.

Together with climate change, this algorithmic takeover of the public sphere is the biggest news story of the early 21st century. We desperately need journalists to cover it. But as they grow more dependent on online publishing for their professional survival, their capacity to do this kind of reporting will disappear, if it has not disappeared already."
algorithms  amazon  internet  journalism  climatechange  maciejceglowski  moralpanic  us  clickbait  attention  ethics  machinelearning  maciejcegłowski 
september 2017 by robertogreco
GitHub - Microsoft/ELL: Embedded Learning Library
"The Embedded Learning Library (ELL) allows you to build and deploy machine-learned pipelines onto embedded platforms, like Raspberry Pis, Arduinos, micro:bits, and other microcontrollers. The deployed machine learning model runs on the device, disconnected from the cloud. Our APIs can be used either from C++ or Python.

This project has been developed by a team of researchers at Microsoft Research. It's a work in progress, and we expect it to change rapidly, including breaking API changes. Despite this code churn, we welcome you to try it and give us feedback!

A good place to start is the tutorial, which allows you to do image recognition on a Raspberry Pi with a web cam, disconnected from the cloud. The software you deploy to the Pi will recognize a variety of common objects on camera and print a label for the recognized object on the Pi's screen."
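
A hypothetical sketch of that tutorial loop in Python — ELL compiles the model into a local `model` module that runs entirely on the device, but the accessor names below (`get_default_input_shape`, `predict`) are assumptions patterned on the tutorial, not verified API:

```python
# Hypothetical sketch, not verified ELL API: `model` is the Python module
# ELL compiles from a pretrained model; accessor names are assumptions.
import cv2          # OpenCV, for grabbing frames from the web cam
import numpy as np
import model        # the compiled ELL model -- inference runs on-device

categories = open("categories.txt").read().splitlines()
camera = cv2.VideoCapture(0)
input_shape = model.get_default_input_shape()  # assumed accessor

while True:
    ok, frame = camera.read()
    if not ok:
        break
    # Resize the frame to the model's expected input size and flatten it.
    resized = cv2.resize(frame, (input_shape.columns, input_shape.rows))
    predictions = model.predict(resized.astype(np.float32).ravel())
    best = int(np.argmax(predictions))
    print(categories[best])  # label for the recognized object, no cloud round-trip
```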
machinelearning  embedded  arduino  ai  raspberrypi  microsoft  code  microcontrollers  via:clivethompson 
july 2017 by robertogreco
Eyes Without a Face — Real Life
"The American painter and sculptor Ellsworth Kelly — remembered mainly for his contributions to minimalism, Color Field, and Hard-edge painting — was also a prodigious birdwatcher. “I’ve always been a colorist, I think,” he said in 2013. “I started when I was very young, being a birdwatcher, fascinated by the bird colors.” In the introduction to his monograph, published by Phaidon shortly before his death in 2015, he writes, “I remember vividly the first time I saw a Redstart, a small black bird with a few very bright red marks … I believe my early interest in nature taught me how to ‘see.’”

Vladimir Nabokov, the world’s most famous lepidopterist, classified, described, and named multiple butterfly species, reproducing their anatomy and characteristics in thousands of drawings and letters. “Few things have I known in the way of emotion or appetite, ambition or achievement, that could surpass in richness and strength the excitement of entomological exploration,” he wrote. Tom Bradley suggests that Nabokov suffered from the same “referential mania” as the afflicted son in his story “Signs and Symbols,” imagining that “everything happening around him is a veiled reference to his personality and existence” (as evidenced by Nabokov’s own “entomological erudition” and the influence of a most major input: “After reading Gogol,” he once wrote, “one’s eyes become Gogolized. One is apt to see bits of his world in the most unexpected places”).

For me, a kind of referential mania of things unnamed began with fabric swatches culled from Alibaba and fine suiting websites, with their wonderfully zoomed images that give you a sense of a particular material’s grain or flow. The sumptuous decadence of velvets and velours that suggest the gloved armatures of state power, and their botanical analogue, mosses and plant lichens. Industrial materials too: the seductive artifice of Gore-Tex and other thermo-regulating meshes, weather-palimpsested blue tarpaulins and piney green garden netting (winningly known as “shade cloth”). What began as an urge to collect colors and textures, to collect moods, quickly expanded into the delicious world of carnivorous plants and bugs — mantises exhibit a particularly pleasing biomimicry — and deep-sea aphotic creatures, which rewardingly incorporate a further dimension of movement. Walls suggest piled textiles, and plastics the murky translucence of jellyfish, and in every bag of steaming city garbage I now smell a corpse flower.

“The most pleasurable thing in the world, for me,” wrote Kelly, “is to see something and then translate how I see it.” I feel the same way, dosed with a healthy fear of cliché or redundancy. Why would you describe a new executive order as violent when you could compare it to the callous brutality of the peacock shrimp obliterating a crab, or call a dress “blue” when it could be cobalt, indigo, cerulean? Or ivory, alabaster, mayonnaise?

We might call this impulse building visual acuity, or simply learning how to see, the seeing that John Berger describes as preceding even words, and then again as completely renewed after he underwent the “minor miracle” of cataract surgery: “Your eyes begin to re-remember first times,” he wrote in the illustrated Cataract, “…details — the exact gray of the sky in a certain direction, the way a knuckle creases when a hand is relaxed, the slope of a green field on the far side of a house, such details reassume a forgotten significance.” We might also consider it as training our own visual recognition algorithms and taking note of visual or affective relationships between images: building up our datasets. For myself, I forget people’s faces with ease but never seem to forget an image I have seen on the internet.

At some level, this training is no different from Facebook’s algorithm learning based on the images we upload. Unlike Google, which relies on humans solving CAPTCHAs to help train its AI, Facebook’s automatic generation of alt tags pays dividends in speed as well as privacy. Still, the accessibility context in which the tags are deployed limits what the machines currently tell us about what they see: Facebook’s researchers are trying to “understand and mitigate the cost of algorithmic failures,” according to the aforementioned white paper, as when, for example, humans were misidentified as gorillas and blind users were led to then comment inappropriately. “To address these issues,” the paper states, “we designed our system to show only object tags with very high confidence.” “People smiling” is less ambiguous and more anodyne than happy people, or people crying.

So there is a gap between what the algorithm sees (analyzes) and says (populates an image’s alt text with). Even though it might only be authorized to tell us that a picture is taken outside, then, it’s fair to assume that computer vision is training itself to distinguish gesture, or the various colors and textures of the slope of a green field. A tag of “sky” today might be “cloudy with a threat of rain” by next year. But machine vision has the potential to do more than merely to confirm what humans see. It is learning to see something different that doesn’t reproduce human biases and uncover emotional timbres that are machinic. On Facebook’s platforms (including Instagram, Messenger, and WhatsApp) alone, over two billion images are shared every day: the monolith’s referential mania looks more like fact than delusion."
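
The "very high confidence" policy Facebook's researchers describe is, at bottom, a threshold filter. A toy illustration — my own sketch, not Facebook's code, with an arbitrary 0.9 cutoff standing in for "very high confidence":

```python
# Illustrative only -- not Facebook's implementation. The 0.9 cutoff is an
# arbitrary assumption.
CONFIDENCE_THRESHOLD = 0.9

def alt_text(tags):
    """tags: list of (label, confidence) pairs from a vision model."""
    kept = [label for label, conf in tags if conf >= CONFIDENCE_THRESHOLD]
    if not kept:
        return "Image may contain: no description available."
    return "Image may contain: " + ", ".join(kept)

# The ambiguous, low-confidence reading is dropped; the anodyne one survives.
print(alt_text([("outdoor", 0.97), ("sky", 0.95),
                ("people smiling", 0.91), ("happy people", 0.42)]))
```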
2017  rahelaima  algorithms  facebook  ai  artificialintelligence  machinelearning  tagging  machinevision  at  ellsworthkelly  color  tombrdley  google  captchas  matthewplummerfernandez  julesolitski  neuralnetworks  eliezeryudkowsky  seeing 
may 2017 by robertogreco
Physiognomy’s New Clothes – Blaise Aguera y Arcas – Medium
"In 1844, a laborer from a small town in southern Italy was put on trial for stealing “five ricottas, a hard cheese, two loaves of bread […] and two kid goats”. The laborer, Giuseppe Villella, was reportedly convicted of being a brigante (bandit), at a time when brigandage — banditry and state insurrection — was seen as endemic. Villella died in prison in Pavia, northern Italy, in 1864.

Villella’s death led to the birth of modern criminology. Nearby lived a scientist and surgeon named Cesare Lombroso, who believed that brigantes were a primitive type of people, prone to crime. Examining Villella’s remains, Lombroso found “evidence” confirming his belief: a depression on the occiput of the skull reminiscent of the skulls of “savages and apes”.

Using precise measurements, Lombroso recorded further physical traits he found indicative of derangement, including an “asymmetric face”. Criminals, Lombroso wrote, were “born criminals”. He held that criminality is inherited, and carries with it inherited physical characteristics that can be measured with instruments like calipers and craniographs [1]. This belief conveniently justified his a priori assumption that southern Italians were racially inferior to northern Italians.

The practice of using people’s outer appearance to infer inner character is called physiognomy. While today it is understood to be pseudoscience, the folk belief that there are inferior “types” of people, identifiable by their facial features and body measurements, has at various times been codified into country-wide law, providing a basis to acquire land, block immigration, justify slavery, and permit genocide. When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism.

Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development. Whether intentional or not, this “laundering” of human prejudice through computer algorithms can make those biases appear to be justified objectively.

A recent case in point is Xiaolin Wu and Xi Zhang’s paper, “Automated Inference on Criminality Using Face Images”, submitted to arXiv (a popular online repository for physics and machine learning researchers) in November 2016. Wu and Zhang’s claim is that machine learning techniques can predict the likelihood that a person is a convicted criminal with nearly 90% accuracy using nothing but a driver’s license-style face photo. Although the paper was not peer-reviewed, its provocative findings generated a range of press coverage. [2]
Many of us in the research community found Wu and Zhang’s analysis deeply problematic, both ethically and scientifically. In one sense, it’s nothing new. However, the use of modern machine learning (which is both powerful and, to many, mysterious) can lend these old claims new credibility.

In an era of pervasive cameras and big data, machine-learned physiognomy can also be applied at unprecedented scale. Given society’s increasing reliance on machine learning for the automation of routine cognitive tasks, it is urgent that developers, critics, and users of artificial intelligence understand both the limits of the technology and the history of physiognomy, a set of practices and beliefs now being dressed in modern clothes. Hence, we are writing both in depth and for a wide audience: not only for researchers, engineers, journalists, and policymakers, but for anyone concerned about making sure AI technologies are a force for good.

We will begin by reviewing how the underlying machine learning technology works, then turn to a discussion of how machine learning can perpetuate human biases."



"Research shows that the photographer’s preconceptions and the context in which the photo is taken are as important as the faces themselves; different images of the same person can lead to widely different impressions. It is relatively easy to find a pair of images of two individuals matched with respect to age, race, and gender, such that one of them looks more trustworthy or more attractive, while in a different pair of images of the same people the other looks more trustworthy or more attractive."



"On a scientific level, machine learning can give us an unprecedented window into nature and human behavior, allowing us to introspect and systematically analyze patterns that used to be in the domain of intuition or folk wisdom. Seen through this lens, Wu and Zhang’s result is consistent with and extends a body of research that reveals some uncomfortable truths about how we tend to judge people.

On a practical level, machine learning technologies will increasingly become a part of all of our lives, and like many powerful tools they can and often will be used for good — including to make judgments based on data faster and fairer.

Machine learning can also be misused, often unintentionally. Such misuse tends to arise from an overly narrow focus on the technical problem, hence:

• Lack of insight into sources of bias in the training data;
• Lack of a careful review of existing research in the area, especially outside the field of machine learning;
• Not considering the various causal relationships that can produce a measured correlation;
• Not thinking through how the machine learning system might actually be used, and what societal effects that might have in practice.

Wu and Zhang’s paper illustrates all of the above traps. This is especially unfortunate given that the correlation they measure — assuming that it remains significant under more rigorous treatment — may actually be an important addition to the already significant body of research revealing pervasive bias in criminal judgment. Deep learning based on superficial features is decidedly not a tool that should be deployed to “accelerate” criminal justice; attempts to do so, like Faception’s, will instead perpetuate injustice."
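
The first trap on that list — unexamined bias in the training data — is easy to reproduce with made-up data. In the sketch below, the two classes differ only in a collection artifact (how often subjects smile, standing in for ID photos versus web photos), yet a classifier reaches roughly the "nearly 90% accuracy" Wu and Zhang report without learning anything about the trait it claims to predict:

```python
# Toy demonstration with made-up data: a confound introduced by how the two
# classes were photographed, not by the trait itself, drives the accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
label = rng.integers(0, 2, n)                 # the trait we claim to predict
# Collection artifact: class 1 was photographed smiling far less often.
smiling = rng.random(n) < np.where(label == 1, 0.1, 0.9)
noise = rng.normal(size=(n, 5))               # genuinely uninformative features
X = np.column_stack([smiling, noise])

X_tr, X_te, y_tr, y_te = train_test_split(X, label, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))     # ~0.9, from the confound alone
```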
blaiseaguerayarcas  physiognomy  2017  facerecognition  ai  artificialintelligence  machinelearning  racism  bias  xiaolinwu  xi  zhang  race  profiling  racialprofiling  giuseppevillella  cesarelombroso  pseudoscience  photography  chrononet  deeplearning  alexkrizhevsky  ilyasutskever  geoffreyhinton  gillevi  talhassner  alexnet  mugshots  objectivity  giambattistadellaporta  francisgalton  samuelnorton  josiahnott  georgegiddon  charlesdarwin  johnhoward  thomasclarkson  williamshakespeare  iscnewton  ernsthaeckel  scientificracism  jamesweidmann  faception  criminality  lawenforcement  faces  doothelange  mikeburton  trust  trustworthiness  stephenjaygould  philippafawcett  roberthughes  testosterone  gender  criminalclass  aggression  risk  riskassessment  judgement  brianholtz  shermanalexie  feedbackloops  identity  disability  ableism  disabilities 
may 2017 by robertogreco
Build a Better Monster: Morality, Machine Learning, and Mass Surveillance
"technology and ethics aren't so easy to separate, and that if you want to know how a system works, it helps to follow the money."



"A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2017 election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea.

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we're good people. We like freedom. How could we have built tools that subvert it?"



"The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likeable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills."



"These companies exemplify the centralized, feudal Internet of 2017. While the protocols that comprise the Internet remain open and free, in practice a few large American companies dominate every aspect of online life. Google controls search and email, AWS controls cloud hosting, Apple and Google have a duopoly in mobile phone operating systems. Facebook is the one social network.

There is more competition and variety among telecommunications providers and gas stations than there is among the Internet giants."



"Build a Better Monster
Idle Words · by Maciej Cegłowski
I came to the United States as a six year old kid from Eastern Europe. One of my earliest memories of that time was the Safeway supermarket, an astonishing display of American abundance.

It was hard to understand how there could be so much wealth in the world.

There was an entire aisle devoted to breakfast cereals, a food that didn't exist in Poland. It was like walking through a canyon where the walls were cartoon characters telling me to eat sugar.

Every time we went to the supermarket, my mom would give me a quarter to play Pac Man. As a good socialist kid, I thought the goal of the game was to help Pac Man, who was stranded in a maze and needed to find his friends, who were looking for him.

My games didn't last very long.

The correct way to play Pac Man, of course, is to consume as much as possible while running from the ghosts that relentlessly pursue you. This was a valuable early lesson in what it means to be an American.

It also taught me that technology and ethics aren't so easy to separate, and that if you want to know how a system works, it helps to follow the money.

Today the technology that ran that arcade game permeates every aspect of our lives. We’re here at an emerging technology conference to celebrate it, and find out what exciting things will come next. But like the tail follows the dog, ethical concerns about how technology affects who we are as human beings, and how we live together in society, follow us into this golden future. No matter how fast we run, we can’t shake them.

This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things”, some of what we knocked down were the load-bearing walls of our democracy.

Worried CEOs are roving the landscape, peering into the churches and diners of red America. Steve Case, the AOL founder, roams the land trying to get people to found more startups. Mark Zuckerberg is traveling America having beautifully photographed conversations.

We’re all trying to understand why people can’t just get along. The emerging consensus in Silicon Valley is that polarization is a baffling phenomenon, but we can fight it with better fact-checking, with more empathy, and (at least in Facebook's case) with advanced algorithms to try and guide conversations between opposing camps in a more productive direction.

A question few are asking is whether the tools of mass surveillance and social control we spent the last decade building could have had anything to do with the debacle of the 2017 election, or whether destroying local journalism and making national journalism so dependent on our platforms was, in retrospect, a good idea.

We built the commercial internet by mastering techniques of persuasion and surveillance that we’ve extended to billions of people, including essentially the entire population of the Western democracies. But admitting that this tool of social control might be conducive to authoritarianism is not something we’re ready to face. After all, we're good people. We like freedom. How could we have built tools that subvert it?

As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

I contend that there are structural reasons to worry about the role of the tech industry in American political life, and that we have only a brief window of time in which to fix this.

Surveillance Capitalism

The economic basis of the Internet is surveillance. Every interaction with a computing device leaves a data trail, and whole industries exist to consume this data. Unlike dystopian visions from the past, this surveillance is not just being conducted by governments or faceless corporations. Instead, it’s the work of a small number of sympathetic tech companies with likeable founders, whose real dream is to build robots and Mars rockets and do cool things that make the world better. Surveillance just pays the bills.

It is a striking fact that mass surveillance has been driven almost entirely by private industry. While the Snowden revelations in 2013 made people anxious about government monitoring, that anxiety never seemed to carry over to the much more intrusive surveillance being conducted by the commercial Internet. Anyone who owns a smartphone carries a tracking device that knows (with great accuracy) where you’ve been, who you last spoke to and when, contains potentially decades-long archives of your private communications, a list of your closest contacts, your personal photos, and other very intimate information.

Internet providers collect (and can sell) your aggregated browsing data to anyone they want. A wave of connected devices for the home is competing to bring internet surveillance into the most private spaces. Enormous ingenuity goes into tracking people across multiple devices, and circumventing any attempts to hide from the tracking.

With the exception of China (which has its own ecology), the information these sites collect on users is stored permanently and with almost no legal controls by a small set of companies headquartered in the United States.

Two companies in particular dominate the world of online advertising and publishing, the economic engines of the surveillance economy.

Google, valued at $560 billion, is the world’s de facto email server, and occupies a dominant position in almost every area of online life. It’s unremarkable for a user to connect to the Internet on a Google phone using Google hardware, talking to Google servers via a Google browser, while blocking ads served over a Google ad network on sites that track visitors with Google analytics. This combination of search history, analytics and ad tracking gives the company unrivaled visibility into users’ browsing history. Through initiatives like AMP (Accelerated Mobile Pages), the company is attempting to extend its reach so that it becomes a proxy server for much of online publishing.

Facebook, valued at $400 billion, has close to two billion users and is aggressively seeking its next billion. It is the world’s largest photo storage service, and owns the world’s largest messaging service, WhatsApp. For many communities, Facebook is the tool of choice for political outreach and organizing, event planning, fundraising and communication. It is the primary source of news for a sizable fraction of Americans, and through its feed algorithm (which determines who sees what) has an unparalleled degree of editorial control over what that news looks like.

Together, these companies control some 65% of the online ad market, which in 2015 was estimated at $60B. Of that, half went to Google and $8B to Facebook. Facebook, the smaller player, is more aggressive in the move to new ad and content formats, particularly video and virtual reality.

These companies exemplify the centralized, feudal Internet of 2017. While the protocols that comprise the Internet remain open and free, in practice a few large American companies dominate every aspect of online life. Google controls search and email, AWS controls cloud hosting, Apple and Google have a duopoly in mobile phone operating systems. Facebook is the one social network.

There is more competition and variety among telecommunications providers and gas stations than there is among the Internet giants.

Data Hunger

The one thing these companies share is an insatiable appetite for data. They want to know where their users are, what they’re viewing, where their eyes are on the page, who they’re with, what they’re discussing, their purchasing habits, major life events (like moving or pregnancy), and anything else they can discover.

There are two interlocking motives for this data hunger: to target online advertising, and to train machine learning algorithms.

Advertising

Everyone is familiar with online advertising. Ads are served indirectly, based on real-time auctions … [more]
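
The real-time auctions mentioned above work roughly like a generalized second-price auction: the highest bidder wins the impression but pays about the runner-up’s price. A toy sketch of that core mechanism (real exchanges add price floors, fees, and targeting):

```python
# Toy generalized second-price auction -- the core of real-time ad bidding.
def run_auction(bids):
    """bids: advertiser -> bid in dollars. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = run_auction({"acme": 2.50, "globex": 1.75, "initech": 0.90})
print(winner, price)  # acme wins the impression but pays 1.75, the 2nd bid
```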
advertising  facebook  google  internet  politics  technology  apple  labor  work  machinelearning  security  democracy  california  taxes  engagement 
april 2017 by robertogreco
Image-to-Image Demo - Affine Layer
"Recently, I made a Tensorflow port of pix2pix by Isola et al., covered in the article Image-to-Image Translation in Tensorflow. I've taken a few pre-trained models and made an interactive web thing for trying them out. Chrome is recommended.

The pix2pix model works by training on pairs of images such as building facade labels to building facades, and then attempts to generate the corresponding output image from any input image you give it. The idea is straight from the pix2pix paper, which is a good read."

[See also: https://phillipi.github.io/pix2pix/

"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either."



"Here we show comprehensive results from each experiment in our paper. Please see the paper for details on these experiments.

Effect of the objective
Cityscapes
Facades

Effect of the generator architecture
Cityscapes

Effect of the discriminator patch scale
Cityscapes
Facades

Additional results
Map to aerial
Aerial to map
Semantic segmentation
Day to night
Edges to handbags
Edges to shoes
Sketches to handbags
Sketches to shoes"]
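
The paper’s generator objective combines the conditional GAN loss with an L1 distance to the ground-truth image. A minimal sketch of that combined loss — written here in PyTorch for brevity, though the linked port is TensorFlow; `generator` and `discriminator` are assumed to be defined elsewhere, and λ = 100 follows the paper:

```python
# Sketch of the pix2pix generator loss: conditional GAN term + lambda * L1.
# Assumes `generator` and `discriminator` are PyTorch nets defined elsewhere;
# the discriminator scores (input, output) image pairs concatenated on channels.
import torch
import torch.nn.functional as F

LAMBDA_L1 = 100.0  # weighting from the paper

def generator_loss(generator, discriminator, input_img, target_img):
    fake = generator(input_img)
    pred_fake = discriminator(torch.cat([input_img, fake], dim=1))
    adv = F.binary_cross_entropy_with_logits(pred_fake,
                                             torch.ones_like(pred_fake))
    l1 = F.l1_loss(fake, target_img)  # keeps output near the ground truth
    return adv + LAMBDA_L1 * l1
```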
machinelearning  art  drawing  via:meetar  deeplearning  neuralnetworks 
february 2017 by robertogreco
Remarks at the SASE Panel On The Moral Economy of Tech
"I am only a small minnow in the technology ocean, but since it is my natural habitat, I want to make an effort to describe it to you.

As computer programmers, our formative intellectual experience is working with deterministic systems that have been designed by other human beings. These can be very complex, but the complexity is not the kind we find in the natural world. It is ultimately always tractable. Find the right abstractions, and the puzzle box opens before you.

The feeling of competence, control and delight in discovering a clever twist that solves a difficult problem is what makes being a computer programmer sometimes enjoyable.

But as anyone who's worked with tech people knows, this intellectual background can also lead to arrogance. People who excel at software design become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.

Today we are embarked on a great project to make computers a part of everyday life. As Marc Andreessen memorably frames it, "software is eating the world". And those of us writing the software expect to be greeted as liberators.

Our intentions are simple and clear. First we will instrument, then we will analyze, then we will optimize. And you will thank us.

But the real world is a stubborn place. It is complex in ways that resist abstraction and modeling. It notices and reacts to our attempts to affect it. Nor can we hope to examine it objectively from the outside, any more than we can step out of our own skin.

The connected world we're building may resemble a computer system, but really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it. And it has the same old problems.

Approaching the world as a software problem is a category error that has led us into some terrible habits of mind.

BAD MENTAL HABITS

First, programmers are trained to seek maximal and global solutions. Why solve a specific problem in one place when you can fix the general problem for everybody, and for all time? We don't think of this as hubris, but as a laudable economy of effort. And the startup funding culture of big risk, big reward encourages this grandiose mode of thinking. There is powerful social pressure to avoid incremental change, particularly any change that would require working with people outside tech and treating them as intellectual equals.

Second, treating the world as a software project gives us a rationale for being selfish. The old adage has it that if you are given ten minutes to cut down a tree, you should spend the first five sharpening your axe. We are used to the idea of bootstrapping ourselves into a position of maximum leverage before tackling a problem.

In the real world, this has led to a pathology where the tech sector maximizes its own comfort. You don't have to go far to see this. Hop on BART after the conference and take a look at Oakland, or take a stroll through downtown San Francisco and try to persuade yourself you're in the heart of a boom that has lasted for forty years. You'll see a residential theme park for tech workers, surrounded by areas of poverty and misery that have seen no benefit and ample harm from our presence. We pretend that by maximizing our convenience and productivity, we're hastening the day when we finally make life better for all those other people.

Third, treating the world as software promotes fantasies of control. And the best kind of control is control without responsibility. Our unique position as authors of software used by millions gives us power, but we don't accept that this should make us accountable. We're programmers—who else is going to write the software that runs the world? To put it plainly, we are surprised that people seem to get mad at us for trying to help.

Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.

Of course, people obsessed with control have to eventually confront the fact of their own extinction. The response of the tech world to death has been enthusiastic. We are going to fix it. Google Ventures, for example, is seriously funding research into immortality. Their head VC will call you a "deathist" for pointing out that this is delusional.

Such fantasies of control come with a dark side. Witness the current anxieties about an artificial superintelligence, or Elon Musk's apparently sincere belief that we're living in a simulation. For a computer programmer, that's the ultimate loss of control. Instead of writing the software, you are the software.

We obsess over these fake problems while creating some real ones.

In our attempt to feed the world to software, techies have built the greatest surveillance apparatus the world has ever seen. Unlike earlier efforts, this one is fully mechanized and in a large sense autonomous. Its power is latent, lying in the vast amounts of permanently stored personal data about entire populations.

We started out collecting this information by accident, as part of our project to automate everything, but soon realized that it had economic value. We could use it to make the process self-funding. And so mechanized surveillance has become the economic basis of the modern tech industry.

SURVEILLANCE CAPITALISM

Surveillance capitalism has some of the features of a zero-sum game. The actual value of the data collected is not clear, but it is definitely an advantage to collect more than your rivals do. Because human beings develop an immune response to new forms of tracking and manipulation, the only way to stay successful is to keep finding novel ways to peer into people's private lives. And because much of the surveillance economy is funded by speculators, there is an incentive to try flashy things that will capture the speculators' imagination, and attract their money.

This creates a ratcheting effect where the behavior of ever more people is tracked ever more closely, and the collected information retained, in the hopes that further dollars can be squeezed out of it.

Just like industrialized manufacturing changed the relationship between labor and capital, surveillance capitalism is changing the relationship between private citizens and the entities doing the tracking. Our old ideas about individual privacy and consent no longer hold in a world where personal data is harvested on an industrial scale.

Those who benefit from the death of privacy attempt to frame our subjugation in terms of freedom, just like early factory owners talked about the sanctity of contract law. They insisted that a worker should have the right to agree to anything, from sixteen-hour days to unsafe working conditions, as if factory owners and workers were on an equal footing.

Companies that perform surveillance are attempting the same mental trick. They assert that we freely share our data in return for valuable services. But opting out of surveillance capitalism is like opting out of electricity, or cooked foods—you are free to do it in theory. In practice, it will upend your life.

Many of you had to obtain a US visa to attend this conference. The customs service announced yesterday it wants to start asking people for their social media profiles. Imagine trying to attend your next conference without a LinkedIn profile, and explaining to the American authorities why you are so suspiciously off the grid.

The reality is, opting out of surveillance capitalism means opting out of much of modern life.

We're used to talking about the private and public sector in the real economy, but in the surveillance economy this boundary doesn't exist. Much of the day-to-day work of surveillance is done by telecommunications firms, which have a close relationship with government. The techniques and software of surveillance are freely shared between practitioners on both sides. All of the major players in the surveillance economy cooperate with their own country's intelligence agencies, and are spied on (very effectively) by all the others.

As a technologist, this state of affairs gives me the feeling of living in a forest that is filling up with dry, dead wood. The very personal, very potent information we're gathering about people never goes away, only accumulates. I don't want to see the fire come, but at the same time, I can't figure out a way to persuade other people of the great danger.

So I try to spin scenarios.

THE INEVITABLE LIST OF SCARY SCENARIOS

One of the candidates running for President this year has promised to deport eleven million undocumented immigrants living in the United States, as well as block Muslims from entering the country altogether. Try to imagine this policy enacted using the tools of modern technology. The FBI would subpoena Facebook for information on every user born abroad. Email and phone conversations would be monitored to check for the use of Arabic or Spanish, and sentiment analysis applied to see if the participants sounded "nervous". Social networks, phone metadata, and cell phone tracking would lead police to nests of hiding immigrants.

We could do a really good job deporting people if we put our minds to it.

Or consider the other candidate running for President, the one we consider the sane alternative, who has been a longtime promoter of a system of extrajudicial murder that uses blanket surveillance of cell phone traffic, email, and social media to create lists of people to be tracked and killed with autonomous aircraft. … [more]
culture  ethics  privacy  surveillance  technology  technosolutionism  maciegceglowski  2016  computing  coding  programming  problemsolving  systemsthinking  systems  software  control  power  elonmusk  marcandreessen  siliconvalley  sanfrancisco  oakland  responsibility  machinelearning  googlevntures  vc  capitalism  speculation  consent  labor  economics  poland  dystopia  government  politics  policy  immortality 
june 2016 by robertogreco
'I Love My Label': Resisting the Pre-Packaged Sound in Ed-Tech
"I’ve argued elsewhere, drawing on a phrase by cyborg anthropologist Amber Case, that many of the industry-provided educational technologies we use create and reinforce a “templated self,” restricting the ways in which we present ourselves and perform our identities through their very technical architecture. The learning management system is a fine example of this, particularly with its “permissions” that shape who gets to participate and how, who gets to create, review, assess data and content. Algorithmic profiling now will be layered on top of these templated selves in ed-tech – the results, again: the pre-packaged student.

Indie ed-tech, much like the indie music from which it takes its inspiration, seeks to offer an alternative to the algorithms, the labels, the templates, the profiling, the extraction, the exploitation, the control. It’s a big task – an idealistic one, no doubt. But as the book Our Band Could Be Your Life, which chronicles the American indie music scene of the 1980s (and upon which Jim Groom drew for his talk on indie-ed tech last fall), notes, “Black Flag was among the first bands to suggest that if you didn’t like ‘the system,’ you should simply create one of your own.” If we don’t like ‘the system’ of ed-tech, we should create one of our own.

It’s actually not beyond our reach to do so.

We’re already working in pockets doing just that, with various projects to claim and reclaim and wire and rewire the Web so that it’s more just, more open, less exploitative, and counterintuitively perhaps less “personalized.” “The internet is shit today,” Pirate Bay founder Peter Sunde said last year. “It’s broken. It was probably always broken, but it’s worse than ever.” We can certainly say the same for education technology, with its long history of control, measurement, standardization.

We aren’t going to make it better by becoming corporate rockstars. This fundamental brokenness means we can’t really trust those who call for a “Napster moment” for education or those who hail the coming Internet/industrial revolution for schools. Indie means we don’t need millions of dollars, but it does mean we need community. We need a space to be unpredictable, for knowledge to be emergent not algorithmically fed to us. We need intellectual curiosity and serendipity – we need it from scholars and from students. We don’t need intellectual discovery to be trademarked, reduced to a tab that we click on to be fed the latest industry updates, what the powerful, well-funded people think we should know or think we should become."
2016  audreywatters  edupunk  edtech  independent  indie  internet  online  technology  napster  history  serendipity  messiness  curiosity  control  measurement  standardization  walledgardens  privacy  data  schools  education  highered  highereducation  musicindustry  jimgroom  ambercase  algorithms  bigdata  prediction  machinelearning  machinelistening  echonest  siliconvalley  software 
march 2016 by robertogreco
Caroline Sinders
"Hi there, I'm Caroline.

I am a User Experience and Interaction Designer, researcher, interactive storyteller, bad joke collector, and ridiculous pie baker. I was born in New Orleans and I am currently based in Brooklyn (and occasionally, I live in airports). Prior to graduate school, I worked in the creative world as a photographer for Harper's Bazaar Russia, Refinery 29, Style.Com, and Hypebeast, as well as a marketing coordinator. My entire professional career has been in digital culture, digital imaging, and digital branding.

Sometimes I make things with Twitter and Instagram, and I play around with APIs whenever I can. I used to design stories with stills; now I love to make things move. My design approach is to think of the user first and to focus on problem solving through whimsy, intelligence, and intuition. My skill set is broad: I research, conceptualize, brand, wireframe, and build. I see the big picture as a system made of very tiny and very integral moving parts. I dream in wireframes and personas.

I hold a masters from NYU's Interactive Telecommunications Program, and I have a BFA in Photography and Imaging with a focus in digital media and culture from NYU. Get at me sometime, I love to meet new people."

[via: "A talk on systems design, machine learning, and designing with empathy in digital spaces

Caroline Sinders is an artist and user researcher at IBM Watson who works with language, robots, and machine learning. Her work focuses on the line between human intervention and algorithms."
https://twitter.com/ablerism/status/693961348724690944 ]
carolinesinders  via:ablerism  ux  ui  interaction  design  twitter  instagram  apis  research  digital  digitalculture  digitalbranding  digitalimaging  machinelearning  systemsdesign  empathy  bots  humanintervention  algorithms 
february 2016 by robertogreco
Random Radicals: A Fake Kanji Experiment
[via:
http://prostheticknowledge.tumblr.com/post/136754440176/random-radicals-continuation-of-project-by
"As humans, we are able to communicate with others by drawing pictures, and somehow this has evolved into modern language. The ability to express our thoughts is a very powerful tool in our society. Being able to write is generally more difficult than just being able to read, and this is especially true for the Chinese language. From personal experience, being able to write Chinese is a lot more difficult than just being able to read Chinese and requires a greater understanding of the language.

We now have machines that can help us accurately classify images and read handwritten characters. However, for machines to gain a deeper understanding of the content they are processing, they will also need to be able to generate such content. The next natural step is to have machines draw simple pictures of what they are thinking about, and develop an ability to express themselves. Seeing how machines produce drawings may also provide us with some insights into their learning process.

In this work, we have trained a machine to learn Chinese characters by exposing it to a Kanji database. The machine learns by trying to form invariant patterns of the shapes and strokes that it sees, rather than recording exactly what it sees into memory, kind of like how our own brains work. Afterwards, using its neural connections, the machine attempts to write something out, stroke-by-stroke, onto the screen."]

[See also: http://blog.otoro.net/2015/12/28/recurrent-net-dreams-up-fake-chinese-characters-in-vector-format-with-tensorflow/
via: http://prostheticknowledge.tumblr.com/post/136134267951/recurrent-net-dreams-up-fake-chinese-characters

"… I think a more interesting task is to generate data, which I view as an extension to classifying data. Like how being able to write a Chinese character demonstrate more understanding than merely knowing how to read that character, I think being able to generate content is also key to understanding that content. Being able generate a picture of a 22 year old attractive lady is much more impressive than merely being able to estimate that the this woman is likely around 22 years of age.

An example of a generative task is the translation machines developed to translate English into another language in real time. Generative art and music has been increasingly popular. Recently, there has been work on using techniques such as generative adversarial networks (GANs) to generate bitmap pictures of fake images that look like real ones, like fake cats, fake faces, fake bedrooms and even fake anime characters, and to me, those problems are a lot more exciting to work on, and a natural extension to classification problems."]

[See also: http://www.genekogan.com/works/a-book-from-the-sky.html
via: http://prostheticknowledge.tumblr.com/post/136157512956/a-book-from-the-sky-%E5%A4%A9%E4%B9%A6-another-neural-network

"A Book from the Sky 天书

Another Neural Network Chinese character project - this one by Gene Kogan which generates new Kanji from a handwritten dataset:

These images were created by a deep convolutional generative adversarial network (DCGAN) trained on a database of handwritten Chinese characters, made with code by Alec Radford based on the paper by Radford, Luke Metz, and Soumith Chintala in November 2015.

The title is a reference to the 1988 book by Xu Bing, who composed thousands of fictitious glyphs in the style of traditional Mandarin prints of the Song and Ming dynasties.

A DCGAN is a type of convolutional neural network which is capable of learning an abstract representation of a collection of images. It achieves this via competition between a “generator” which fabricates fake images and a “discriminator” which tries to discern if the generator’s images are authentic (more details). After training, the generator can be used to convincingly generate samples reminiscent of the originals.

… a DCGAN is trained on a labeled subset of ~1M handwritten simplified Chinese characters, after which the generator is able to produce fake images of characters not found in the original dataset."]
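
The generator/discriminator competition described above comes down to a two-step training loop. A skeletal sketch of a generic GAN step in PyTorch — not Radford’s actual DCGAN code; `G`, `D`, and their optimizers are assumed to be defined elsewhere:

```python
# Skeletal GAN training step -- generic, not Radford's DCGAN code. Assumes
# G maps noise to images, D returns one real/fake logit per image (batch, 1),
# and opt_G / opt_D are optimizers over their respective parameters.
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real, z_dim=100):
    batch = real.size(0)

    # 1. Train D to tell real handwritten characters from G's fabrications.
    fake = G(torch.randn(batch, z_dim)).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(batch, 1))
              + F.binary_cross_entropy_with_logits(D(fake), torch.zeros(batch, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # 2. Train G to make D score its fakes as authentic.
    fake = G(torch.randn(batch, z_dim))
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```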
art  deeplearning  kanji  chinese  machinelearning  neuralnetworks 
january 2016 by robertogreco
B.A.S.A.A.P. – Blog – BERG [Be As Smart As A Puppy]
"Imagine a household of hunchbots.

Each of them working across a little domain within your home. Each building up tiny caches of emotional intelligence about you, cross-referencing them with machine learning across big data from the internet. They would make small choices autonomously around you, for you, with you – and do it well. Surprisingly well. Endearingly well.

They would be as smart as puppies. …

That might be part of the near-future: being surrounded by things that are helping us, that we struggle to build a model of how they are doing it in our minds. That we can’t directly map to our own behaviour. A demon-haunted world. This is not so far from most people’s experience of computers (and we’re back to Reeves and Nass) but we’re talking about things that change their behaviour based on their environment and their interactions with us, and that have a certain mobility and agency in our world."
berg  berglondon  mattjones  hunch  priorityinbox  gmail  biomimicry  design  future  intelligence  uncannyvalley  adamgreenfield  everyware  ubicomp  internetofthings  data  ai  machinelearning  spimes  basaap  biomimetics  iot  from delicious
september 2010 by robertogreco

related tags

ableism  adamgreenfield  advertising  age  ageism  aggression  ai  alanturing  alexkrizhevsky  alexnet  algorithms  amazon  ambercase  apis  apple  arduino  art  artificialintelligence  artificialintelligent  at  attention  audreywatters  authority  basaap  behavior  belllabs  berg  berglondon  beteyoonchoi  bias  bigdata  biomimetics  biomimicry  blaiseaguerayarcas  bots  brianholtz  california  capitalism  captchas  care  caring  carolinesinders  carolynbrown  cesarelombroso  charlesdarwin  chemtrails  chinese  chrononet  claudeshannon  clickbait  climatechange  code  coding  collapse  colonialism  color  community  complexity  computation  computationalthinking  computing  confusion  consent  conspiracytheories  control  criminalclass  criminality  culture  curiosity  curriculum  darkage  data  decentralization  decolonization  deeplearning  democracy  deportation  deschooling  design  digital  digitalbranding  digitalculture  digitalimaging  disabilities  disability  discrimination  distributed  distribution  diversity  donaldtrump  doothelange  drawing  dystopia  echonest  economics  edtech  education  edupunk  eliezeryudkowsky  ellenullman  ellsworthkelly  elonmusk  email  embedded  empathy  engagement  enlightenment  equality  ernsthaeckel  ethics  everyware  exclusion  facebook  faception  facerecognition  faces  facts  families  feedbackloops  folkliterature  francisgalton  future  gender  generations  geoffreyhinton  georgegiddon  giambattistadellaporta  gillevi  giuseppevillella  global  globalization  gmail  google  googlevntures  government  hardfun  highered  highereducation  history  horizontality  howeteach  howweread  howwethink  humanintervention  humanism  humanities  humans  hunch  identity  ilyasutskever  immortality  imperialism  inclusion  inclusivity  independent  indie  inequality  information  infrastructure  instagram  intelligence  interaction  interdependence  internet  internetofthings  iot  iscnewton  jamesbridle  jamesweidmann  jimgroom  jofreeman  johnbardeen  johncage  johnhoward  josiahnott  journalism  judgement  julesolitski  justice  kanji  kevinkelly  labor  larrypage  lawenforcement  lcproject  libertarianism  literacy  machinelearning  machinelistening  machinevision  maciegceglowski  maciejceglowski  maciejcegłowski  marcandreessen  marginalization  matthewplummerfernandez  mattjones  measurement  mercecunningham  messiness  microcontrollers  microsoft  migration  mikeburton  modeling  moralpanic  mothers  mugshots  musicindustry  napster  narrative  neuralnetworks  newdarkage  oakland  objectivity  online  openstudioproject  optimism  p2p  p2ppublishing  p2pweb  parenting  pedagogy  personhood  perspective  pessimism  philippafawcett  photography  physiognomy  poland  policy  politics  populism  power  prediction  priorityinbox  privacy  problemsolving  process  profiling  programming  pseudoscience  publicgood  publicschools  quantification  race  racialprofiling  racism  rahelaima  raspberrypi  reading  research  responsibility  risk  riskassessment  roberthughes  samuelnorton  sanfrancisco  schoolforpoeticcomputation  schools  science  scientificracism  security  seeing  self-drivingcars  serendipity  sexism  sfpc  shermanalexie  siliconvalley  simplification  slow  society  sociology  software  speculation  spimes  standardization  stephenjaygould  storytelling  structure  structurelessness  surveillance  susceptibility  systems  systemsdesign  systemsthinking  tagging  talhassner  taxes  tcsnmy  teaching  technology  technosolutionism  
testosterone  thinking  thomasclarkson  tombrdley  towatch  trust  trustworthiness  twitter  ubicomp  ui  uncannyvalley  understanding  unschooling  us  ux  vc  via:ablerism  via:clivethompson  via:meetar  vintcerf  violence  walledgardens  walterbrattain  web  williamshakespeare  williamshockley  work  xi  xiaolinwu  youtube  zeyneptufekci  zhang 
