
Learning How AI Makes Decisions • PCMag UK
<p>After her neural networks failed to reveal the reasons they were mislabelling videos and pictures, [Kate] Saenko [an associate professor at the Department of Computer Science at Boston University] and a team of researchers at Boston University engaged in a project to find the parameters that influenced those decisions.

What came out of the effort was <a href="https://bdtechtalks.com/2018/10/15/kate-saenko-explainable-ai-deep-learning-rise/">RISE</a>, a method that tries to explain the decisions made by AI algorithms. Short for "randomized input sampling for explanation of black-box models," RISE is a local explanation model.

When you provide an image-classification network with an image input, what it returns is a set of classes, each associated with a probability. Normally, you'd have no insight into how the AI reached that decision. But RISE provides you with a heatmap that describes which parts of the image are contributing to each of those output classes.

<img src="https://c-3sux78kvnkay76x24gyykzyx2evisgmx2eius.g00.pcmag.com/g00/3_c-3ccc.visgm.ius_/c-3SUXKVNKAY76x24nzzvyx3ax2fx2fgyykzy.visgm.iusx2fskjogx2fosgmkyx2f185910-x78oyk-kdvrgotghrk-go-kdgsvrk-ygroktie-sgv.vtmx3fo76i.sgx78qx3dosgmk_$/$/$/$" width="100%" />

For instance, in the above image, it's clear that the network in question is mistaking brown sheep for cows, which might mean that it hasn't been trained on enough examples of brown sheep. This type of problem happens often. Using the RISE method, Saenko was able to discover that her neural networks were specifying the gender of the people in the cooking videos based on pots and pans and other objects that appeared in the background instead of examining their facial and physical features.

The idea behind RISE is to randomly obscure parts of the input image and run it through the neural network to observe how the changes affect the output weights. By repeating the masking process multiple times, RISE is able to discern which parts of the image are more important to each output class.</p>
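For the code-minded, the masking idea is simple enough to sketch. Below is a minimal, illustrative version of the RISE loop in Python - not the authors' implementation. The `model` callable is an assumption (any black-box classifier returning class probabilities for a batch of images), and the random grid is upsampled crudely with nearest-neighbour tiling rather than the smooth, shifted masks the paper uses.

```python
import numpy as np

def rise_saliency(model, image, class_idx, n_masks=2000, grid=7, p_keep=0.5):
    """Rough RISE-style heatmap. image: HxWx3 float array in [0, 1];
    model(batch) is assumed to return an array of class probabilities."""
    H, W = image.shape[:2]
    saliency = np.zeros((H, W), dtype=np.float32)
    for _ in range(n_masks):
        # Random low-resolution binary grid, upsampled to image size
        coarse = (np.random.rand(grid, grid) < p_keep).astype(np.float32)
        mask = np.kron(coarse, np.ones((H // grid + 1, W // grid + 1)))[:H, :W]
        # Run the partially obscured image through the black box
        score = model((image * mask[..., None])[None])[0, class_idx]
        # Masks that keep the important pixels visible earn high scores
        saliency += score * mask
    return saliency / (n_masks * p_keep)
```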


Clever - and very usable.
ai  machinelearning  explanation 
6 days ago by charlesarthur
AI mistakes bus-side ad for famous CEO, charges her with jaywalking • Caixin Global
<p>Cities across China have debuted crime-fighting facial recognition technology to much fanfare over the past year. But some of these jaywalker-busting devices aren’t as impressive as they seem.

A facial recognition system in the city of Ningbo caught Dong Mingzhu, the chair of appliance-making giant Gree Electric, running a red light. Only it turned out not to be Dong, but rather an advertisement featuring her face on the side of a bus, local police said on Weibo Wednesday.

The police said they have upgraded their tech to avoid issues like this in the future. The real Dong, meanwhile, is embroiled in drama with an electric vehicle company. </p>
ai  jaywalking  error 
15 days ago by charlesarthur
Wanted: the ‘perfect babysitter.’ Must pass AI scan for respect and attitude • The Washington Post
Drew Harwell:
<p>When Jessie Battaglia started looking for a new babysitter for her 1-year-old son, she wanted more information than she could get from a criminal-background check, parent comments and a face-to-face interview.

So she turned to Predictim, an online service that uses “advanced artificial intelligence” to assess a babysitter’s personality, and aimed its scanners at one candidate’s thousands of Facebook, Twitter and Instagram posts.

The system offered an automated “risk rating” of the 24-year-old woman, saying she was at a “very low risk” of being a drug abuser. But it gave a slightly higher risk assessment — a 2 out of 5 — for bullying, harassment, being “disrespectful” and having a “bad attitude.”

The system didn’t explain why it had made that decision. But Battaglia, who had believed the sitter was trustworthy, suddenly felt pangs of doubt.

“Social media shows a person’s character,” said Battaglia, 29, who lives outside Los Angeles. “So why did she come in at a 2 and not a 1?”

Predictim is offering parents the same playbook that dozens of other tech firms are selling to employers around the world: artificial-intelligence systems that analyze a person’s speech, facial expressions and online history with promises of revealing the hidden aspects of their private lives…

…The systems depend on black-box algorithms that give little detail about how they reduced the complexities of a person’s inner life into a calculation of virtue or harm. And even as Predictim’s technology influences parents’ thinking, it remains entirely unproven, largely unexplained and vulnerable to quiet biases over how an appropriate babysitter should share, look and speak.</p>


Evaluating these systems is becoming more important than ever - and more difficult than ever. And you just know this is going to turn out to be subtly racist.
babysitter  ai  predictim 
16 days ago by charlesarthur
Ranking Gmail’s AI-fuelled Smart Replies • NY Mag
Christopher Bonanos:
<p>The most recognizable feature of Gmail’s newly rolled-out redesign is the so-called smart reply, wherein bots offer three one-click responses to each mail message. Say your email contains the words “you free for lunch?” The autoreplies Gmail presents will be something like “Sure!” and “Yes!” and “Looking forward to it!” The idea, especially on a small, one-hand phone screen, is that you can tap and send using one thumb, without typing. It’s not clear just how many of these prewritten options there are, or how sophisticated the machine learning behind them is. The AI is not yet sharp enough to offer genuinely useful responses like “Please, for the love of Christ, stop sending me these offers to buy those sandals whose ad I clicked on last month” or emotionally honest ones like “Hey, it would be wonderful if someone in our group cancels our drinks tonight because I would rather stay home and order dan dan noodles while watching Succession.” Until then, we’re stuck with the few dozen simple responses that appear regularly. Some are better than others. Shall we rank? </p>


"Ok, sounds good!"
"We should rethink this"
"I can see everything you type you know"

But the idea that the answers might change over time is rather interesting.
gmail  ai  replies 
18 days ago by charlesarthur
Tempted to expense that strip club as a business dinner? AI is watching • Bloomberg
Olivia Carville:
<p>One employee traveling for work checked his dog into a kennel and billed it to his boss as a hotel expense. Another charged yoga classes to the corporate credit card as client entertainment. A third, after racking up a small fortune at a strip club, submitted the expense as a steakhouse business dinner. 

These bogus expenses, which occurred recently at major U.S. companies, have one thing in common: All were exposed by artificial intelligence algorithms that can in a matter of seconds sniff out fraudulent claims and forged receipts that are often undetectable to human auditors—certainly not without hours of tedious labor.

AppZen, an 18-month-old AI accounting startup, has already signed up several big companies, including Amazon.com Inc., International Business Machines Corp., Salesforce.com Inc. and Comcast Corp., and claims to have saved its clients $40 million in fraudulent expenses. AppZen and traditional firms like Oversight Systems say their technology isn’t erasing jobs—so far—but rather freeing up auditors to dig deeper into dubious claims and educate employees about travel and expense policies.

“People don’t have time to look at every expense item,” says AppZen Chief Executive Officer Anant Kale. “We wanted to get AI to do it for them and to find things the human eye might miss.”</p>
ai  expenses 
25 days ago by charlesarthur
AI is not “magic dust” for your company, says Google’s cloud AI boss • Technology Review
Will Knight interviews Andrew Moore, formerly of Carnegie Mellon University:
<p><strong>Q: Like you, lots of AI researchers are being sucked into big companies. Isn’t that bad for AI?</strong>

AM: It’s healthy for the world to have people who are thinking about 25 years into the future—and people who are saying “What can we do right now?”

There’s one project at Carnegie Mellon that involves a 70-foot-tall robot designed to pick up huge slabs of concrete and rapidly create levees against major flooding. It’s really important for the world that there are places that are doing that—but it’s kind of pointless if that’s all that’s going on in artificial intelligence.

While I’ve been at Carnegie Mellon, I’ve had hundreds of meetings with principals in large organizations and companies who are saying, “I am worried my business will be completely replaced by some Silicon Valley startup. How can I build something to counter that?”

I can’t think of anything more exciting than being at a place that is not just doing AI for its own sake anymore, but is determined to bring it out to all these other stakeholders who need it.

<strong>Q: How big of a technology shift is this for businesses?</strong>

AM: It’s like electrification. And it took about two or three decades for electrification to pretty much change the way the world was. Sometimes I meet very senior people with big responsibilities who have been led to believe that artificial intelligence is some kind of “magic dust” that you sprinkle on an organization and it just gets smarter. In fact, implementing artificial intelligence successfully is a slog.

When people come in and say “How do I actually implement this artificial-intelligence project?” we immediately start breaking the problems down in our brains into the traditional components of AI—perception, decision making, action (and this decision-making component is a critical part of it now; you can use machine learning to make decisions much more effectively)—and we map those onto different parts of the business. One of the things Google Cloud has in place is these building blocks that you can slot together.

Solving artificial-intelligence problems involves a lot of tough engineering and math and linear algebra and all that stuff. It very much isn’t the magic-dust type of solution.</p>

But tell me more about the 70-foot robot that moves paving slabs.
ai  robotics  business 
28 days ago by charlesarthur
In the age of A.I., is seeing still believing? • The New Yorker
Joshua Rothman on the rise of "deep fakes":
<p>As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.

In the early days of photography, its practitioners had to argue for its objectivity. In courtrooms, experts debated whether photos were reflections of reality or artistic products; legal scholars wondered whether photographs needed to be corroborated by witnesses. It took decades for a consensus to emerge about what made a photograph trustworthy. Some technologists wonder if that consensus could be reëstablished on different terms. Perhaps, using modern tools, photography might be rebooted…

…Citron and Chesney indulge in a bit of sci-fi speculation. They imagine the “worst-case scenario,” in which deepfakes prove ineradicable and are used for electioneering, blackmail, and other nefarious purposes. In such a world, we might record ourselves constantly, so as to debunk synthetic media when it emerges. “The vendor supplying such a service and maintaining the resulting data would be in an extraordinary position of power,” they write; its database would be a tempting resource for law-enforcement agencies. Still, if it’s a choice between surveillance and synthesis, many people may prefer to be surveilled. Truepic, McGregor told me, had already had discussions with a few political campaigns. “They say, ‘We would use this to just document everything for ourselves, as an insurance policy.’ ”</p>
ai  images  deception 
4 weeks ago by charlesarthur
Why Big Tech pays poor Kenyans to teach self-driving cars • BBC News
Dave Lee went to the slum of Kibera, on the east side of Nairobi, Kenya:
<p>Brenda does this work for Samasource, a San Francisco-based company that counts Google, Microsoft, Salesforce and Yahoo among its clients. Most of these firms don't like to discuss the exact nature of their work with Samasource - as it is often for future projects - but it can be said that the information prepared here forms a crucial part of some of Silicon Valley's biggest and most famous efforts in AI.

It's the kind of technological progress that will likely never be felt in a place like Kibera. As Africa's largest slum, it has more pressing problems to solve, such as a lack of reliable clean water, and a well-known sanitation crisis.

But that's not to say artificial intelligence can't have a positive impact here. We drove to one of Kibera's few permanent buildings, found near a railway line that, on this rainy day, looked thoroughly decommissioned by mud, but has apparently been in regular use since its colonial inception.

Almost exactly a year ago, this building was the dividing line between stone-throwing rioters and the military. Today, it's a thriving hub of activity: a media school and studio, something of a cafeteria, and on the first floor, a room full of PCs. Here, Gideon Ngeno teaches around 25 students the basics of using a personal computer.

What's curious about this process is that digital literacy is high, even in Kibera, where smartphones are common and every other shop is selling chargers and accessories, which people buy using the mobile money system MPesa.</p>


Terrific story, pointing out the contradictions - "magic" tech enabled by humans in distant countries who are paid little, because higher pay would distort the local market, but who are even so given the money and knowledge to break out of poverty. You could call it "good capitalism".
ai  recognition  kenya 
5 weeks ago by charlesarthur
Chelsea is using our AI research for smarter football coaching • The Conversation
Varuna de Silva is a lecturer at the Institute for Digital Technologies at Loughborough University:
<p>The best footballers aren’t necessarily the ones with the best physical skills. The difference between success and failure in football often lies in the ability to make the right split-second decisions on the field about where to run and when to tackle, pass or shoot. So how can clubs help players train their brains as well as their bodies?

My colleagues and I are working with Chelsea FC academy to develop a system to measure these decision-making skills using artificial intelligence (AI) – a kind of robot coach or scout, if you will. We’re doing this by analysing several seasons of data that tracks players and the ball throughout each game, and developing a computer model of different playing positions. The computer model provides a benchmark to compare the performance of different players. This way we can measure the performance of individual players independent of the actions of other players.

We can then visualise what might have happened if the players had made a different decision in any case. TV commentators are always criticising player actions, saying they should have done something else without any real way of testing the theory. But our computer model can show just how realistic these suggestions might be.</p>


Tricky to do, because every situation is unique - and when something similar arises, how do you know whether it's similar enough to repeat the decision, or different enough to warrant doing something else? Possibly pointing this out is something good managers have done instinctively for years. Now it's the AIs' turn.
ai  football  coaching 
5 weeks ago by charlesarthur
An AI lie detector is going to start questioning travellers in the EU • Gizmodo
Melanie Ehrenkranz:
<p>The virtual border control agent [in Hungary, Latvia and Greece] will ask travellers questions after they’ve passed through the checkpoint. Questions include, “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” according to New Scientist. The system reportedly records travelers’ faces using AI to analyze 38 micro-gestures, scoring each response. The virtual agent is reportedly customized according to the traveler’s gender, ethnicity, and language.

For travelers who pass the test, they will receive a QR code that lets them through the border. If they don’t, the virtual agent will reportedly get more serious, and the traveler will be handed off to a human agent who will assess their report. But, according to the New Scientist, this pilot program won’t, in its current state, prevent anyone’s ability to cross the border.

This is because the program is very much in the experimental phases. In fact, the automated lie-detection system was modeled after another system created by some individuals from iBorderCtrl’s team, but it was only tested on 30 people.</p>


Hmm. 30 people? Feels like this is going to have some teething problems.
ai  immigration  customs  borders 
5 weeks ago by charlesarthur
AIs trained to help with sepsis treatment, fracture diagnosis • Ars Technica
John Timmer:
<p>The new research isn't intended to create an AI that replaces these doctors; rather, it's intended to help them out.

The team recruited 18 orthopedic surgeons to diagnose over 135,000 images of potential wrist fractures, and then it used that data to train their algorithm, a deep-learning convolutional neural network. The algorithm was used to highlight areas of interest to doctors who don't specialize in orthopedics. In essence, it was helping them focus on areas that are most likely to contain a break.

In the past, trials like this have resulted in over-diagnosis, where doctors would recommend further tests for something that's harmless. But in this case, the accuracy went up as false positives went down. The sensitivity (or ability to identify fractures) went from 81% up to 92%, while the specificity (or ability to make the right diagnosis) rose from 88% to 94%. Combined, these results mean that ER docs would have seen their misdiagnosis rate drop by nearly half.

Neither of these involved using the software in a context that fully reflects medically relevant circumstances. Both ER doctors and those treating sepsis (who may be one and the same) will normally have a lot of additional concerns and distractions, so it may be a challenge to integrate AI use into their process. </p>
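A quick back-of-envelope check on that "misdiagnosis rate drop by nearly half" line, assuming - purely for illustration, since the paper's actual case mix may differ - that 20% of the scanned wrists really are fractured:

```python
def error_rate(sensitivity, specificity, prevalence=0.20):
    missed_fractures = prevalence * (1 - sensitivity)        # false negatives
    false_alarms = (1 - prevalence) * (1 - specificity)      # false positives
    return missed_fractures + false_alarms

print(error_rate(0.81, 0.88))  # ~0.134: roughly 13% of patients misdiagnosed
print(error_rate(0.92, 0.94))  # ~0.064: roughly 6%, i.e. close to half the errors
```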


That is the point, isn't it: it's great when you're not trying to figure out which of 15 different possible wrong things is wrong with the patient.
ai  sepsis  fracture  doctor 
6 weeks ago by charlesarthur
Apple: the second-best tech company in the world • The Outline
Joshua Topolsky:
<p>Apple’s lack of data (and its inability or unwillingness to blend large swaths of data) actually seems to be one of the issues driving its slippage in software innovation. While Google is using its deep pool of user data to do astounding things like screen calls or make reservations for users with AI, map the world in more detail, identify objects and describe them in real-time, and yes — make its cameras smarter, faster, and better looking — Apple devices seem increasingly disconnected from the world they exist in (and sometimes even their own platforms).

As both Amazon and Google have proven in the digital assistant and voice computing space, the more things you know about your users, the better you can actually serve them. Apple, on the other hand, wants to keep you inside its tools, safe from the potential dangers of data misuse or abuse certainly, but also marooned on a narrow island, sanitized and distanced from the riches that data can provide when used appropriately.</p>


I'm willing to be corrected, but I don't think it's deep pools of user data that Google's using for Call Screening or Duplex. Those are AI systems trained on quite different sets of data from email. (I don't know what they have been trained on.) Certainly, user data makes Maps better, and the data from Google Photos does too - that's probably a key input to the photo system on the Pixel 3.

But that data does exist, and whether Apple starts to use it more broadly is a key question for the future. It's the collision of questions: can you improve the camera (and other systems) without embedded AI? At present the answer seems to be no. (Though might that be just because, when everything's getting AI, AI seems like the only answer?)
apple  innovation  ai 
6 weeks ago by charlesarthur
AI Art at Christie’s sells for $432,500 • The New York Times
Gabe Cohn:
<p>Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.

“Edmond de Belamy, from La Famille de Belamy” sold for $432,500 including fees, over 40 times Christie’s initial estimate of $7,000-$10,000. The buyer was an anonymous phone bidder.

The portrait, by the French art collective Obvious, was marketed by Christie’s as the first portrait generated by an algorithm to come up for auction. It was inspired by a sale earlier this year, in which the French collector Nicolas Laugero Lasserre bought a portrait directly from the collective for about 10,000 euros, or about $11,400.</p>


GPU rig got surpassed by ASICs? Get it painting instead. (Though the picture that was auctioned <a href="https://media.npr.org/assets/img/2012/09/20/513259474_13195159_wide-360d295b5726058b589b84b5d341f077b1cde4a7.jpg?s=1400">did look a bit like this human-generated one</a> to me.)
ai  painting 
6 weeks ago by charlesarthur
No, AI won’t solve the fake news problem • The New York Times
Gary Marcus (a professor of psychology) and Ernest Davis (a professor of computer science):
<p>To get a handle on what automated fake-news detection would require, consider an article posted in May on the far-right website WorldNetDaily, or WND. The article reported that a decision to admit girls, gays and lesbians to the Boy Scouts had led to a requirement that condoms be available at its “global gathering.” A key passage consists of the following four sentences:
<p>The Boy Scouts have decided to accept people who identify as gay and lesbian among their ranks. And girls are welcome now, too, into the iconic organization, which has renamed itself Scouts BSA. So what’s next? A mandate that condoms be made available to ‘all participants’ of its global gathering.</p>


Was this account true or false? Investigators at the fact-checking site Snopes determined that the report was “mostly false.” But determining how it went astray is a subtle business beyond the dreams of even the best current A.I.

First of all, there is no telltale set of phrases. “Boy Scouts” and “gay and lesbian,” for example, have appeared together in many true reports before. Then there is the source: WND, though notorious for promoting conspiracy theories, publishes and aggregates legitimate news as well. Finally, sentence by sentence, there are a lot of true facts in the passage: Condoms have indeed been available at the global gathering that scouts attend, and the Boy Scouts organization has indeed come to accept girls as well as gays and lesbians into its ranks.

What makes the article “mostly false” is that it implies a causal connection that doesn’t exist. It strongly suggests that the inclusion of gays and lesbians and girls led to the condom policy (“So what’s next?”). But in truth, the condom policy originated in 1992 (or even earlier) and so had nothing to do with the inclusion of gays, lesbians or girls, which happened over just the past few years.</p>
facebook  ai  news 
6 weeks ago by charlesarthur
Five ways Google Pixel 3 camera pushes the boundaries of computational photography • Digital Photography Review
Rishi Sanyal:
<p>With the launch of the Google Pixel 3, smartphone cameras have taken yet another leap in capability. I had the opportunity to sit down with Isaac Reynolds, Product Manager for Camera on Pixel, and Marc Levoy, Distinguished Engineer and Computational Photography Lead at Google, to learn more about the technology behind the new camera in the Pixel 3.

One of the first things you might notice about the Pixel 3 is the single rear camera. At a time when we're seeing companies add dual, triple, even quad-camera setups, one main camera seems at first an odd choice.

But after speaking to Marc and Isaac I think that the Pixel camera team is taking the correct approach – at least for now. Any technology that makes a single camera better will make multiple cameras in future models that much better, and we've seen in the past that a single camera approach can outperform a dual camera approach in Portrait Mode, particularly when the telephoto camera module has a smaller sensor and slower lens, or lacks reliable autofocus [like the Galaxy S9].</p>

This isn't actually a test of the Pixel 3. Plenty of interesting things here; will they come to the wider range of Android, though? The Pixel is a fraction of a fraction of Android sales.

We're also approaching the point where it's only the low-light pictures that show substantial differences between generations. (Thanks stormyparis for the link.)
computation  photograph  ai  ml 
7 weeks ago by charlesarthur
Amazon scraps secret AI recruiting tool that showed bias against women • Reuters
Jeffrey Dastin:
<p>The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.</p>


So more accurate to say that the AI tool <em>revealed</em> bias against women. But then kept on doing the same: it would penalise those CVs which included "women's". Eventually they realised they couldn't get it right.
amazon  ai  bias  gender 
8 weeks ago by charlesarthur
Imaginary worlds dreamed by BigGAN • Letting neural networks be weird
Janelle Shane:
<p>These are some of the most amazing generated images I’ve ever seen. Introducing BigGAN, a neural network that generates high-resolution, sometimes photorealistic, imitations of photos it’s seen. None of the images below are real - they’re all generated by BigGAN.

<img src="https://66.media.tumblr.com/499e3b82d08a759215bd48e2b2aec08d/tumblr_inline_pfw4pdle8s1rl9zu7_500.png" width="100%" />

The BigGAN paper is still in review so we don’t know who the authors are, but as part of the review process a <a href="https://openreview.net/pdf?id=B1xsqj09Fm">preprint</a> and <a href="https://drive.google.com/drive/folders/1lWC6XEPD0LT5KUnPXeve_kWeY-FxH002">some data</a> were posted online. It’s been causing a buzz in the machine learning community. For generated images, their 512x512 pixel resolution is high, and they scored impressively well on a standard benchmark known as Inception. They were able to scale up to huge processing power (512 TPUv3′s), and they’ve also introduced some strategies that help them achieve both photorealism and variety. (They also told us what *didn’t* work, which was nice of them.) Some of the images are so good that the researchers had to check the original ImageNet dataset to make sure it hadn’t simply copied one of its training images - it hadn’t.

Now, the images above were selected for the paper because they’re especially impressive. BigGAN does well on common objects like dogs and simple landscapes where the pose is pretty consistent, and less well on rarer, more-varied things like crowds. But the researchers also posted a huge set of example BigGAN images and some of the less photorealistic ones are the most interesting.</p>


Keep reading, though, and you'll encounter some truly weird images. The clocks are in some ways the oddest: familiar yet wrong. How long before entire films are being generated like this? It would be like a waking dream.
machinelearning  ai 
10 weeks ago by charlesarthur
Crypto mining giant Bitmain reveals heady growth as it files for IPO • TechCrunch
Jon Russell:
<p>After months of speculation, Bitmain — the world’s largest provider of crypto miners — has opened the inner details of its business after it submitted <a href="http://www.hkexnews.hk/APP/SEHK/2018/2018092406/Documents/SEHK201809260017.pdf">its IPO prospectus</a> with the Stock Exchange of Hong Kong. And some of the growth numbers are insane.

The document doesn’t specify how much five-year-old Bitmain is aiming to raise from its listing — that’ll come later — but it does lift the lid on the incredible business growth that the company saw as the crypto market grew massively in 2017. Although that also comes with a question: can that growth continue in this current bear market?

The company grossed more than $2.5bn in revenue last year, a near-10X leap on the $278m it claims for 2016. Already, it said revenue for the first six months of this year surpassed $2.8bn.

Bitmain is best known for its ‘Antminer’ devices — which allow the owner to mine for Bitcoin and other cryptocurrencies — and that accounts for most of its revenue: 77% in 2016, 90% in 2017, and 94% in the first half of 2018.</p>


Great that bitcoin has finally got rid of all that nasty centralisation.
china  ai  bitcoin  cryptocurrency 
10 weeks ago by charlesarthur
Child abuse algorithms: from science fiction to cost-cutting reality • The Guardian
David Pegg and Niamh McIntyre:
<p>Machine learning systems built to mine massive amounts of personal data have long been used to predict customer behaviour in the private sector.

Computer programs assess how likely we are to default on a loan, or how much risk we pose to an insurance provider.

Designers of a predictive model have to identify an “outcome variable”, which indicates the presence of the factor they are trying to predict.

For child safeguarding, that might be a child entering the care system.

They then attempt to identify characteristics commonly found in children who enter the care system. Once these have been identified, the model can be run against large datasets to find other individuals who share the same characteristics.

The Guardian obtained details of all predictive indicators considered for inclusion in Thurrock council’s child safeguarding system. They include history of domestic abuse, youth offending and truancy.

More surprising indicators such as rent arrears and health data were initially considered but excluded from the final model. In the case of both Thurrock, a council in Essex, and the London borough of Hackney, families can be flagged to social workers as potential candidates for the Troubled Families programme. Through this scheme councils receive grants from central government for helping households with long-term difficulties such as unemployment.

Such systems inevitably raise privacy concerns. Wajid Shafiq, the chief executive of Xantura, the company providing predictive analytics work to both Thurrock and Hackney, insists that there is a balance to be struck between privacy rights and the use of technology to deliver a public good.

“The thing for me is: can we get to a point where we’ve got a system that gets that balance right between protecting the vulnerable and protecting the rights of the many?” said Shafiq. “It must be possible to do that, because if we can’t we’re letting down people who are vulnerable.”</p>
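To make the mechanics concrete, here is a deliberately toy sketch of the kind of model the piece describes: choose an outcome variable (a child entering the care system), fit a classifier on historical indicator data, then score new cases. The indicator columns, the made-up data and the choice of logistic regression are all illustrative assumptions - Xantura's actual system is proprietary.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

indicators = ["domestic_abuse_history", "youth_offending", "truancy"]

# Historical records where the outcome variable is already known (invented data)
history = pd.DataFrame({
    "domestic_abuse_history": [1, 0, 1, 0, 0, 1],
    "youth_offending":        [0, 0, 1, 0, 1, 1],
    "truancy":                [1, 0, 1, 0, 0, 1],
    "entered_care_system":    [1, 0, 1, 0, 0, 1],   # the outcome variable
})

model = LogisticRegression().fit(history[indicators], history["entered_care_system"])

# Score new families; high probabilities get flagged to a social worker
new_cases = pd.DataFrame({"domestic_abuse_history": [1, 0],
                          "youth_offending":        [0, 0],
                          "truancy":                [1, 0]})
risk_scores = model.predict_proba(new_cases[indicators])[:, 1]
```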
ai  machinelearning  children  abuse 
11 weeks ago by charlesarthur
This AI predicts obesity prevalence—all the way from space • Singularity Hub
Marc Prosser:
<p>A research team at the University of Washington has trained an artificial intelligence system to spot obesity—all the way from space. The system used a convolutional neural network (CNN) to analyze 150,000 satellite images and look for correlations between the physical makeup of a neighborhood and the prevalence of obesity.

The team’s results, presented in JAMA Network Open, showed that features of a given neighborhood could explain close to two-thirds (64.8 percent) of the variance in obesity. Researchers found that analyzing satellite data could help increase understanding of the link between peoples’ environment and obesity prevalence. The next step would be to make corresponding structural changes in the way neighborhoods are built to encourage physical activity and better health.

Convolutional neural networks (CNNs) are particularly adept at image analysis, object recognition, and identifying special hierarchies in large datasets.

Prior to analyzing 150,000 high-resolution satellite images of Bellevue, Seattle, Tacoma, Los Angeles, Memphis, and San Antonio, the researchers trained the CNN on 1.2 million images from the ImageNet database. The categorizations were correlated with obesity prevalence estimates for the six urban areas from census tracts gathered by the 500 Cities project.</p>
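The two-stage pipeline described - pre-train a CNN on ImageNet, use it to summarise satellite tiles, then relate those neighbourhood features to obesity prevalence per census tract - would look roughly like the sketch below. The choice of ResNet-18 as the feature extractor and of elastic-net regression for the final step are my assumptions, not necessarily what the JAMA paper used.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import ElasticNet

cnn = models.resnet18(pretrained=True)   # ImageNet-pretrained feature extractor
cnn.fc = torch.nn.Identity()             # drop the classifier head, keep 512-d features
cnn.eval()

preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def tract_features(tiles):
    """tiles: list of PIL satellite images covering one census tract."""
    with torch.no_grad():
        feats = [cnn(preprocess(t).unsqueeze(0)).squeeze(0) for t in tiles]
    return torch.stack(feats).mean(dim=0).numpy()   # one averaged feature vector per tract

# X: stacked tract feature vectors; y: obesity prevalence from the 500 Cities estimates
# regressor = ElasticNet(alpha=0.1).fit(X, y)
```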


Seriously? "Yo momma so big she can be seen from SPACE."
obesity  ai  space 
12 weeks ago by charlesarthur
I've seen the future of consumer AI, and it doesn't have one • The Register
Andrew Orlowski went to IFA, the poor lad:
<p>If ever there was a solution looking for a problem, it's ramming AI into gadgets to show off a company's machine learning prowess. For the consumer it adds unreliability, cost and complexity, and the annoyance of being prompted.

How is this so? There are clearly some use cases where, empirically, the statistical predictions made by neural networks have improved the output - speech recognition is a clear example. There are 44 English phonemes: overlapping nets help add valuable context that produce more accurate guesses (and remember, this is all about guessing). And then... there are some use cases that aren't improved. These turn out to be quite numerous.

In Berlin, I saw two desperate armies converging on the battlefield of consumer AI: white-goods manufacturers looking to add value and margin, and technology companies looking to get into new areas of consumer electronics. LG and Samsung are both, with decades of white goods and tech behind them. As you might expect, both are smitten by AI, LG even more so than its bigger rival, and their vast floor space touted this loudly.

For LG it's a fairly indiscriminate application of AI - with everything rebranded "ThinQ" and fairly limited in what it can do.

LG, Google and Innit trumpeted a smart kitchen. How is it smart? Well, there's "voice control, step-by-step guided cooking, and automated expert cook programs". We learn that "consumers may have had to open up six or seven apps to get the help they need cooking, including nutrition information, recipes, shopping lists, how-to videos, and remote control apps for various devices", but now they can "enjoy a single elegant journey".

How is it smart, though?

For example, LG says, if a fridge "knows" there's a chicken in it, you select a recipe and the oven comes on to start roasting. Most of my very limited number of chicken recipes were learned years ago, however, and when I'm browsing for new ideas, I don't necessarily want to start cooking right away. And perhaps like me you need to clear the oven of ancient metalware and possibly flammable material before it's safe to turn on. I wondered how many fires AI will start?

I suppose a connected oven will tell you, and hopefully the fire brigade, that your house is on fire. The AI at the smart insurer can then hike your premiums.</p>


He isn't wrong.
ai  machinelearning 
september 2018 by charlesarthur
Deep Angel and the Aesthetics of Absence • Deep Angel
Deep Angel is an MIT project which uses AI to subtract objects from pictures, rather than adding fakes:
<p>If the future of media is manipulation, then the antidote to this future is a Zen kind of emptiness. Not "nothingness" nor a "void," but rather the non-limitation and nondefinition of the infinite. With Deep Angel’s artificial intelligence, you become an active participant in the chaos of media creation. You can erase objects from photographs. Like Joseph Stalin, you can treat history as a malleable fiction, disappear unwanted artifacts, and develop a new world order. But, be careful. Once you know how to erase history, your view on history might change. The reassuring illusion of photography as fact will vanish. Seemingly paradoxically, a truth emerges from the revelations of falsehoods…

…Deep Angel is powered by a neural network architecture that builds upon Mask R-CNN and Deep Fill to create an end-to-end targeted object removal pipeline.</p>
fakes  ai  deepangel 
september 2018 by charlesarthur
AI camera shootout: LG V30S vs Huawei P20 Pro vs Google Pixel 2 • Android Authority
Robert Triggs tries out the "AI" photo tweaks for colour profiles and post-processing (and has lots of photos to prove it):
<p>it’s a mixed bag across all of the devices we tested. LG and Huawei’s tweaks ranged from subtle to overbearing. Most of the time, it’s preferable to leave the AI setting off. Many of the changes could be imitated at leisure afterwards if you really want them. Google’s HDR+ implementation is very different and clearly helps to compensate for the rare occasions when the camera’s exposure is a little off. It also offers improved dynamic range over other cameras, but this sometimes comes at the cost of drab colors. Overall, it’s the most subtle and consistent of the technologies.

LG definitely offers the most basic AI camera technology of the three. It does little more than color profile and filter switching. Google’s HDR+ is much more useful for general image enhancements. Huawei’s P20 Pro appears to do a bit of both.

Getting an AI camera to even detect the desired scene can be tricky, as there is only a limited range of options to pick from. LG’s software spits out plenty of words for what it’s looking at, but often this won’t result in a change of settings. Huawei’s is similarly finicky, struggling to tell the difference between Flowers and Greenery settings, and constantly switching in and out of the Blue Sky option. Google’s tech is better in this regard because it’s always available should you need it, but often subtle enough not to be missed if it doesn’t trigger.</p>


To me, the AI photos look worse in pretty much every case.
ai  camera  android 
september 2018 by charlesarthur
Facebook is rating the trustworthiness of its users on a scale from zero to 1 • The Washington Post
Elizabeth Dwoskin:
<p>Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to 1.

The previously unreported ratings system, which Facebook has developed over the past year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against fake news, Tessa Lyons, the product manager who is in charge of fighting misinformation, said in an interview. The company, like others in tech, has long relied on its users to report problematic content — but as Facebook has given people more options, some users began falsely reporting items as untrue, a new twist on information warfare for which it had to account.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.

A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there is a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk.</p>
facebook  algorithm  ai  rating 
august 2018 by charlesarthur
What algorithmic art can teach us about artificial intelligence • The Verge
James Vincent:
<p>[Computational design lecturer and artist Tom] White says his motivation is primarily to deconstruct what we think of as machine perception. In other words: to explain the algorithmic gaze. Take the example of the cello print in White’s series “The Treachery of ImageNet.” If you know what you’re looking for, you can see shapes that represent the instrument (a cluster of straight parallel lines bracketed by curves). But there’s also a confusing shape looming behind it. White says these shapes are there because the algorithms were trained using pictures of cellos with cellists holding them. Because the algorithm has no prior knowledge of the world — no understanding of what an instrument is or any concept of music or performance — it naturally grouped the two together. After all, that’s what it’s been asked to do: learn what’s in the picture.

This sort of mistake is common in machine learning, and it demonstrates a number of important lessons. It shows how critical training data is: give an AI system the wrong data to learn from, and it’ll learn the wrong thing. It also demonstrates that no matter how “clever” these systems seem, they possess a brittle intelligence that only understands a slice of the world — and even that, imperfectly. White’s latest prints for the Nature Morte gallery, for example, are abstract smears of color designed to be flagged as “inappropriate content” by Google’s algorithms. The same algorithms used to filter what humans see around the world.

Still, White says that he doesn’t see his artwork as a warning. “I’m just trying to present the algorithms as they are,” he says. “But I admit it’s sometimes alarming that these machines we’re relying on have such a different take on how objects in the world are grounded.”</p>


White's <a href="https://medium.com/artists-and-machine-intelligence/perception-engines-8a46bc598d57">original posting is on Medium</a>.
ai  art  machinelearning  artificialintelligence 
august 2018 by charlesarthur
The rise of Chinese voice assistants and the race to commoditize smart speakers • CB Insights
<p>Neither Amazon Echo nor Google Home have penetrated China.

Apart from the tight regulations US tech companies face there, Chinese natural language processing is complex (with 130 spoken dialects and 30 written languages), making speech recognition a huge challenge.

Among US big tech, only Apple’s Siri supports Mandarin on the iPhone. The company’s Homepod smart speaker only supports English, and is not available in China.

This leaves a huge market underserved by US companies, and local players are capitalizing on it.

<img src="https://o.aolcdn.com/images/dims?crop=789%2C514%2C0%2C0&quality=85&format=jpg&resize=1600%2C1042&image_uri=http%3A%2F%2Fo.aolcdn.com%2Fhss%2Fstorage%2Fmidas%2F5c9c8de9c6bfa1bcce9b86cbfe548118%2F206605379%2FNova3.png&client=a1acac3e1b3290917d92&signature=3c1509b46713e93c9f20549dd49695aa802210a8" width="100%" />

Smart voice is one of the Chinese government’s four main focus areas in its first wave of AI applications throughout the country. (Read about its focus on healthcare, smart cities, and autonomous vehicles here.)

China’s big tech has stepped up here in a big way. Alibaba sold its Tmall Genie smart speakers for $15 in China on Singles’ Day, the country’s annual shopping extravaganza on November 11. Baidu recently slashed the price of one of its smart speakers in China from $39 to $14.

These low prices are making it nearly impossible for smaller companies to compete.</p>


Apple might have a chance there: open market. At the top end, at least.
china  ai  alibaba  baidu  smartspeaker 
august 2018 by charlesarthur
Artificial intelligence 'did not miss a single urgent case' • BBC News
Fergus Walsh:
<p>A team at DeepMind, based in London, created an algorithm, or mathematical set of rules, to enable a computer to analyse optical coherence tomography (OCT), a high resolution 3D scan of the back of the eye.

Thousands of scans were used to train the machine how to read the scans. Then, artificial intelligence was pitted against humans. The computer was asked to give a diagnosis in the cases of 1,000 patients whose clinical outcomes were already known.

The same scans were shown to eight clinicians - four leading ophthalmologists and four optometrists. Each was asked to make one of four referrals: urgent, semi-urgent, routine and observation only.

Artificial intelligence performed as well as two of the world's leading retina specialists, with an error rate of only 5.5%. Crucially, the algorithm did not miss a single urgent case.

The results, published in the journal Nature Medicine, were described as "jaw-dropping" by Dr Pearse Keane, consultant ophthalmologist, who is leading the research at Moorfields Eye Hospital.

He told the BBC: "I think this will make most eye specialists gasp because we have shown this algorithm is as good as the world's leading experts in interpreting these scans."

Artificial intelligence was able to identify serious conditions such as wet age-related macular degeneration (AMD), which can lead to blindness unless treated quickly. Dr Keane said the huge number of patients awaiting assessment was a "massive problem".</p>


Contrast this with IBM's Watson, trying to solve cancer and doing badly. This has a better data set, clearer pathways to disease, and is better understood generally. Part of doing well with AI is choosing the correct limits to work within.

And this won't replace the doctors; it will just be a pre-screen.
moorfields  eye  deepmind  ai 
august 2018 by charlesarthur
Why Is Google Translate spitting out sinister religious prophecies? • Motherboard
Jon Christian:
<p>On Twitter, people have blamed the strange translations on ghosts and demons. Users on a subreddit called TranslateGate have speculated that some of the strange outputs might be drawn from text gathered from emails or private messages.

“Google Translate learns from examples of translations on the web and does not use ‘private messages’ to carry out translations, nor would the system even have access to that content,” said Justin Burr, a Google spokesperson, in an email. “This is simply a function of inputting nonsense into the system, to which nonsense is generated.”

When Motherboard provided Google with an example of the eerie messages, its translation disappeared from Google Translate.

There are several possible explanations for the strange outputs. It’s possible that the sinister messages are the result of disgruntled Google employees, for instance, or that mischievous users are abusing the “Suggest an edit” button, which accepts suggestions for better translations of a given text.

Andrew Rush, an assistant professor at Harvard who studies natural language processing and computer translation, said that internal quality filters would probably catch that type of manipulation, however. It’s more likely, Rush said, that the strange translations are related to a change Google Translate made several years ago, when it started using a technique known as “neural machine translation.”

In neural machine translation, the system is trained with large numbers of texts in one language and corresponding translations in another, to create a model for moving between the two. But when it’s fed nonsense inputs, Rush said, the system can “hallucinate” bizarre outputs—not unlike the way Google’s DeepDream identifies and accentuates patterns in images.</p>


Another theory: Google did some training using the Bible, as translated into different languages. Notice that bit where the weird translation disappears when notified to Google PR. This is either (a) preventing others confirming it or (b) improving the system when notified by users. Pick your preference.
ai  google  translate  machinelearning 
july 2018 by charlesarthur
Evolutionary algorithm outperforms deep-learning machines at video games • MIT Technology Review
<p>Many genomes [of evolving code, where "good" code is reused] ended up playing entirely new gaming strategies, often complex ones. But they sometimes found simple ones that humans had overlooked.

For example, when playing Kung Fu Master, the evolutionary algorithm discovered that the most valuable attack was a crouch-punch. Crouching is safer because it dodges half the bullets aimed at the player and also attacks anything nearby. The algorithm’s strategy was to repeatedly use this maneuver with no other actions. In hindsight, using the crouch-punch exclusively makes sense.

That surprised the human players involved in the study. “Employing this strategy by hand achieved a better score than playing the game normally, and the author now uses crouching punches exclusively when attacking in this game,” say Wilson and co.

Overall, the evolved code played many of the games well, even outperforming humans in games such as Kung Fu Master. Just as significantly, the evolved code is just as good as many deep-learning approaches and outperforms them in games like Asteroids, Defender, and Kung Fu Master.

It also produces a result more quickly. “While the programs are relatively small, many controllers are competitive with state-of-the-art methods for the Atari benchmark set and require less training time,” say Wilson and co.

The evolved code has another advantage. Because it is small, it is easy to see how it works. By contrast, a well-known problem with deep-learning techniques is that it is sometimes impossible to know why they have made particular decisions, and this can have practical and legal ramifications.</p>
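As a flavour of what "evolved code" means in practice, here is a toy evolutionary loop: hold a population of candidate controllers (reduced here to fixed action sequences), score each by letting it play, keep the best, and breed mutated copies. The `play_game` scoring function and the action encoding are placeholders; the actual work of Wilson and co. evolves small Cartesian genetic programs, not raw action strings.

```python
import random

ACTIONS = ["crouch_punch", "jump", "left", "right", "noop"]

def mutate(genome, rate=0.1):
    # Randomly swap a fraction of the actions for new ones
    return [random.choice(ACTIONS) if random.random() < rate else a for a in genome]

def evolve(play_game, genome_len=50, pop_size=32, generations=200):
    population = [[random.choice(ACTIONS) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=play_game, reverse=True)
        parents = ranked[: pop_size // 4]              # keep the top quarter
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=play_game)              # best controller found
```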
ai  algorithm  algorithms  deeplearning 
july 2018 by charlesarthur
Medical AI safety: we have a problem • Luke Oakden-Rayner
<p>There are also systems where the line gets a bit blurry. An FDA-approved system to detect atrial fibrillation in ECG Holter monitors from Cardiologs highlights possible areas of concern to doctors, but the final judgement is on them. The concern here is that if this system is mostly accurate, are doctors really going to spend time painstakingly looking through hours of ECG traces? The experience from mammography is that computer advisers might even worsen patient outcomes, as unexpected as that may be. Here is a pertinent quote from Kohli and Jha, reflecting on decades of follow-up studies for systems that appeared to perform well in multi-reader testing:
<p>Not only did CAD increase the recalls without improving cancer detection, but, in some cases, even decreased sensitivity by missing some cancers, particularly non-calcified lesions. CAD could lull the novice reader into a false sense of security. Thus, CAD had both lower sensitivity and lower specificity, a non-redeeming quality for an imaging test.</p>


These sorts of systems can clearly have unintended and unexpected consequences, but the differences in outcomes are often small enough that they take years to become apparent. This doesn’t mean we ignore these risks, just that the risk of disaster is fairly low.

Now we come to the tipping point.

A few months ago the FDA approved a new AI system by IDx, and it makes independent medical decisions. This system can operate in a family doctor’s office, analysing the photographs of patients’ retinas, and deciding whether that patient needs a referral to an ophthalmologist.</p>


This is where "move fast and break things" isn't the right approach, he points out.
medicine  ai  machinelearning 
july 2018 by charlesarthur
Apple combines machine learning and Siri teams under Giannandrea • TechCrunch
Matthew Panzarino:
<p>Apple is creating a new AI/ML team that brings together its Core ML and Siri teams under one leader in John Giannandrea.

Apple confirmed this morning that the combined Artificial Intelligence and Machine Learning team, which houses Siri, will be led by the recent hire, who came to Apple this year after an eight-year stint at Google, where he led the Machine Intelligence, Research and Search teams. Before that he founded Metaweb Technologies and Tellme.

The internal structures of the Siri and Core ML teams will remain the same, but they will now answer to Giannandrea. Apple’s internal structure means that the teams will likely remain integrated across the org as they’re wedded to various projects, including developer tools, mapping, Core OS and more. ML is everywhere, basically.</p>


The real surprise is more that this wasn't done sooner, but maybe they needed him to find his way around.
apple  ai  siri 
july 2018 by charlesarthur
The rise of 'pseudo-AI': how tech firms quietly use humans to do bots' work • The Guardian
Olivia Solon:
<p>In 2016, Bloomberg highlighted the plight of the humans <a href="https://www.bloomberg.com/news/articles/2016-04-18/the-humans-hiding-behind-the-chatbots">spending 12 hours a day pretending to be chatbots</a> for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself.

In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence.</p>
ai  bots  humans 
july 2018 by charlesarthur
Neural network trained on UKC logbooks: the results • UKClimbing
Natalie Berry:
<p>We recently shared the work of Janelle Shane, who trained a neural network on a database of route names from Joshua Tree (5,633) and Boulder, Colorado (4,527). The results were both amusing and baffling. We wondered how the generated names might differ if we provided Janelle with our much larger database of 432,000 route names, which we split by country.

A reminder of what a neural network is, for those who are unsure:

'A neural network is a type of computer program that learns by example, rather than being told exactly how to solve a problem. Based on thousands of examples of route names, it had to figure out the rules that let it generate more like them. At a low temperature* setting, it will generate names that it thinks are very quintessential - they'll end up a bit repetitive, but it will mostly be correct. At a higher temperature setting, it will be more daring when it generates names, going with less common sounds and phrases.'

* Temperature is a hyperparameter of LSTMs (and neural networks generally) used to control the randomness of predictions by scaling the logits before applying softmax...apparently...</p>
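
For the curious, that footnote translates into only a few lines of code. Here's a minimal sketch (plain NumPy, with toy numbers of my own) of what "scaling the logits before applying softmax" actually does:

import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    # Divide the logits by the temperature, turn them into probabilities
    # with a softmax, then sample one index from that distribution.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return np.random.choice(len(probs), p=probs)

# Toy logits over three possible "next characters" in a route name
logits = [2.0, 1.0, 0.2]
print(sample_with_temperature(logits, temperature=0.2))   # nearly always picks index 0
print(sample_with_temperature(logits, temperature=2.0))    # far more willing to pick 1 or 2

Low temperature gives you the repetitive-but-plausible names; turn it up and you get the daring ones.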


The names are wonderfully realistic: The Stuff, Rocket Sheep, Ramp of Lies, Strangershine, Candy Storm, The Dog Sand, Holy Mess, Left Hand Monster, The Scratching One, The Angel's Crack, Suckstone Gully, The Folly Cloud, and many more. For those who don't know: in rock climbing, if you are the first ever to climb a route, you get to name it. British route names tend to the sardonic. (There's a [human-named] route called Strawberries; nearby, a subsequent one called Dream Topping. There's Lord of the Flies; and Lord of the Mince Pies. Elsewhere there's one called Comes The Dervish, whose derivation I've never understood.)

It's lovely to see this work loop around to UKClimbing: in 1995, when I was trying to figure out this "world wide web" thing, I created a web page with a listing of indoor climbing walls in the UK. Soon after, some other climbers got in touch and said they were looking to create a website - climbing in the UK? UKClimbing? - and wanted to include the indoor walls listing. But the grand aim was to have a listing for every route in the UK, and perhaps abroad too. Turns out there are more than 150,000 routes in the UK, though we didn't know that at the time - nobody did.

We crowdsourced a lot of it; and a lot of our experiences in trying to create lat/long pairings from postcodes (for the climbing walls, so you could figure which was the nearest to you) led to my advocacy for the Free Our Data project so that we could include maps, tide times (which matter, a lot, for sea cliff climbing) and location data without busting our tiny budget.
rockclimbing  climbing  neuralnetworks  machinelearning  ai 
june 2018 by charlesarthur
Babylon claims its chatbot beats GPs at medical exam • BBC News
Jen Copestake:
<p>The chatbot AI has been tested on what Babylon said was a representative set of questions from the Membership of the Royal College of General Practitioners exam.

The MRCGP is the final test set for trainee GPs to be accredited by the organisation. Babylon said that the first time its AI sat the exam, it achieved a score of 81%. It added that the average mark for human doctors was 72%, based on results logged between 2012 and 2017.

But the RCGP said it had not provided Babylon with the test's questions and had no way to verify the claim. "The college examination questions that we actually use aren't available in the public domain," added Prof Martin Marshall, one of the RCGP's vice-chairs.

Babylon said it had used example questions published directly by the college and that some had indeed been made publicly available. "We would be delighted if they could formally share with us their examination papers so I could replicate the exam exactly. That would be great," Babylon chief executive Ali Parsa told the BBC.</p>


Anyone remember expert systems? Back in the 1980s, they were going to take doctors' jobs too. Didn't. This could be useful as a backup, or assistant.
doctor  chatbot  ai 
june 2018 by charlesarthur
Talking to Google Duplex: Google’s human-like phone AI feels revolutionary • Ars Technica
Ron Amadeo got invited to a restaurant to be the head waiter (for phone calls):
<p>this was much more than I was expecting: Google PR, Google engineers, restaurant staff, and several other journalists were intently watching and listening to me take this call over the speaker. I was nervous. I've never taken a restaurant reservation in my life, let alone one with an audience and an engineering crew monitoring every utterance. And you know what? I sucked at taking this reservation. And Duplex was fine with it.

Duplex patiently waited for me to awkwardly stumble through my first ever table reservation while I sloppily wrote down the time and fumbled through a basic back and forth about Google's reservation for four people at 7pm on Thursday. Today's Google Assistant requires authoritative, direct, perfect speech in order to process a command. But Duplex handled my clumsy, distracted communication with the casual disinterest of a real person. It waited for me to write down its reservation requirements, and when I asked Duplex to repeat things I didn't catch the first time ("A reservation at what time?"), it did so without incident. When I told this robocaller the initial time it wanted wasn't available, it started negotiating times; it offered an acceptable time range and asked for a reservation somewhere in that time slot. I offered seven o'clock and Google accepted.

From the human end, Duplex's voice is absolutely stunning over the phone. It sounds real most of the time, nailing most of the prosodic features of human speech during normal talking. The bot "ums" and "uhs" when it has to recall something a human might have to think about for a minute. It gives affirmative "mmhmms" if you tell it to hold on a minute. Everything flows together smoothly, making it sound like something a generation better than the current Google Assistant voice.

One of the strangest (and most impressive) parts of Duplex is that there isn't a single "Duplex voice." For every call, Duplex would put on a new, distinct personality. Sometimes Duplex came across as male; sometimes female. Some voices were higher and younger sounding; some were nasally, and some even sounded cute.</p>


The people who took part were all very impressed. But Google says it will have humans to act as backup, just in case.
google  ai  duplex 
june 2018 by charlesarthur
How computers could make your customer-service calls more human • WSJ
Daniela Hernandez and Jennifer Strong:
<p>Cogito is one of several companies developing analytics tools that give agents feedback about how conversations with customers are going. Its software measures in real time the tone of an agent’s voice, their speech rate, and how much each person is talking, according to Dr. Place. “We measure the conversational dance,” he says.

That dance is sometimes out of sync, such as when an agent speaks too quickly or too much, cuts a customer off, has extended periods of silence or sounds tired.

When the software detects these mistakes, a notification pops up on a window on an agent’s screen to coax them to change their strategy. The alerts are useful not just for the agents, but also for their supervisors, Cogito says.

When insurer MetLife Inc. started testing the software about nine months ago, Emily Baker, a 39-year-old supervisor at a call center in Warwick, R.I., thought: “Why do I need this artificial intelligence to allow me to be more human? How much more human can I be?”

But the program has come in handy when coaching new agents, she says, especially those with little experience. One of her 14 agents said the software noticed he wasn’t speaking with enough energy, so it prompted him with a message to pep up plus a coffee-cup icon, she says.

Tiredness can come off as lack of confidence, Ms. Baker says, and it’s important for clients to “feel confident about the service we’re providing” because callers are often going through potentially life-changing events. The call center where Ms. Baker works is focused on disability insurance.</p>


Machines to watch over us, and correct us when we aren't good enough with each other.
callcentre  cogito  ai  software 
june 2018 by charlesarthur
The machine fired me • Idiallo
Ibrahim Diallo found himself fired - but nobody could explain why or by who:
<p>Once the order for employee termination is put in, the system takes over. All the necessary orders are sent automatically and each order completion triggers another order. For example, when the order for disabling my key card is sent, there is no way for it to be re-enabled. Once it is disabled, an email is sent to security about recently dismissed employees. Scanning the key card is a red flag. The order to disable my Windows account is also sent. There is also one for my JIRA account. And on and on. There is no way to stop the multi-day long process. I had to be rehired as a new employee. Meaning I had to fill out paperwork, set up direct deposit, wait for FedEx to ship a new key card.

But at the end of the day the question is still, why was I terminated in the first place?

I was on a three-year contract and had only worked for eight months. Just before I was hired, this company was acquired by a much larger company and I joined during the transition. My manager at the time was from the previous administration. One morning I came to work to see that his desk had been wiped clean, as if he was disappeared. As a full time employee, he had been laid off. He was to work from home as a contractor for the duration of a transition. I imagine due to the shock and frustration, he decided not to do much work after that. Some of that work included renewing my contract in the new system.

I was very comfortable at the job. I had learned the in-and-out of all the systems I worked on. I had made friends at work. I had created a routine around the job. I became the go-to guy. I was comfortable.

When my contract expired, the machine took over and fired me.

A simple automation mistake (feature) caused everything to collapse. I was escorted out of the building like a thief, I had to explain to people why I was not at work, my coworkers became distant (except my manager who was exceptionally supportive). Despite the great opportunity it was for me to work at such a big company, I decided to take the next opportunity that presented itself.

What I called job security was only an illusion. I couldn't help but imagine what would have happened if I had actually made a mistake in this company. Automation can be an asset to a company, but there needs to be a way for humans to take over if the machine makes a mistake. I missed three weeks of pay because no one could stop the machine.</p>
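
To make the mechanism concrete: here's a toy sketch (entirely hypothetical names, nothing to do with the real company's systems) of the kind of fire-and-forget cascade he's describing, where each completed order simply queues the next and nothing ever checks for a human override:

# Hypothetical termination cascade: each completed step triggers the next.
CASCADE = {
    "terminate_employee":      "disable_keycard",
    "disable_keycard":         "notify_security",
    "notify_security":         "disable_windows_account",
    "disable_windows_account": "disable_jira_account",
    "disable_jira_account":    None,               # end of the chain
}

def run_cascade(first_order):
    order = first_order
    while order is not None:
        print("executing:", order)    # in reality: call the relevant internal service
        order = CASCADE[order]        # completion immediately queues the next order

run_cascade("terminate_employee")
# Note what's missing: there is no "cancel" branch anywhere. Once the first
# order goes in, the only way back is to rehire the person from scratch.
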
automation  ai 
june 2018 by charlesarthur
How to stealthily poison neural network chips in the supply chain • The Register
Thomas Claburn:
<p>"Hardware Trojans can be inserted into a device during manufacturing by an untrusted semiconductor foundry or through the integration of an untrusted third-party IP," [Clemson University researchers Joseph Clements and Yingjie Lao] <a href="https://arxiv.org/pdf/1806.05768.pdf">explain in their pape</a>r. "Furthermore, a foundry or even a designer may possibly be pressured by the government to maliciously manipulate the design for overseas products, which can then be weaponized."

The purpose of such deception, the researchers explain, would be to introduce hidden functionality – a Trojan – in chip circuitry. The malicious code would direct a neural network to classify a selected input trigger in a specific way while remaining undetectable in test data.

"For example, an adversary in a position to profit from excessive or improper sale of specific pharmaceutics could inject hardware Trojans on a device for diagnosing patients using neural network models," they suggest. "The attacker could cause the device to misdiagnose selected patients to gain additional profit."

They claim they were able to prototype their scheme by altering only 0.03% of the neurons in one layer of a seven-layer convolutional neural network.

Clements and Lao say they believe adversarial training combined with hardware Trojan detection represents a promising approach to defending against their threat scenario. The adversarial training would increase the number of network neurons that would have to be altered to inject malicious behavior, thereby making the Trojan potentially large enough to detect.</p>
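
The paper is about malicious changes baked into silicon, which I can't reproduce here; but the underlying idea - a hidden trigger that leaves ordinary test behaviour untouched - fits in a few lines. A toy, software-level sketch (my own made-up numbers, not the Clemson method):

import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

W = rng.normal(size=(10, 256))         # stand-in for one layer: 10 class logits

trigger = unit(rng.normal(size=256))   # the attacker's secret trigger pattern
THRESHOLD, BOOST, TARGET = 0.9, 100.0, 3

def logits(x, trojaned=False):
    out = W @ x
    if trojaned:
        detector = max(0.0, float(trigger @ x) - THRESHOLD)  # fires only near the trigger
        out[TARGET] += BOOST * detector                       # then boosts the chosen class
    return out

clean = unit(rng.normal(size=256))     # an ordinary input
for name, x in (("clean", clean), ("trigger", trigger)):
    unchanged = np.allclose(logits(x), logits(x, trojaned=True))
    print(name, "logits unchanged:", unchanged,
          "| predicted class with trojan:", int(logits(x, trojaned=True).argmax()))
# Random inputs almost never clear the 0.9 similarity threshold, so ordinary
# test data looks identical before and after; only the trigger flips the output.

Their point is that something this small, hidden in the hardware rather than in the weights, is even harder to spot.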
ai  neuralnetwork  hacking 
june 2018 by charlesarthur
DeepMind AI learns to reconstruct scenes from images • Axios
Alison Snyder:
<p>The system uses a pair of images of a virtual 3D scene taken from different angles to create a representation of the space. A separate “generation” network then predicts what the scene will look like from a different viewpoint it hasn’t seen before.

<img src="https://images.axios.com/y45y0eERxtSGN81oWFvXP9w2Ve0=/2018/06/14/1528991704052.gif" width="100%" />

• After training the generative query network (GQN) on millions of images, it could use one image to determine the identity, position and color of objects as well as shadows and other aspects of perspective, the authors wrote.

• That ability to understand the scene's structure is the "most fascinating" part of the study, wrote the University of Maryland's Matthias Zwicker, who wasn't involved in the research.

• The DeepMind researchers also tested the AI in a maze and reported the network can accurately predict a scene with only partial information.

• A virtual robotic arm could also be controlled by the GQN to reach a colored object in a scene.</p>


<a href="http://science.sciencemag.org/content/360/6394/1204.full">Full paper at Science</a>.
deepmind  ai  3d  vision 
june 2018 by charlesarthur
UK report warns DeepMind Health could gain ‘excessive monopoly power’ • TechCrunch
Natasha Lomas:
<p>The <a href="https://deepmind.com/blog/deepmind-health-response-independent-reviewers-report-2018/">DeepMind Health Independent Reviewers’ 2018 report</a> flags a series of risks and concerns, as they see it, including the potential for DeepMind Health to be able to “exert excessive monopoly power” as a result of the data access and streaming infrastructure that’s bundled with provision of the Streams app — and which, contractually, positions DeepMind as the access-controlling intermediary between the structured health data and any other third parties that might, in the future, want to offer their own digital assistance solutions to the Trust.

While the underlying FHIR (aka, fast healthcare interoperability resource) deployed by DeepMind for Streams uses an open API, the contract between the company and the Royal Free Trust funnels connections via DeepMind’s own servers, and prohibits connections to other FHIR servers. A commercial structure that seemingly works against the openness and interoperability DeepMind’s co-founder Mustafa Suleyman has claimed to support.

“There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position,” the reviewers write.</p>


Once you begin to rely on an AI black box, you're at risk of being tied even more closely to a provider. It's rather like the lock-in that IBM used to have in the long-gone days of mainframe computing.
deepmind  mainframe  ai  blackbox 
june 2018 by charlesarthur
AI at Google: our principles • Google blog
Sundar Pichai:
<p>We will assess AI applications in view of the following objectives. We believe that AI should:
- be socially beneficial
- avoid creating or reinforcing bias
- be built and tested for safety
- be accountable to people
- incorporate privacy design principles
- uphold high standards of scientific excellence
- be made available for uses that accord with these principles</p>


There's plenty more - each point is expanded, but those are the bullets. He also sets out the applications that Google <em>won't</em> pursue.
google  ai  principles 
june 2018 by charlesarthur
Some quick thoughts on the public discussion regarding facial recognition and Amazon Rekognition this past week • AWS News Blog
Matt Wood is general manager of AI at Amazon Web Services:
<p>Amazon Rekognition is a service we announced in 2016. It makes use of new technologies – such as deep learning – and puts them in the hands of developers in an easy-to-use, low-cost way. Since then, we have seen customers use the image and video analysis capabilities of Amazon Rekognition in ways that materially benefit both society (e.g. preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families, and building educational apps for children), and organizations (enhancing security through multi-factor authentication, finding images more easily, or preventing package theft). Amazon Web Services (AWS) is not the only provider of services like these, and we remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement.

There have always been and will always be risks with new technology capabilities. Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation. AWS takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future. The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm.</p>


That's true, but there were plenty of restrictions on who you could sell computers to - Iran, Iraq, Syria, North Korea, China, and so on. The concerns over Rekognition are about who gets to use it; exactly like those computer export restrictions.
amazon  ai  recognition 
june 2018 by charlesarthur
Leaked emails show Google expected lucrative military drone ai work to grow exponentially • The Intercept
Lee Fang:
<p>Google has sought to quash the internal dissent in conversations with employees. Diane Greene, the chief executive of Google’s cloud business unit, speaking at a company town hall meeting following the revelations, claimed that the contract was “only” for $9 million, according to the New York Times, a relatively minor project for such a large company.

Internal company emails obtained by The Intercept tell a different story. The September emails show that Google’s business development arm expected the military drone artificial intelligence revenue to ramp up from an initial $15 million to an eventual $250 million per year.

In fact, one month after news of the contract broke, the Pentagon allocated an additional $100 million to Project Maven.

The internal Google email chain also notes that several big tech players competed to win the Project Maven contract. Other tech firms such as Amazon were in the running, one Google executive involved in negotiations wrote. (Amazon did not respond to a request for comment.) Rather than serving solely as a minor experiment for the military, Google executives on the thread stated that Project Maven was “directly related” to a major cloud computing contract worth billions of dollars that other Silicon Valley firms are competing to win.

The emails further note that Amazon Web Services, the cloud computing arm of Amazon, “has some work loads” related to Project Maven.</p>


But now it isn't going to renew the contract. Employee pressure can make a difference, which is heartening.
ai  google  military 
june 2018 by charlesarthur
AI winter is well on its way • Piekniewski's blog
Filip Piekniewski is sceptical on the AI/ML front:
<p>One of the key slogans repeated about deep learning is that it scales almost effortlessly. We had the AlexNet in 2012 which had ~60M parameters, we probably now have models with at least 1000x that number right? Well probably we do, the question however is - are these things 1000x as capable? Or even 100x as capable? A study by openAI comes in handy:

<img src="https://i2.wp.com/blog.piekniewski.info/wp-content/uploads/2018/05/compute_diagram-log@2x-3.png?w=1280&ssl=1" width="100%" />

So in terms of applications for vision we see that VGG and Resnets saturated somewhat around one order of magnitude of compute resources applied (in terms of number of parameters it is actually less). Xception is a variation of google inception architecture and actually only slightly outperforms inception on ImageNet, arguably actually slightly outperforms everyone else, because essentially AlexNet solved ImageNet. So at 100 times more compute than AlexNet we pretty much saturated architectures in terms of vision, or image classification to be precise. Neural machine translation is a big effort by all the big web search players and no wonder it takes all the compute it can take (and yet google translate still sucks, though has gotten arguably better). The latest three points on that graph, interestingly show reinforcement learning related projects, applied to games by Deepmind and OpenAI. Particularly AlphaGo Zero and slightly more general AlphaZero take ridiculous amount of compute, but are not applicable in the real world applications because much of that compute is needed to simulate and generate the data these data hungry models need. OK, so we can now train AlexNet in minutes rather than days, but can we train a 1000x bigger AlexNet in days and get qualitatively better results? Apparently not...</p>


I'm not sure I agree with him on all of this, but refuting it isn't trivial. The point is, Google/DeepMind tends to go a long time in submarine mode, then pop up with something big. Just because you can't see the submarine doesn't mean it isn't making progress - perhaps a lot.
ai  machinelearning  deeplearning 
may 2018 by charlesarthur
Smartphone AI: separating hype and reality • CCS Insight Research
Geoff Blaber:
<p>With artificial intelligence firmly at the peak of the hype curve, the industry must be collectively conscious that technologies deliver tangible benefits rather than an empty claim of intelligence. This should be easy given that artificial intelligence isn't a new phenomenon. What is new is the way solutions are being marketed expressly under the banner of artificial intelligence.

The advent of dedicated accelerators for artificial intelligence workloads is a mixed blessing. Even defining these is difficult because of architectural similarities to digital signal processors (DSPs). Artificial intelligence is becoming pervasive in smartphones, spanning everything from power management to predictive user interface, natural language processing, object detection, facial recognition… the list is endless. For these tasks to be entirely efficient, it's not realistic that they run exclusively on the CPU or even the graphics processing unit (GPU). Equally, developers need to have the tools to fully maximize the resources available.

This is highly reminiscent of the early days of the smartphone CPU core wars. Adding more cores created little impact beyond marketing hype until developers began writing to those cores to create multithreaded apps.

The approach taken by Qualcomm is noteworthy as it contrasts with that of Apple, HiSilicon and MediaTek, all of which are positioning a single, dedicated accelerator for artificial intelligence. Instead, Qualcomm is emphasizing its heterogeneous approach that comprises its Hexagon DSP, Adreno GPU and Kryo CPU. The Qualcomm AI Engine consists of these cores alongside software frameworks and tools to accelerate artificial intelligence app development using the platform.</p>


The idea that AI-on-your-phone would be the "next big thing" is, I'm happy to point out, what I forecast in <a href="http://tedxhilversum.com/index.php/2015/11/12/charles-arthur-the-future-is-in-your-phone/">my TEDx talk in Hilversum</a> back in November 2015. (I was explaining how "selfies" became so big and peaked in 2014.)
ai  smartphone  tedx 
may 2018 by charlesarthur
Preliminary report released for crash involving pedestrian, Uber Technologies test vehicle • NTSB
<p>The <a href="https://goo.gl/2C6ZCH">report</a> states data obtained from the self-driving system shows the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was traveling 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

In the report the NTSB said the self-driving system data showed the vehicle operator engaged the steering wheel less than a second before impact and began braking less than a second after impact. The vehicle operator said in an NTSB interview that she had been monitoring the self-driving interface and that while her personal and business phones were in the vehicle neither were in use until after the crash.

All aspects of the self-driving system were operating normally at the time of the crash, and there were no faults or diagnostic messages.</p>


It doesn't do emergency braking when it's under computer control, but it doesn't alert the "driver" either. That's all sorts of wrong. It's a pity that someone had to die for this huge error to become apparent.
ai  safety  uber  selfdrivingcar 
may 2018 by charlesarthur
Self-driving cars are here • Medium
Andrew Ng of Drive.ai, which is introducing self-driving cars in Frisco, Texas in July:
<p>It is every self-driving company’s responsibility to ensure safety. We believe the self-driving car industry should adopt these practices:

• Self-driving cars should be made visually distinctive, so that people can quickly recognize them. Even with great AI technology, it is safer if everyone recognizes our cars. After examining multiple designs, we found that a bright orange design is clearly recognizable to pedestrians and drivers.

We deliberately prioritized recognizability over beauty, since it is recognizability that enhances safety.

• While a human driver would make eye contact with a pedestrian to let them know it is safe to cross, a driverless car cannot communicate the same way. Thus, a self-driving car must have other ways to communicate with people around it. Drive.ai is using exterior panels to do this.

• Self-driving car companies should engage with local government to provide practical education programs. Just as school buses, delivery trucks, and emergency vehicles behave differently from regular cars, so too are self-driving cars a different class of vehicle with their own behaviors. It has unique strengths (such as no distracted driving) and limitations (such as inability to make eye contact or understand hand gestures). It’s important to increase the public’s awareness of self-driving through media, unique signage, and dedicated pickup and dropoff zones. We also ask members of the local community to be lawful in their use of public roads and to be considerate of self-driving cars so that we can improve transportation together.</p>


OK, but what about <a href="https://www.theinformation.com/articles/uber-finds-deadly-accident-likely-caused-by-software-set-to-ignore-objects-on-road">people the software mistakes for plastic bags</a>?
ai  cars  selfdrivingcar  drive 
may 2018 by charlesarthur
AI generates new Doom levels for humans to play • MIT Technology Review
<p>[Edoardo Giacomello and colleagues at the Politecnico di Milano in Italy] say it is indeed possible to create compelling Doom levels in this automated way, and that the technique has significant potential to change the way game content is created.

The team’s approach is relatively straightforward. They begin with 1,000 Doom levels taken from a repository called the Video Game Level Corpus, which includes all the official levels from Doom and Doom 2 as well as more than 9,000 levels created by the gaming community.

The team then processed each level to generate a set of images that represent its most important features, such as the walkable area, walls, floor height, objects, and so on. They also created a vector that captured important features of the level in numerical form, such as the size, area, and perimeter of rooms, the number of rooms, and so on.

Then they used a deep-learning technique called a generative adversarial network to study the data and learn how to generate new levels.

The results show just how powerful this technique is. After some 36,000 iterations, the deep-learning networks were able to produce levels of good quality. “Our results show that generative adversarial networks can capture intrinsic structure of DOOM levels and appears to be a promising approach to level generation in first person shooter games,” say Giacomello and co.</p>
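
If "generative adversarial network" is just a phrase to you, the core loop is surprisingly small. A bare-bones sketch in PyTorch (toy feature vectors of my own standing in for the image-like level representations the Milan team actually used):

import torch
import torch.nn as nn

FEATURES = 8    # e.g. room count, walkable area, perimeter... (stand-ins)
NOISE = 16

G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, FEATURES))
D = nn.Sequential(nn.Linear(FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def fake_level_batch(n):
    return G(torch.randn(n, NOISE))

# Stand-in for the "1,000 real Doom levels": random vectors with a fixed structure.
real_levels = torch.randn(1000, FEATURES) * 2.0 + 1.0

for step in range(2000):
    real = real_levels[torch.randint(0, 1000, (64,))]
    fake = fake_level_batch(64)

    # Discriminator: tell real levels from generated ones.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: produce levels the discriminator accepts as real.
    g_loss = loss_fn(D(fake_level_batch(64)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(fake_level_batch(1))   # one "generated level" feature vector

The generator never sees a real level directly; it only learns from how well it fools the discriminator.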


Makes sense; much cheaper and it seems like a crazy thing to spend time getting humans to design something when they aren't needed. Though you could imagine that the AI might come up with an impossible level, which would only be discovered on trying to play it.
doom  ai  games 
may 2018 by charlesarthur
Your Instagram #Dogs and #Cats are training Facebook's AI • WIRED
Tom Simonite:
<p>An artificial intelligence experiment of unprecedented scale disclosed by Facebook Wednesday offers a glimpse of one such use case. It shows how our social lives provide troves of valuable data for training machine-learning algorithms. It’s a resource that could help Facebook compete with Google, Amazon, and other tech giants with their own AI ambitions.

Facebook researchers describe using 3.5 billion public Instagram photos—carrying 17,000 hashtags appended by users—to train algorithms to categorize images for themselves. It provided a way to sidestep having to pay humans to label photos for such projects. The cache of Instagram photos is more than 10 times the size of a giant training set for image algorithms disclosed by Google last July.

Having so many images for training helped Facebook’s team set a new record on a test that challenges software to assign photos to 1,000 categories including cat, car wheel, and Christmas stocking. Facebook says that algorithms trained on 1 billion Instagram images correctly identified 85.4 percent of photos on the test, known as ImageNet; the previous best was 83.1 percent, set by Google earlier this year.

Image-recognition algorithms used on real-world problems are generally trained for narrower tasks, allowing greater accuracy; ImageNet is used by researchers as a measure of a machine learning system’s potential. Using a common trick called transfer learning, Facebook could fine-tune its Instagram-derived algorithms for specific tasks. The method involves using a large dataset to imbue a computer vision system with some basic visual sense, then training versions for different tasks using smaller and more specific datasets.

As you would guess, Instagram hashtags skew towards certain subjects, such as #dogs, #cats, and #sunsets. Thanks to transfer learning they could still help the company with grittier problems. CEO Mark Zuckerberg told Congress this month that AI would help his company improve its ability to remove violent or extremist content. The company already uses image algorithms that look for nudity and violence in images and video.</p>
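
Transfer learning is worth seeing in code, because it is much simpler than it sounds. A hedged sketch using an off-the-shelf torchvision ResNet (torchvision 0.13 or later) rather than Facebook's Instagram-trained model, which you and I don't have, with a made-up five-class downstream task:

import torch
import torch.nn as nn
from torchvision import models

# 1. Start from a network pretrained on a large, generic dataset (downloads weights).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# 2. Freeze the general-purpose visual features...
for p in backbone.parameters():
    p.requires_grad = False

# 3. ...and swap the final classifier for one sized to the narrower task.
NUM_CLASSES = 5                                  # hypothetical downstream task
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimiser = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# 4. Train only the new head on the smaller, task-specific dataset.
images = torch.randn(8, 3, 224, 224)             # stand-in for a real batch
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimiser.step()
print(float(loss))

The frozen layers supply the "basic visual sense"; only the new final layer learns the specific labels.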
facebook  instagram  ai  dog  cat  training 
may 2018 by charlesarthur
Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian
Mattha Busby:
<p>Current and former gambling industry employees have described how people’s betting habits are scrutinised and modelled to manipulate their future behaviour.

“The industry is using AI to profile customers and predict their behaviour in frightening new ways,” said Asif, a digital marketer who previously worked for a gambling company. “Every click is scrutinised in order to optimise profit, not to enhance a user’s experience.”

“I’ve often heard people wonder about how they are targeted so accurately and it’s no wonder because it’s all hidden in the small print.”

Publicly, gambling executives boast of increasingly sophisticated advertising keeping people betting, while privately conceding that some are more susceptible to gambling addiction when bombarded with these type of bespoke ads and incentives.

Gamblers’ every click, page view and transaction is scientifically examined so that ads statistically more likely to work can be pushed through Google, Facebook and other platforms…

…“I never cease to be amazed at how low the gambling industry is prepared to go to exploit those who have indicated an interest in gambling,” says Carolyn Harris, a Labour MP who has campaigned for gambling reform.

“The industry is geared to get people addicted to something that will cause immense harm, not just to society but to individuals and their families. They are parasitical leeches and I will offer no apology for saying that.”</p>


Completely agree with Harris.
ai  gambling 
may 2018 by charlesarthur
Letting neural networks be weird: when algorithms surprise us • Ai Weirdness
Janelle Shane:
<p>machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and <a href="http://aiweirdness.com/post/171451900302/do-neural-nets-dream-of-electric-sheep">kept labeling empty green fields as containing sheep</a>.

<img src="https://78.media.tumblr.com/f8f13fd86e3453be3ee8744f94c0995f/tumblr_inline_p720zk5db01rl9zu7_500.jpg" width="100%" />

When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful.

So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The <a href="https://arxiv.org/pdf/1803.03453.pdf">paper is well worth reading</a>, as are the original references, but here are several of my favorite examples.</p>


There are so many, but I think my favourite is:
<p>In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.</p>


"OK, engaging machine learning autopilot for landing..."
artificialintelligence  algorithms  learning  ai 
april 2018 by charlesarthur
Google works out a fascinating, slightly scary way for AI to isolate voices in a crowd • Ars Technica
Jeff Dunn:
<p>The company says this tech works on videos with a single audio track and can isolate voices in a video algorithmically, depending on who's talking, or by having a user manually select the face of the person whose voice they want to hear.

Google says the visual component here is key, as the tech watches for when a person's mouth is moving to better identify which voices to focus on at a given point and to create more accurate individual speech tracks for the length of a video.

<a href="https://research.googleblog.com/2018/04/looking-to-listen-audio-visual-speech.html">According to the blog post</a>, the researchers developed this model by gathering 100,000 videos of "lectures and talks" on YouTube, extracting nearly 2,000 hours worth of segments from those videos featuring unobstructed speech, then mixing that audio to create a "synthetic cocktail party" with artificial background noise added.

Google then trained the tech to split that mixed audio by reading the "face thumbnails" of people speaking in each video frame and a spectrogram of that video's soundtrack. The system is able to sort out which audio source belongs to which face at a given time and create separate speech tracks for each speaker. Whew.</p>
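
The "synthetic cocktail party" trick is simple enough to sketch. Here's a minimal version (sine waves standing in for real recordings; nothing from Google's actual pipeline): mix two clean speech tracks with background noise, and keep the clean tracks as the separation targets.

import numpy as np

sr = 16_000                                       # one second of 16kHz audio
t = np.arange(sr) / sr
speech_a = np.sin(2 * np.pi * 220 * t)            # stand-ins for two clean
speech_b = np.sin(2 * np.pi * 330 * t)            # speech recordings
noise    = 0.1 * np.random.default_rng(0).normal(size=sr)

mixture = speech_a + speech_b + noise             # the network's input
targets = np.stack([speech_a, speech_b])          # what it learns to recover

print(mixture.shape, targets.shape)               # (16000,) (2, 16000)
# The real pipeline converts the mixture to a spectrogram and pairs it with
# per-speaker face thumbnails before training; that part is omitted here.
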


Creepy machine learning! Let's continue that thread...
google  ai  audio 
april 2018 by charlesarthur
Facebook uses AI to predict your future actions for advertisers, says confidential document • The Intercept
Sam Biddle:
<p>The recent document, described as “confidential,” outlines a new advertising service that expands how the social network sells corporations’ access to its users and their lives: Instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook instead offers the ability to target them based on how they will behave, what they will buy, and what they will think. These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine, first unveiled by Facebook in 2016 and dubbed “FBLearner Flow.”

One slide in the document touts Facebook’s ability to “predict future behavior,” allowing companies to target people on the basis of decisions they haven’t even made yet. This would, potentially, give third parties the opportunity to alter a consumer’s anticipated course.

Here, Facebook explains how it can comb through its entire user base of over 2 billion individuals and produce millions of people who are “at risk” of jumping ship from one brand to a competitor. These individuals could then be targeted aggressively with advertising that could pre-empt and change their decision entirely — something Facebook calls “improved marketing efficiency.” This isn’t Facebook showing you Chevy ads because you’ve been reading about Ford all week — old hat in the online marketing world — rather Facebook using facts of your life to predict that in the near future, you’re going to get sick of your car. Facebook’s name for this service: “loyalty prediction.”</p>


AI for everything!
machinelearning  facebook  ai  advertising  artificialintelligence 
april 2018 by charlesarthur
Artwork personalization at Netflix • Medium
Ashok Chandrashekar, Fernando Amat, Justin Basilico and Tony Jebara, on the Netflix Techblog:
<p>For many years, the main goal of the Netflix personalized recommendation system has been to get the right titles in front of each of our members at the right time. With a catalog spanning thousands of titles and a diverse member base spanning over a hundred million accounts, recommending the titles that are just right for each member is crucial. But the job of recommendation does not end there. Why should you care about any particular title we recommend? What can we say about a new and unfamiliar title that will pique your interest? How do we convince you that a title is worth watching? Answering these questions is critical in helping our members discover great content, especially for unfamiliar titles. One avenue to address this challenge is to consider the artwork or imagery we use to portray the titles. If the artwork representing a title captures something compelling to you, then it acts as a gateway into that title and gives you some visual “evidence” for why the title might be good for you.


<img src="https://cdn-images-1.medium.com/max/1600/0*038O1qN_N7lC3CGD." width="100%" />
A Netflix homepage without artwork. This is how historically our recommendation algorithms viewed a page.

<img src="https://cdn-images-1.medium.com/max/1600/1*xwD8rVHPapbfmrl6AIbQbA.png" width="100%" />
Artwork for Stranger Things that each receive over 5% of impressions from our personalization algorithm. Different images cover a breadth of themes in the show to go beyond what any single image portrays.</p>


Breathtaking.
ai  netflix  algorithms  marketing 
april 2018 by charlesarthur
AI will cut huge chunks out of banking compliance workforce and London high streets might die • Computer Weekly
Karl Flinders:
<p>behind the scenes AI is increasingly being used to carry out important work in the background helping banks comply with regulations. When AI replaces people in compliance we could really see huge job cuts and cost savings for banks.

This takes me to an article I wrote yesterday about HSBC using software from a big data startup, which includes AI, to help it automate the monitoring of transactions to flush out money laundering. An example of how AI can replace compliance resources.

Lowering costs is becoming more and more important amid the fintech revolution.

At the recent Innovate Finance Global Summit in London Anne Boden, CEO at challenger bank Starling said the big battle in banking involves the cost base rather than innovation. All traditional banks can innovate. They have huge budgets so there is nothing stopping them creating the same fintech services as challengers. They are already doing it. But rather than having hundreds of staff they have tens of thousands. As a result the new players have a huge advantage in terms of cost base.

When John Cryan, who was sacked as CEO at Deutsche bank, said last year that AI will take over a large number of jobs at Deutsche Bank he was probably thinking about all those compliance bods.</p>


Flinders argues that those compliance bods are the ones who keep the high street going, because they buy coffee and so on. I'm not convinced about that; and I think that compliance will find a way to grow, even with AI - or especially with AI. Just because you think you've identified money laundering doesn't mean you have.
ai  banks  compliance 
april 2018 by charlesarthur
AI reporter rewrites news for your political leaning • Digital Trends
Luke Dormehl:
<p>Today, all of us live in filter bubbles online, in which the news we read is increasingly tailormade for our personal tastes. This is a problem for media companies and readers alike — and it’s one that an intriguing new online news aggregator hopes to help solve.

Called <a href="https://knowherenews.com/">Knowhere</a>, the newly launched website is the work of a media-savvy entrepreneur and some Stanford-trained artificial intelligence experts. It uses machine learning tools to cover the day’s biggest stories by offering left, impartial, and right-leaning versions of each. The components of these stories are aggregated from various online news outlets and then rewritten by an AI. Each story can reportedly be written in as little as 60 seconds to 15 minutes, depending on the complexity of the piece. Once that process is completed, a human editor then reviews the story, which further trains the news-writing algorithms. The result? Not only a whip-fast news aggregation site, but one which could help break the filter-bubble problem.

“I was inspired by my father who was an investigative journalist and correspondent for the BBC throughout my childhood,” co-founder, CEO and editor-in-chief Nathaniel Barling told Digital Trends. “Each night he would bring home three papers, The Guardian, The Times, and The Telegraph. He’d ask me to read all three of them so that I could gain a balanced perspective on the day’s news.”</p>


The idea is that it shows articles which are written in all three ways - "left", "right", "impartial". To be honest, I don't see that people are going to read all three. Most people barely read one. Why not just go with the "impartial"?
ai  news 
april 2018 by charlesarthur
Apple hires Google’s AI chief • The New York Times
Jack Nicas and Cade Metz:
<p>Apple has hired Google’s chief of search and artificial intelligence, John Giannandrea, a major coup in its bid to catch up to the artificial intelligence technology of its rivals.

Apple said on Tuesday that Mr. Giannandrea will run Apple’s “machine learning and A.I. strategy,” and become one of 16 executives who report directly to Apple’s chief executive, Timothy D. Cook.

The hire is a victory for Apple, which many Silicon Valley executives and analysts view as lagging its peers in artificial intelligence, an increasingly crucial technology for companies that enable computers to handle more complex tasks, like understanding voice commands or identifying people in images.

“Our technology must be infused with the values we all hold dear,” Mr. Cook said in an email to staff members obtained by The New York Times. “John shares our commitment to privacy and our thoughtful approach as we make computers even smarter and more personal.”</p>


Wow. That's a hell of a coup. Giannandrea joined Google in 2010 from Metaweb (which Google bought). He's got to be on a gigantic options deal with some big incentives around Siri et al.
apple  google  ai 
april 2018 by charlesarthur
Amazon Alexa meets music composed by AI in DeepMusic • RAIN News
After yesterday's request for a sample of that Amazon Alexa AI-generated music, reader Alex Barredo points us to this, by Anna Washenko:
<p>The AI compositions are generated from a collection of audio samples and a neural network. None of the music has received post-production editing by a human. If you listen on an Echo Show or Echo Spot speaker, you’ll also see artwork created by AI.

Given the number of services working to aid with the speed and ease of Alexa skill creation, it’s likely that we’ll be seeing a wave of innovative and creative applications of the voice technology. AI-made music is likely just the start of how people will think to take advantage of smart speakers.

Here’s what it sounds like:

<audio class="wp-audio-shortcode" id="audio-24178-1_html5" preload="none" style="width: 100%; height: 100%;" src="http://rainnews.com/wp-content/uploads/2018/03/Alexa-deep-music.wav?_=1"><a href="http://rainnews.com/wp-content/uploads/2018/03/Alexa-deep-music.wav">http://rainnews.com/wp-content/uploads/2018/03/Alexa-deep-music.wav</a></audio>

Possibly not Grammy caliber, but interesting.</p>


I can see endless possibilities for Muzak and Spotify playlists in this.
amazon  deepmusic  ai 
march 2018 by charlesarthur
DeepMusic Alexa skill serves up AI-generated songs • MusicAlly
Stuart Dredge:
<p>Amid all the industry conversation about how smart speakers will affect the way people listen to music, the assumption has been that the music in question will be made by humans.

Here’s a new Alexa skill to make you think, though. It’s called <a href="https://www.amazon.com/dp/B07B6J18MP/ref=sr_1_6?s=digital-skills&ie=UTF8&qid=1520280170&sr=1">DeepMusic</a>, and has just launched for Alexa-powered devices like the Echo speakers.

“DeepMusic is an Alexa skill that enables you to listen to songs generated by artificial intelligence (AI). Each song was composed entirely using AI. The songs were generated using a collection of audio samples and a deep recurrent neural network. There has been no post-production editing by a human,” explains its description on Amazon’s store.

AI was also used to create the artwork shown on the screen-equipped Echo Show and Echo Spot speakers. The skill can be tested by saying ‘Alexa, open DeepMusic’ and then commands like ‘Alexa, ask DeepMusic to play a song’.</p>


We've had quite a few "AI music" links over the past few years. There was <a href="http://readwrite.com/2016/08/07/halo-brings-training-finesse-olympic-athletes-hl1/">Brain.fm</a> in August 2016, an <a href="https://www.theguardian.com/technology/2016/nov/29/its-no-christmas-no-1-but-ai-generated-song-brings-festive-cheer-to-researchers">AI-generated song</a> in November 2016, and <a href="https://arxiv.org/abs/1612.01010">DeepBach</a> in December 2016. If anyone wants to let us know how DeepMusic sounds, we'd love a review.
amazon  alexa  deepmusic  ai 
march 2018 by charlesarthur
The seven-year itch: how Apple’s marriage to Siri turned sour • The Information
Aaron Tilley and Kevin McLaughlin:
<p>The Topsy team [acquired by Apple in 2013] ultimately grew into a massive organization under Mr. Stasior that now nearly rivals the number of employees on the Siri team, said one former employee. Topsy CEO Vipul Ved Prakash continues to lead that search group and reports directly to Mr. Stasior.

Uniting the existing Siri team with the expanding search unit under Mr. Stasior proved troublesome. Members of the Topsy team expressed a reluctance to work with a Siri team they viewed as slow and bogged down by the initial infrastructure that had been patched up but never completely replaced since it launched.

“There was a feeling that, ‘Why don’t we just start over and build what we need to build, and then worry about reconciling those two later?’” said a former member of the search team. “They’re still reconciling it.”

Core Siri and Spotlight are powered by a combination of both Topsy's technology and Siri Data Services, which is based on older search technology ported over from iTunes search but modified for Siri and launched in 2013, said the former employee. Siri Data Services deals with things like Wikipedia, stocks and movie showtimes, while Topsy sorts through Twitter, news and web results. The Siri Data Services team was eventually lumped into the Topsy team under Mr. Prakash with the plan to integrate all of the tech into a single stack. But they're based on two different programming languages and are tricky to reconcile.

The difficulty integrating the search teams led to some embarrassing outcomes. Users could get completely different responses to the same question based on whether they were using Siri or Spotlight—which were powered by two different search technologies built by two different teams.</p>


This verrry long piece indicates that there's a hell of a lot of competing groups, and no overarching view of quite how to fix Siri - nor quite what it should be. We all know what we want Siri to do. But it seems like there are conflicting ideas on how to get there.
apple  ai  siri 
march 2018 by charlesarthur
Artificial intelligence could identify gang crimes—and ignite an ethical firestorm • Science
Matthew Hutson:
<p>…the partially generative algorithm reduced errors by close to 30%, the team reported at the Artificial Intelligence, Ethics, and Society (AIES) conference this month in New Orleans, Louisiana. The researchers have not yet tested their algorithm’s accuracy against trained officers.

It’s an “interesting paper,” says Pete Burnap, a computer scientist at Cardiff University who has studied crime data. But although the predictions could be useful, it’s possible they would be no better than officers’ intuitions, he says. Haubert agrees, but he says that having the assistance of data modeling could sometimes produce “better and faster results.” Such analytics, he says, “would be especially useful in large urban areas where a lot of data is available.”

But researchers attending the AIES talk raised concerns during the Q&A afterward. How could the team be sure the training data were not biased to begin with? What happens when someone is mislabeled as a gang member? Lemoine asked rhetorically whether the researchers were also developing algorithms that would help heavily patrolled communities predict police raids.

Hau Chan, a computer scientist now at Harvard University who was presenting the work, responded that he couldn’t be sure how the new tool would be used. “I’m just an engineer,” he said. Lemoine quoted a lyric from a song about the wartime rocket scientist Wernher von Braun, in a heavy German accent: “Once the rockets are up, who cares where they come down?” Then he angrily walked out.

Approached later for comment, Lemoine said he had talked to Chan to smooth things over. “I don’t necessarily think that we shouldn’t build tools for the police, or that we should,” Lemoine said (commenting, he specified, as an individual, not as a Google representative). “I think that when you are building powerful things, you have some responsibility to at least consider how could this be used.”

Two of the paper’s senior authors spent nearly 20 minutes deflecting such questions during a later interview. “It’s kind of hard to say at the moment,” said Jeffrey Brantingham, an anthropologist at the University of California, Los Angeles. “It’s basic research.” Milind Tambe, a computer scientist at the University of Southern California in Los Angeles, agreed. Might a tool designed to classify gang crime be used to, say, classify gang crime? They wouldn’t say.</p>
ai  police  ethics  machinelearning 
march 2018 by charlesarthur
Skyknit: how an AI took over an adult knitting community • The Atlantic
Alexis C. Madrigal on how Janelle Shane set machine learning to work on existing knitting patterns to create new ones:
<p>here’s the first 4 rows from one set of instructions that the neural net generated and named “fishcock.”

fishcock

row 1 (rs): *k3, k2tog, [yo] twice, ssk, repeat from * to last st, k1.
row 2: p1, *p2tog, yo, p2, repeat from * to last st, k1.
row 3: *[p1, k1] twice, repeat from * to last st, p1.
row 4: *p2, k1, p3, k1, repeat from * to last 2 sts, p2.

The network was able to deduce the concept of numbered rows, solely from the texts basically being composed of rows. The system was able to produce patterns that were just on the edge of knittability. But they required substantial “debugging,” as Shane put it.

One user, bevbh, described some of the errors as like “code that won’t compile.” For example, bevbh gave this scenario: “If you are knitting along and have 30 stitches in the row and the next row only gives you instructions for 25 stitches, you have to improvise what to do with your remaining five stitches.”

But many of the instructions that were generated were flawed in complicated ways. They required the test knitters to apply a lot of human skill and intelligence. For example, here is the user BellaG, narrating her interpretation of the fishcock instructions, which I would say is just on the edge of understandability, if you’re not a knitter:

“There’s not a number of stitches that will work for all rows, so I started with 15 (the repeat done twice, plus the end stitch). Rows two, four, five, and seven didn’t have enough stitches, so I just worked the pattern until I got to the end stitch and worked that as written,” she posted to the forum. “Double yarn-overs can’t be just knit or just purled on the recovery rows; you have to knit one and purl the other, so I did that when I got to the double yarn-overs on rows two and six."

<img src="https://cdn.theatlantic.com/assets/media/img/posts/2018/03/fishcock/f0ba27c58.jpg" width="100%" /><br /><em>Fishcock: this is what it looks like</em>

</p>
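
The "code that won't compile" comparison is apt, and you can even write a crude linter for it. A rough sketch (my own, far simpler than what the test knitters actually did, and it ignores the *...repeat from * shorthand) that just counts how many stitches a row consumes:

import re

CONSUMES = {"k": 1, "p": 1, "k2tog": 2, "p2tog": 2, "ssk": 2, "yo": 0}

def stitches_consumed(row):
    total = 0
    for token in re.findall(r"k2tog|p2tog|ssk|yo|[kp]\d*", row):
        if token in CONSUMES:
            total += CONSUMES[token]
        else:                               # "k3", "p2": work N plain stitches
            total += int(token[1:])
    return total

print(stitches_consumed("k3, k2tog, yo, yo, ssk, k1"))   # 8 stitches used
print(stitches_consumed("p1, p2tog, yo, p2, k1"))        # 6 stitches used
# If a row consumes more stitches than the previous row left on the needle,
# the "program" won't run - and the knitter has to improvise, as bevbh says.
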
ai  machinelearning  knitting 
march 2018 by charlesarthur
Google is helping the Pentagon build AI for drones • Gizmodo
Kate Conger and Dell Cameron:
<p>Google has partnered with the United States Department of Defense to help the agency develop artificial intelligence for analyzing drone footage, a move that set off a firestorm among employees of the technology giant when they learned of Google’s involvement.

Google’s pilot project with the Defense Department’s Project Maven, an effort to identify objects in drone footage, has not been previously reported, but it was discussed widely within the company last week when information about the project was shared on an internal mailing list, according to sources who asked not to be named because they were not authorized to speak publicly about the project.

Some Google employees were outraged that the company would offer resources to the military for surveillance technology involved in drone operations, sources said, while others argued that the project raised important ethical questions about the development and use of machine learning.

Google’s Eric Schmidt summed up the tech industry’s concerns about collaborating with the Pentagon at a talk last fall. “There’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly,” he said. While Google says its involvement in Project Maven is not related to combat uses, the issue has still sparked concern among employees, sources said…

…The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, <a href="https://thebulletin.org/project-maven-brings-ai-fight-against-isis11374">according to Greg Allen</a>, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.</p>
ai  google  drone 
march 2018 by charlesarthur
AI breakthrough: otter.ai app can transcribe your meetings in real time, for free • ZDNet
Jason Hiner:
<p>When we sat down to talk about it in a tiny meeting room in the back corner of Fira Barcelona's Hall 2, Sam Liang placed his iPhone on the table and tapped the record button in the Otter app. As the CEO of AISense – the company behind Otter.ai – Liang started explaining how the 15-person startup from Los Altos, CA took a different approach to understanding audio data than Amazon Alexa, Google Assistant, and the other companies working on speech recognition.

As Liang gave his pitch, Otter started spitting out text – with roughly a 2-3 second delay. And since Liang had set up our meeting in the app beforehand, the software automatically recognized when his teammate Seamus McAteer chimed in with his own comments or I interrupted with follow-up questions.

While Otter's natural language processing wasn't perfect by any means – punctuation is missing, words are misunderstood, speakers are sometimes misidentified – it's remarkably close, especially considering its speed and the fact that the app is free.

"Our technology is quite different," said Liang, in his interview with ZDNet. "We call it 'Ambient Voice Intelligence' and we use the word ambient to indicate that this is working in the background... Your brain can only remember 10-20% of the information [from a meeting]... So we thought we can help people capture that information and then search for it really fast."

The search is the best feature. Once the recording is finished, the app's machine learning automatically creates about 10 keywords so that you know what the meeting was about. And you can start searching the full text right away. Also useful is that once you home in on a keyword, you can hit the play button to listen to the section of the audio where it occurred.

The next best feature of the app is that you can share recorded meetings. So, if you have a meeting and a colleague can't attend, you can send them the transcript and audio afterward, so that they can find the stuff that's relevant to them.</p>

This is the holy grail for journalists who don't want to do tedious, tedious transcription of important (and unimportant) interviews. Search in particular is really big. It's on the App Store.
ai  otter  voice  transcription 
march 2018 by charlesarthur
Do neural nets dream of electric sheep? • AI Weirdness
<p>Are neural networks just hyper-vigilant, finding sheep everywhere? No, as it turns out. They only see sheep where they expect to see them. They can find sheep easily in fields and mountainsides, but as soon as sheep start showing up in weird places, it becomes obvious how much the algorithms rely on guessing and probabilities.

Bring sheep indoors, and they’re labeled as cats. Pick up a sheep (or a goat) in your arms, and they’re labeled as dogs.<br />
<img src="http://78.media.tumblr.com/ede99c6c2672e1d7bbb02ac821c4f6d7/tumblr_inline_p4yzicpI1u1rl9zu7_500.jpg" width="100%" />

Paint them orange, and they become flowers.<br />
<img src="http://78.media.tumblr.com/faa58e57385061ce025287c8df52d01d/tumblr_inline_p4z80jmki41rl9zu7_500.jpg" width="100%" />

Put the sheep on leashes, and they’re labeled as dogs. Put them in cars, and they’re dogs or cats. If they’re in the water, they could end up being labeled as birds or even polar bears.

And if goats climb trees, they become birds. Or possibly giraffes. (It turns out that Microsoft Azure is somewhat notorious for seeing giraffes everywhere, due to a rumored overabundance of giraffes in the original dataset.)

<img src="http://78.media.tumblr.com/fe9479226e911192f3479e0aee5aa4b9/tumblr_inline_p4yz6ri0zP1rl9zu7_500.jpg" width="100%" />
</p>
mistakes  ai 
march 2018 by charlesarthur
AI facial recognition works better for white skin - because it's being trained that way • World Economic Forum
Larry Hardesty:
<p>Three commercially released facial-analysis programs from major technology companies demonstrate both skin-type and gender biases, according to a new paper researchers from MIT and Stanford University will present later this month at the Conference on Fairness, Accountability, and Transparency.

In the researchers’ experiments, the three programs’ error rates in determining the gender of light-skinned men were never worse than 0.8%. For darker-skinned women, however, the error rates ballooned — to more than 20% in one case and more than 34% in the other two.

The findings raise questions about how today’s neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. For instance, according to the paper, researchers at a major US technology company claimed an accuracy rate of more than 97% for a face-recognition system they’d designed. But the data set used to assess its performance was more than 77% male and more than 83% white.

“What’s really important here is the method and how that method applies to other applications,” says Joy Buolamwini, a researcher in the MIT Media Lab’s Civic Media group and first author on the new paper. “The same data-centric techniques that can be used to try to determine somebody’s gender are also used to identify a person when you’re looking for a criminal suspect or to unlock your phone. And it’s not just about computer vision. I’m really hopeful that this will spur more work into looking at [other] disparities.”</p>

Would love to know which big American company that was.
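The paper's underlying method is simple to sketch: instead of one headline accuracy figure, compute the error rate separately for each intersectional subgroup. Here is a minimal, purely illustrative version in Python; the field names and sample records are hypothetical, not the researchers' actual benchmark (which uses four skin-type/gender subgroups):

```python
# Disaggregated evaluation sketch: per-subgroup error rates instead of one
# overall accuracy number. Field names and records are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: dicts with 'group', 'true_gender', 'predicted_gender'."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted_gender"] != r["true_gender"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

sample = [
    {"group": "lighter-skinned male", "true_gender": "M", "predicted_gender": "M"},
    {"group": "darker-skinned female", "true_gender": "F", "predicted_gender": "M"},
    # ...a real benchmark would cover all subgroups with many examples each
]
print(error_rates_by_group(sample))
```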
Race  ai  gender  Facialrecognition 
february 2018 by charlesarthur
Artificial intelligence poses risks of misuse by hackers, researchers say • Reuters
Eric Auchard:
<p>The study, published on Wednesday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities along with privacy and military experts, sounded the alarm for the potential misuse of AI by rogue states, criminals and lone-wolf attackers.

The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years.

“We all agree there are a lot of positive applications of AI,” said Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute. “There was a gap in the literature around the issue of malicious use.”

Artificial intelligence, or AI, involves using computers to perform tasks normally requiring human intelligence, such as taking decisions or recognizing text, speech or visual images.

It is considered a powerful force for unlocking all manner of technical possibilities but has become a focus of strident debate over whether the massive automation it enables could result in widespread unemployment and other social dislocations.

The 98-page paper cautions that the cost of attacks may be lowered by the use of AI to complete tasks that would otherwise require human labor and expertise. New attacks may arise that would be impractical for humans alone to develop or which exploit the vulnerabilities of AI systems themselves.</p>

I deal with this in a chapter in my forthcoming book Cyber Wars. It's concerning.
hacking  future  ai 
february 2018 by charlesarthur
How to become a centaur • MIT Journal of Design and Science
Nicky Case on the idea of "centaurs" - humans using AI, for example in chess tournaments where the human, advised by the AI, picks a move:
<p>won’t AI eventually get better at the dimensions of intelligence we excel at? Maybe. However, consider the “No Free Lunch” theorem, which comes from the field of machine learning itself. The theorem states that no problem-solving algorithm (or “intelligence”) can out-do random chance on all possible problems: instead, an intelligence has to specialize. A squirrel intelligence specializes in being a squirrel. A human intelligence specializes in being a human. And if you’ve ever had the displeasure of trying to figure out how to keep squirrels out of your bird feeders, you know that even squirrels can outsmart humans on some dimensions of intelligence. This may be a hopeful sign: even humans will continue to outsmart computers on some dimensions.

Now, not only does pairing humans with AIs solve a technical problem — how to overcome the weaknesses of humans/AI with the strengths of AI/humans — it also solves that moral problem: how do we make sure AIs share our human goals and values?

And it’s simple: if you can’t beat ‘em, join ‘em!

The rest of this essay will be about AI’s forgotten cousin, IA: Intelligence Augmentation. The old story of AI is about human brains working against silicon brains. The new story of IA will be about human brains working with silicon brains. As it turns out, most of the world is the opposite of a chess game:

Non-zero-sum — both players can win.</p>
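The "No Free Lunch" theorem Case leans on has a precise form, due to Wolpert and Macready. Paraphrasing their statement (my notation, not Case's): for any two search algorithms $a_1$ and $a_2$, summed over all possible objective functions $f$, the distribution of observed outcomes after $m$ evaluations is identical,

$$\sum_{f} P(d^y_m \mid f, m, a_1) = \sum_{f} P(d^y_m \mid f, m, a_2),$$

where $d^y_m$ is the sequence of cost values seen so far. Averaged over every conceivable problem, no algorithm beats random guessing; specialising to a particular class of problems is the only way to win, which is exactly Case's squirrel point.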
centaur  ai 
february 2018 by charlesarthur
Leaked AI-powered game revenue model paper foretells a dystopian nightmare • Tech Powerup
“Btarunr”:
<p>
An artificial intelligence (AI) will deliberately tamper with your online gameplay as you scramble for more in-game items to win. The same AI will manipulate your state of mind at every step of your game to guide you towards more micro-transactions. Nothing in-game is truly fixed-rate. The game maps out your home, and cross-references it with your online footprint, to have a socio-economic picture of you, so the best possible revenue model, and anti buyer's remorse strategy can be implemented on you. These, and more, are part of the dystopian nightmare that takes flight if a new AI-powered online game revenue model is implemented in MMO games of the near future.

The paper's slide-deck and signed papers (with corrections) were leaked to the web by an unknown source, with bits of information (names, brands) redacted. It has too much information to be dismissed out of hand as a prank. It proposes leveraging AI to gather and build a socio-economic profile of a player to implement the best revenue-generation strategy. It also proposes using an AI to consistently "alter" the player's gameplay, such that the player's actions don't have the desired result leading toward beating the game, but towards an "unfair" consequence that motivates more in-game spending. The presentation spans a little over 50 slides, and is rich in text that requires little further explanation.</p>
ai  games 
february 2018 by charlesarthur
Everyone is making AI-generated fake porn now • Motherboard
Samantha Cole:
<p>In December, Motherboard discovered a redditor named 'deepfakes' quietly enjoying his hobby: Face-swapping celebrity faces onto porn performers’ bodies. He made several convincing porn videos of celebrities—including Gal Gadot, Maisie Williams, and Taylor Swift—using a machine learning algorithm, his home computer, publicly available videos, and some spare time.

Since we first wrote about deepfakes, the practice of producing AI-assisted fake porn has exploded. More people are creating fake celebrity porn using machine learning, and the results have become increasingly convincing. Another redditor even created an app specifically designed to allow users without a computer science background to create AI-assisted fake porn. All the tools one needs to make these videos are free, readily available, and accompanied with instructions that walk novices through the process.

These are developments we and the experts we spoke to warned about in our original article. They have arrived with terrifying speed.</p>


So there are now fakes of celebrities - female celebrities so far I think? - taking showers, etc. (Perhaps someone could do a <a href="https://www.youtube.com/watch?v=SIOiqyC9vQE">Windowlicker-style video</a> to stem this.)
ai  video  porn 
january 2018 by charlesarthur
Techmate: how AI rewrote the rules of chess • Financial Times
Richard Waters:
<p>Besides being pleasantly struck by the similarities he sees between AlphaZero’s game and his own, Kasparov suggests there have been some surprises from watching the software play. It’s well known, for instance, that the person who plays white, and who moves first, has an edge. But Kasparov says that AlphaZero’s victory over Stockfish has shown that the scale of that starting advantage is actually far greater than anyone had realised. It won 50 per cent of the games when it played white, compared to only 6 per cent when it played black. (The rest of the games were draws.)

Kasparov is cautious about predicting that AlphaZero has significant new chess lessons to teach, although he concedes it might encourage some players to try “a more dynamic game”. But if he seems only mildly interested in the quality of the chess, he is more forthright in his admiration for the technology. Kasparov has studied AI and written a book on it. AlphaZero, he says, is “the prototype of a flexible machine”, the kind that was dreamed of at the dawn of the computer age by two of the field’s visionaries, Alan Turing and Claude Shannon.

All computers before this, as he describes it, worked by brute force, using the intellectual equivalent of a steamroller to crack a nut. People don’t operate that way: “Humans are flexible because we know that sometimes we have to depart from the rules,” he says. In AlphaZero, he thinks he has seen the first computer in history to learn that very human trick…

…When transferred to the real world, however, the gulf between AI and the human brain looms large again. Chess, says [Stuart] Russell [who has been looking at AI and chess], has “known rules and short horizons”, and it is “fully observable, discrete, deterministic, static”. The real world, by contrast, “shares exactly none of these characteristics”.</p>


One really good point is that Stockfish, which was defeated, was programmed by people who start from the point of valuing material: capturing is good. Being a pawn up is good. (It's more subtle now.) But play like AlphaZero's is more focussed on winning than material.
chess  ai 
january 2018 by charlesarthur
How to find Wally with a neural network • Towards Data Science
Tadej Magajna:
<p>Deep learning provides yet another way to solve the Where’s Wally puzzle problem. But unlike traditional image processing computer vision methods, it works using only a handful of labelled examples that include the location of Wally in an image.

<img src="https://cdn-images-1.medium.com/max/1600/1*KKEiafrP-Y9LqsabOkEuPA.gif" width="100%" /></p>


"What did parents do before there were neural networks?"

"They put their kids to sleep by making them play Where's Wally. Damn computers."
python  wally  ai  machinelearning  tensorflow 
january 2018 by charlesarthur
Researchers made Google's image recognition AI mistake a rifle for a helicopter • WIRED
Louise Matsakis:
<p>algorithms, unlike humans, are susceptible to a specific type of problem called an “adversarial example.” These are specially designed optical illusions that fool computers into doing things like mistake a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms.

While a panda-gibbon mix-up may seem low stakes, an adversarial example could thwart the AI system that controls a self-driving car, for instance, causing it to mistake a stop sign for a speed limit one. They’ve already been used to beat other kinds of algorithms, like spam filters.

Those adversarial examples are also much easier to create than was previously understood, according to research released Wednesday from MIT’s Computer Science and Artificial Intelligence Laboratory. And not just under controlled conditions; the team reliably fooled Google’s Cloud Vision API, a machine learning algorithm used in the real world today.</p>


There's that need for oversight again, except: if these things are classifying colossal numbers of objects, how will we know when one produces a false negative? (The false positives should stick out a mile.)
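For the curious, the basic mechanics behind adversarial examples are compact enough to sketch. Below is a minimal targeted fast-gradient-sign attack in PyTorch against an off-the-shelf ImageNet classifier; this is my illustration of the general technique, not the MIT team's attack, which notably works without access to the target model's gradients:

```python
# Targeted FGSM sketch: nudge an image towards a chosen (wrong) class with a
# single gradient step. Illustrative only; file name and class index are
# hypothetical placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

model = models.resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),   # pixel values in [0, 1]
])

def targeted_fgsm(image_path, target_class, epsilon=0.01):
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient of the target-class loss, so the image moves
    # towards the chosen label, then clamp back to the valid pixel range.
    return (x - epsilon * x.grad.sign()).clamp(0, 1).detach()

# adv = targeted_fgsm("some_image.jpg", target_class=0)  # hypothetical inputs
```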
ai  vision  adversarial 
december 2017 by charlesarthur
Artificial intelligence is killing the uncanny valley and our grasp on reality • WIRED
Sandra Upson:
<p>Progress on videos may move faster. Hany Farid, an expert at detecting fake photos and videos and a professor at Dartmouth, worries about how fast viral content spreads, and how slow the verification process is. Farid imagines a near future in which a convincing fake video of President Trump ordering the total nuclear annihilation of North Korea goes viral and incites panic, like a recast War of the Worlds for the AI era. “I try not to make hysterical predictions, but I don’t think this is far-fetched,” he says. “This is in the realm of what’s possible today.”

Fake Trump speeches are already circulating on the internet, a product of Lyrebird, the voice synthesis startup—though in the audio clips the company has shared with the public, Trump keeps his finger off the button, limiting himself to praising Lyrebird. Jose Sotelo, the company’s cofounder and CEO, argues that the technology is inevitable, so he and his colleagues might as well be the ones to do it, with ethical guidelines in place. He believes that the best defense, for now, is raising awareness of what machine learning is capable of. “If you were to see a picture of me on the moon, you would think it’s probably some image editing software,” Sotelo says. “But if you hear convincing audio of your best friend saying bad things about you, you might get worried. It’s a really new technology and a really challenging problem.”</p>
ai  uncannyvalley  fake  video 
december 2017 by charlesarthur
AI can be a tough sell in the enterprise, despite potential • WSJ
<p>Artificial intelligence and machine learning tools are expected to boost productivity across all industries in the years ahead. Yet, as many early-stage applications falter, they can run into resistance in the workplace, from the shop floor to the executive suite.

Take Monsanto Co., which expects a vast majority of its early AI and deep-learning projects to fail, says Anju Gupta, the agricultural giant’s director of digital partnerships and outreach.

A 99% failure rate with a current slate of 50-plus deep-learning projects is acceptable because “that 1% is going to bring exponential gain,” Ms. Gupta told a crowd of enterprise IT managers gathered here at an AI industry conference.

The stakes are high, according to Heath Terry, a managing director at Goldman Sachs Group Inc. It estimates that AI-enabled processes will result in up to $20bn in annual savings in the agricultural sector alone, he said.

Across the board, Goldman Sachs expects AI to add between 51 and 154 basis points to U.S. productivity by 2025, the most significant boost in productivity in decades, Mr. Terry said. Already, he adds, 13% of S&P 500 firms have mentioned AI in earnings calls, as of the second quarter, while venture capital funding for AI has doubled this year to more than $10bn.

Still, failures in early tests can risk creating a backlash to AI deployments across a company, despite the potential gains, Ms. Gupta said.</p>
ai  productivity 
december 2017 by charlesarthur
Is AlphaZero really a scientific breakthrough in AI? • Medium
Jose Camacho Collados is an AI/NLP researcher and an international chess master:
<p>We should scientifically scrutinize alleged breakthroughs carefully, especially in the period of AI hype we live in now. It is actually the responsibility of researchers in this area to accurately describe and advertise our achievements, and try not to contribute to the growing (often self-interested) misinformation and mystification of the field. In fact, this early December in NIPS, arguably the most prestigious AI conference, some researchers raised important concerns about the lack of rigour of this scientific community in recent years.

In this case, given the relevance of the claims, I hope these concerns will be clarified and solved in order to be able to accurately judge the actual scientific contribution of this feat, a judgement that is not possible to make right now. Probably with a better experimental design, as well as an effort on reproducibility, the conclusions would be a bit weaker than originally claimed. </p>


He has a number of questions about the AlphaZero/Stockfish matchup. Some seem a bit weak, or easily answered, but the question of reproducibility is important. Deepmind is making big claims, but this isn't how you do real science.
ai  chess  alphazero 
december 2017 by charlesarthur
AI-assisted fake porn is here and we’re all fscked • Motherboard
Samantha Cole:
<p>There’s a video of Gal Gadot having sex with her stepbrother on the internet. But it’s not really Gadot’s body, and it’s barely her own face. It’s an approximation, face-swapped to look like she’s performing in an existing incest-themed porn video.

The video was created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.

It's not going to fool anyone who looks closely. Sometimes the face doesn't track correctly and there's an uncanny valley effect at play, but at a glance it seems believable. It's especially striking considering that it's allegedly the work of one person—a Redditor who goes by the name 'deepfakes'—not a big special effects studio that can digitally recreate a young Princess Leia in Rogue One using CGI. Instead, deepfakes uses open-source machine learning tools like TensorFlow, which Google makes freely available to researchers, graduate students, and anyone with an interest in machine learning.

Like the Adobe tool that can make people say anything, and the Face2Face algorithm that can swap a recorded video with real-time face tracking, this new type of fake porn shows that we're on the verge of living in a world where it's trivially easy to fabricate believable videos of people doing and saying things they never did. Even having sex.</p>


"Not going to fool anyone who looks closely". You think people watching that sort of stuff are going to look closely?
porn  ai  fake 
december 2017 by charlesarthur
Google's AlphaZero destroys Stockfish in 100-game match • Chess.com
Mike Klein:
<p>Chess changed forever today. And maybe the rest of the world did, too.

A little more than a year after AlphaGo sensationally won against the top Go player, the artificial-intelligence program AlphaZero has obliterated the highest-rated chess engine. 

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn't stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to "learn" chess. Sorry humans, you had a good run.

That's right - the programmers of AlphaZero, housed within the DeepMind division of Google, had it use a type of "machine learning," specifically reinforcement learning. Put more plainly, AlphaZero was not "taught" the game in the traditional sense. That means no opening book, no endgame tables, and apparently no complicated algorithms dissecting minute differences between center pawns and side pawns…

…GM Peter Heine Nielsen, the longtime second of World Champion GM Magnus Carlsen, is now on board with the FIDE president in one way: aliens. As he told Chess.com, "After <a href="https://arxiv.org/pdf/1712.01815.pdf">reading the paper</a> but especially seeing the games I thought, well, I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know."</p>

The article includes one of the games. It feels quite different from how a human plays. AlphaZero seems to play as though it has all the time in the world; that it's not particularly worried by threats, but equally wants to make exchanges on its own terms. Stockfish never seems to force it. AlphaZero even shows which openings are best. Queen's Gambit and English Opening, apparently. (I prefer Bird's Opening. Get things started.)

As Eric David <a href="https://siliconangle.com/blog/2017/12/06/deepminds-alphago-mastered-chess-spare-time/">notes at Silicon Angle</a>:
<p>What makes DeepMind’s latest accomplishment noteworthy is the fact that it conquered three games with very different rule sets using a single AI. AlphaGo Zero, the latest version of AlphaGo, began “tabula rasa” without any prior knowledge or understanding of Go, shogi or chess, but the AI managed to achieve “superhuman performance” in all three games with stunning speed. IBM spent more than 10 years perfecting Deep Blue before it successfully mastered chess. AlphaGo Zero did it in just 24 hours.</p>
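To give a feel for what "no opening book, just reinforcement learning" means in practice, here is a toy version of the self-play data-collection loop such systems are built around, using the python-chess library, with a uniformly random policy standing in for the neural network and search. This is a sketch of the loop's shape only, nothing like DeepMind's actual system:

```python
# Toy self-play loop: play the engine against itself and record positions.
# In AlphaZero-style training the random choice below would be a policy
# network guided by tree search, and each position would be labelled with
# the game result and search probabilities to update the network.
import random
import chess

def self_play_game(max_moves=200):
    """Play one game against itself; return (positions, result string)."""
    board = chess.Board()
    positions = []
    while not board.is_game_over() and board.fullmove_number <= max_moves:
        positions.append(board.fen())                    # training input
        move = random.choice(list(board.legal_moves))    # stand-in policy
        board.push(move)
    return positions, board.result(claim_draw=True)

positions, result = self_play_game()
print(len(positions), "positions, result:", result)
```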
chess  deepmind  ai  learning  machinelearning 
december 2017 by charlesarthur
This robot aced an exam without understanding a thing • CNBC
Ruth Umoh:
<p>The Todai Robot, for example, was able to write a 600-word essay on maritime trade in the 17th century better than most students. Noriko Arai, AI expert and member of the team that built the robot, explains in her TED Talk "Can a Robot Pass a University Entrance Exam?" that this wasn't because it possesses intelligence, but rather because it can recognize key words.

"Our robot took the sentences from the textbooks and Wikipedia, combined them together, and optimized it to produce an essay without understanding a thing," Arai says.

"We humans can understand the meaning," she says. "That is something which is very, very lacking in AI."
Over the last year, there has been increasing concern over how smart robots are becoming and the eventual eradication of certain industries. However, most of the focus has been on the loss of blue collar jobs. But according to David Lee, vice president of innovation at UPS, it's not just jobs like factory worker and truck driver at risk.

In his TED Talk titled, "Why Jobs of the Future Won't Feel Like Work," Lee says that even the smartest, highest-paid people will be affected by the "tremendous gains in the quality of analysis and decision-making because of machine learning."</p>


Its essay was marked in the top 20% of students on an entrance exam to the University of Tokyo. I'm not sure this matters. Better questions are: can it act on what it reports? Can it decide whether the content is correct or not? Synthesizing human writing is, as this demonstrates, something lots of students learn to do. What's more important is learning what to do next.
ai  robot  exam 
december 2017 by charlesarthur
The impossibility of intelligence explosion • Medium
François Chollet:
<p>What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but given the extent to which our most fundamental behaviors and early learning patterns are hard-coded, chances are this human brain would not display any intelligent behavior, and would quickly die off. Not so smart now, Mr. Brain.

What would happen if we were to put a human — brain and body — into an environment that does not feature human culture as we know it? Would Mowgli the man-cub, raised by a pack of wolves, grow up to outsmart his canine siblings? To be smart like us? And if we swapped baby Mowgli with baby Einstein, would he eventually educate himself into developing grand theories of the universe? Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any intelligence beyond basic animal-like survival behaviors. As adults, they cannot even acquire language.

If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.</p>
ai  artificialintelligence 
november 2017 by charlesarthur