ai-policy   30

The AI Cold War With China That Threatens Us All | WIRED
In May, facial-recognition cameras at Jiaxing Sports Center Stadium in Zhejiang led to the arrest of a fugitive who was attending a concert. He had been wanted since 2015 for allegedly stealing more than $17,000 worth of potatoes.

Comment: interesting piece, but pieces like this lose points when they don't consider scenarios in which the sub-structures of existing society lose their standing, e.g. the impact of AI/autonomous weapons weakening the grip of nation-states themselves.

Under Xi, Communist Party committees within companies have expanded. Last November, China tapped Baidu, Alibaba, Tencent, and iFlytek, a Chinese voice-recognition software company, as the inaugural members of its “AI National Team.”

Shortly before Trump’s inauguration, Jack Ma, the chair of Alibaba, pledged to create a million jobs in the United States. By September 2018, he was forced to admit that the offer was off the table, another casualty in the growing list of companies and projects that are now unthinkable.
ai-policy  chinai 
7 weeks ago by elrob
AI Nationalism — Ian Hogarth
The Chinese state appears to have recognised the importance of data to its AI nationalism efforts. China’s latest cybersecurity law mandates that data exported out of China be reviewed.

China’s annual imports of semiconductor-related products are now $260 billion and have recently risen above spending on oil.

[2017] AlphaGo defeated world No. 1 Ke Jie 3-0 in Wuzhen, China. Live video coverage of AlphaGo vs. Ke Jie was blocked in China.

This kind of dependency would be tantamount to a new kind of colonialism.

We can see small examples of new geopolitical relationships emerging. In March, Zimbabwe’s government signed a strategic cooperation framework agreement with a Guangzhou-based startup, CloudWalk Technology, for a large-scale facial-recognition program: Zimbabwe will export a database of its citizens’ faces to China, allowing CloudWalk to improve its underlying algorithms with more data and Zimbabwe to get access to CloudWalk’s computer vision technology. This is part of the Chinese government’s much broader Belt and Road Initiative.
ai-policy  industrial-policy  best-of-2018 
august 2018 by elrob
Guide to working in AI policy and strategy - 80,000 Hours
A rough rule of thumb is to aim to read three or so AI papers a week to get a sense of what’s happening in the field and the terminology people use, and to be able to discriminate between real and fake AI news. Regarding AI jargon, your aim should be to attain at least interactional expertise – essentially, the ability to pass the AI-researcher Turing Test in casual conversation at conferences, even if you couldn’t write a novel research paper yourself.
ai-policy  career 
august 2018 by elrob
Positively shaping the development of artificial intelligence - 80,000 Hours
A growing number of experts believe a revolution will occur during the 21st century through the invention of machines whose intelligence far surpasses ours.
ai-policy 
august 2018 by elrob
AI for good: Is it for real? | Nesta
A quick summary would be that the various ethics committees – notably Facebook’s – have achieved very little, while activism and investigative journalism have achieved quite a lot. Probably the only useful thing the members of Facebook’s committee could have done would have been a mass resignation. If nothing else, difficult questions are now being asked of the ‘data ethics’ experts who spent a lot of time discussing theoretical questions (like the ‘trolley problem’) and little on the very rea...
ai-policy 
august 2018 by elrob
Chinese Interests Take a Big Seat at the AI Governance Table
First, the government hopes that its role in standardization will generate more value out of AI technologies by facilitating data pooling and improving the interoperability of systems. The importance of standards in spurring economic development, particularly for ICTs, is pervasive in Chinese policy and industry circles. According to a popular saying, “First-tier companies make standards, second-tier companies make technology, and third-tier companies make products.”
china  ai-policy 
june 2018 by elrob
Google Employees Resign in Protest Against Pentagon Contract
Google has emphasized that its AI is not being used to kill, but the use of artificial intelligence in the Pentagon’s drone program still raises complex ethical and moral issues for tech workers and for academics who study the field of machine learning.

In addition to the petition circulating inside Google, the Tech Workers Coalition launched a petition in April demanding that Google abandon its work on Maven and that other major tech companies, including IBM and Amazon, refuse to work with the...
google  ethics  ai-policy 
may 2018 by elrob
Import AI: #91: European countries unite for AI grand plan; why the future of AI sensing is spatial; and testing language AI with GLUE. | Import AI
“The principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education.”

Things that make you go ‘hmmm’: Mr Jordan thanks Jeff Bezos for reading an earlier draft of the post. If there’s any company well-placed to build a global ‘intelligent infrastructure’ that dovetails into the physical world, it’s Amazon.
ai-policy  market-design 
may 2018 by elrob
The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017 - YouTube
- Classification is always arbitrary and ambiguous; it's hard to find human classification schemes that aren't "of their time"
- So bias will always exist?
- Allocation bias and representation bias
- Can't think of it as only a technical problem, though it is technically a hard problem to "solve"
- Call for interdisciplinary approaches to solving problem
- FATE group at Microsoft
ai-policy  best-of-2018 
may 2018 by elrob
Remarks at the SASE Panel On The Moral Economy of Tech
First, programmers are trained to seek maximal and global solutions. Why solve a specific problem in one place when you can fix the general problem for everybody, and for all time? We don't think of this as hubris, but as a laudable economy of effort. And the startup funding culture of big risk, big reward encourages this grandiose mode of thinking. There is powerful social pressure to avoid incremental change, particularly any change that would require working with people outside tech and treating them as intellectual equals.

Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.

The reality is, opting out of surveillance capitalism means opting out of much of modern life.

We tend to imagine dystopian scenarios as ones in which a repressive government uses technology against its people. But what scares me in these scenarios is that each one would have broad social support, possibly majority support. Democratic societies sometimes adopt terrible policies.

We should not listen to people who promise to make Mars safe for human habitation, until we have seen them make Oakland safe for human habitation.

Techies will complain that trivial problems of life in the Bay Area are hard because they involve politics. But they should involve politics. Politics is the thing we do to keep ourselves from murdering each other.
ai-policy  scary 
april 2018 by elrob
Deep learning: Why it’s time for AI to get philosophical
The other serious risk is something I call nerd-sightedness: the inability to see value beyond one’s own inner circle. There’s a tendency in the computer-science world to build first, fix later, while avoiding outside guidance during the design and production of new technology. Both the people working in AI and the people holding the purse strings need to start taking the social and ethical implications of their work much more seriously.

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.
ai-policy 
april 2018 by elrob
The Pursuit of AI Is More Than an Arms Race - Defense One
This open, collaborative character of AI development – for which the private sector has acted as the primary engine of innovation – also renders infeasible most attempts to ban or constrain its diffusion. For that reason, traditional paradigms of arms control are unlikely to be effective if applied to this so-called arms race.
ai-policy 
april 2018 by elrob
The Lebowski Theorem of machine superintelligence
In other words, Bach imagines that Bostrom’s hypothetical paperclip-making AI would foresee the fantastically difficult and time-consuming task of turning everything in the universe into paperclips and opt to self-medicate into no longer wanting or caring about making paperclips, instead doing whatever the AI equivalent is of sitting around on the beach all day sipping piña coladas, à la The Big Lebowski’s The Dude.
ai-policy  funny 
april 2018 by elrob
Palantir Knows Everything About You
Palantir’s software engineers showed up at the bank on skateboards. Neckties and haircuts were too much to ask, but JPMorgan drew the line at T-shirts. The programmers had to agree to wear shirts with collars, tucked in when possible.

After their departures, JPMorgan drastically curtailed its Palantir use, in part because “it never lived up to its promised potential,” says one JPMorgan executive who insisted on anonymity to discuss the decision.

Thiel told Bloomberg in 2011 that civil libertarians ought to embrace Palantir, because data mining is less repressive than the “crazy abuses and draconian policies” proposed after Sept. 11. The best way to prevent another catastrophic attack without becoming a police state, he argued, was to give the government the best surveillance tools possible, while building in safeguards against their abuse.

The company’s early data mining dazzled venture investors, who valued it at $20 billion in 2015. But Palantir has never reported a profit. It operates less like a conventional software company than like a consultancy, deploying roughly half its 2,000 engineers to client sites.

Palantir says its Privacy and Civil Liberties Team watches out for inappropriate data demands, but it consists of just 10 people in a company of 2,000 engineers.

Similarly, the court’s 2014 decision in Riley v. California found that cellphones contain so much personal information that they provide a virtual window into the owner’s mind, and thus necessitate a warrant for the government to search. Chief Justice John Roberts, in his majority opinion, wrote of cellphones that “with all they contain and all they may reveal, they hold for many Americans ‘the privacies of life.’” Justice Louis Brandeis, 86 years earlier, wrote a searing dissent in a wiretap case that seems to perfectly foresee the advent of Palantir.

When whole communities are algorithmically scraped for pre-crime suspects, data is destiny
palantir  privacy  ai-policy 
april 2018 by elrob
Prediction Machines
And they offer a motivating example that would require pretty advanced tech: At some point, as [Amazon] turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them.

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve.

AI can lead to a strategic [business] change if three factors are present: (1) there is a core trade-off in the business model (e.g., shop-then-ship versus ship-then-shop); (2) the trade-off is influenced by uncertainty; and (3) an AI tool that reduces uncertainty tips the scales of the trade-off so that the optimal strategy changes from one side of the trade-off to the other.

If the prediction machine is an input that you can take off the shelf, then you can treat it like most companies treat energy and purchase it from the market, as long as AI is not core to your strategy. In contrast, if prediction machines are to be the center of your company’s strategy, then you need to control the data to improve the machine, so both the data and the prediction machine must be in-house.
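
Comment: a minimal sketch of that threshold logic, with toy numbers of my own (not from the book) – proactive shipping pays off once the expected margin on correct predictions outweighs the expected return cost on wrong ones:

def expected_profit_per_shipment(accuracy, margin, return_cost):
    # correct predictions earn the margin; wrong ones eat a return cost
    return accuracy * margin - (1 - accuracy) * return_cost

def breakeven_accuracy(margin, return_cost):
    # solve accuracy * margin - (1 - accuracy) * return_cost = 0
    return return_cost / (margin + return_cost)

margin, return_cost = 10.0, 15.0  # hypothetical per-item economics
print(breakeven_accuracy(margin, return_cost))                 # 0.6
print(expected_profit_per_shipment(0.7, margin, return_cost))  # 2.5

Once accuracy clears the 0.6 breakeven here, the optimal strategy flips from shop-then-ship to ship-then-shop, which is exactly the knob-turning threshold the authors describe.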
ai-policy  books 
april 2018 by elrob
OpenAI Charter
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
ai-policy 
april 2018 by elrob
Text Embedding + Bias: Google
Human data encodes human biases by default. Being aware of this is a good start, and the conversation around how to handle it is ongoing. At Google, we are actively researching unintended bias analysis and mitigation strategies because we are committed to making products that work well for everyone. In this post, we'll examine a few text embedding models, suggest some tools for evaluating certain forms of bias, and discuss how these issues matter when building applications.
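
Comment: a minimal sketch of the kind of association test such evaluations use (a WEAT-style score; the word lists and random placeholder vectors are my own illustration, not Google's tooling):

import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attrs_a, attrs_b, emb):
    # mean similarity to attribute set A minus mean similarity to set B
    sim_a = np.mean([cosine(emb[word], emb[a]) for a in attrs_a])
    sim_b = np.mean([cosine(emb[word], emb[b]) for b in attrs_b])
    return sim_a - sim_b

# emb stands in for a real text-embedding lookup (word2vec, GloVe, etc.);
# random vectors just keep the sketch self-contained and runnable.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["engineer", "nurse", "he", "him", "she", "her"]}
print(association("engineer", ["he", "him"], ["she", "her"], emb))

With real embeddings, a consistently positive score would mean "engineer" sits closer to the male attribute words – the unintended bias the post is about measuring before it leaks into applications.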
NLP  ai-policy 
april 2018 by elrob

related tags

ai  archive  best-of-2018  bias  blog  books  career  china  chinai  chollet  ethics  facebook  funny  gender  google  industrial-policy  machine-learning  market-design  mybesttweets  nesta  nisti  nlp  palantir  pocket  privacy  scary  tweetstorms 
