ai-policy   24

Google Employees Resign in Protest Against Pentagon Contract
Google has emphasized that its AI is not being used to kill, but the use of artificial intelligence in the Pentagon’s drone program still raises complex ethical and moral issues for tech workers and for academics who study the field of machine learning.

In addition to the petition circulating inside Google, the Tech Workers Coalition launched a petition in April demanding that Google abandon its work on Maven and that other major tech companies, including IBM and Amazon, refuse to work with the...
google  ethics  ai-policy 
10 days ago by elrob
Import AI: #91: European countries unite for AI grand plan; why the future of AI sensing is spatial; and testing language AI with GLUE. | Import AI
“The principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education.”

Things that make you go ‘hmmm’: Mr Jordan thanks Jeff Bezos for reading an earlier draft of the post. If there’s any company well-placed to build a global ‘intelligent infrastructure’ that dovetails into the physical world, it’s Amazon.
ai-policy  market-design 
20 days ago by elrob
The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017 - YouTube
- Classification is always arbitrary and ambiguous, hard to find human classifications that aren't "of their time"
- So bias will always exist?
- Allocation bias and representation bias
- Can't think of it as only a technical problem, though it is technically a hard problem to "solve"
- Call for interdisciplinary approaches to solving the problem
- FATE group at Microsoft
ai-policy  best-of-2018 
22 days ago by elrob
Remarks at the SASE Panel On The Moral Economy of Tech
First, programmers are trained to seek maximal and global solutions. Why solve a specific problem in one place when you can fix the general problem for everybody, and for all time? We don't think of this as hubris, but as a laudable economy of effort. And the startup funding culture of big risk, big reward encourages this grandiose mode of thinking. There is powerful social pressure to avoid incremental change, particularly any change that would require working with people outside tech and treating them as intellectual equals.

Machine learning is like money laundering for bias. It's a clean, mathematical apparatus that gives the status quo the aura of logical inevitability. The numbers don't lie.

The reality is, opting out of surveillance capitalism means opting out of much of modern life.

We tend to imagine dystopian scenarios as ones where a repressive government uses technology against its people. But what scares me in these scenarios is that each one would have broad social support, possibly majority support. Democratic societies sometimes adopt terrible policies.

We should not listen to people who promise to make Mars safe for human habitation, until we have seen them make Oakland safe for human habitation.

Techies will complain that trivial problems of life in the Bay Area are hard because they involve politics. But they should involve politics. Politics is the thing we do to keep ourselves from murdering each other.
ai-policy  scary 
27 days ago by elrob
Deep learning: Why it’s time for AI to get philosophical
The other serious risk is something I call nerd-sightedness: the inability to see value beyond one’s own inner circle. There’s a tendency in the computer-science world to build first, fix later, while avoiding outside guidance during the design and production of new technology. Both the people working in AI and the people holding the purse strings need to start taking the social and ethical implications of their work much more seriously.

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.
ai-policy 
27 days ago by elrob
The Pursuit of AI Is More Than an Arms Race - Defense One
This open, collaborative character of AI development – for which the private sector has acted as the primary engine of innovation – also renders infeasible most attempts to ban or constrain its diffusion. For that reason, traditional paradigms of arms control are unlikely to be effective if applied to this so-called arms race.
ai-policy 
28 days ago by elrob
The Lebowski Theorem of machine superintelligence
In other words, Bach imagines that Bostrom’s hypothetical paperclip-making AI would foresee the fantastically difficult and time-consuming task of turning everything in the universe into paperclips and opt to self-medicate into no longer wanting or caring about making paperclips, instead doing whatever the AI equivalent is of sitting around on the beach all day sipping piña coladas, a la The Big Lebowski’s The Dude.
ai-policy  funny 
4 weeks ago by elrob
Palantir Knows Everything About You
Palantir’s software engineers showed up at the bank on skateboards. Neckties and haircuts were too much to ask, but JPMorgan drew the line at T-shirts. The programmers had to agree to wear shirts with collars, tucked in when possible.

After their departures, JPMorgan drastically curtailed its Palantir use, in part because “it never lived up to its promised potential,” says one JPMorgan executive who insisted on anonymity to discuss the decision.

Thiel told Bloomberg in 2011 that civil libertarians ought to embrace Palantir, because data mining is less repressive than the “crazy abuses and draconian policies” proposed after Sept. 11. The best way to prevent another catastrophic attack without becoming a police state, he argued, was to give the government the best surveillance tools possible, while building in safeguards against their abuse.

The company’s early data mining dazzled venture investors, who valued it at $20 billion in 2015. But Palantir has never reported a profit. It operates less like a conventional software company than like a consultancy, deploying roughly half its 2,000 engineers to client sites.

Palantir says its Privacy and Civil Liberties Team watches out for inappropriate data demands, but it consists of just 10 people in a company of 2,000 engineers.

Similarly, the court’s 2014 decision in Riley v. California found that cellphones contain so much personal information that they provide a virtual window into the owner’s mind, and thus necessitate a warrant for the government to search. Chief Justice John Roberts, in his majority opinion, wrote of cellphones that “with all they contain and all they may reveal, they hold for many Americans ‘the privacies of life.’” Justice Louis Brandeis, 86 years earlier, wrote a searing dissent in a wiretap case that seems to perfectly foresee the advent of Palantir.

When whole communities are algorithmically scraped for pre-crime suspects, data is destiny
palantir  privacy  ai-policy 
4 weeks ago by elrob
Prediction Machines
And they offer a motivating example that would require pretty advanced tech: At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them.

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve.

AI can lead to a strategic [business] change if three factors are present: (1) there is a core trade-off in the business model (e.g., shop-then-ship versus ship-then-shop); (2) the trade-off is influenced by uncertainty; and (3) an AI tool that reduces uncertainty tips the scales of the trade-off so that the optimal strategy changes from one side of the trade-off to the other.

If the prediction machine is an input that you can take off the shelf, then you can treat it like most companies treat energy and purchase it from the market, as long as AI is not core to your strategy. In contrast, if prediction machines are to be the center of your company’s strategy, then you need to control the data to improve the machine, so both the data and the prediction machine must be in house.
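To make the threshold idea concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (margin, return cost, baseline profit) is invented for illustration; the book does not give these figures.

# Sketch: at what prediction accuracy does speculative shipping beat waiting for orders?
# All profit figures below are made-up placeholders, not numbers from the book.

def ship_then_shop_profit(accuracy, margin=10.0, return_cost=6.0):
    # Expected profit per speculative shipment: a correct prediction earns the
    # margin, a wrong one incurs the cost of handling a return.
    return accuracy * margin - (1 - accuracy) * return_cost

shop_then_ship_profit = 4.0  # assumed baseline profit per conventional order

for accuracy in (0.5, 0.6, 0.7, 0.8, 0.9):
    speculative = ship_then_shop_profit(accuracy)
    better = "ship-then-shop" if speculative > shop_then_ship_profit else "shop-then-ship"
    print(f"accuracy={accuracy:.1f}: speculative profit {speculative:+.2f} -> {better}")

With these invented numbers the optimal strategy flips somewhere between 0.6 and 0.7 accuracy, which is the "turning the knob crosses a threshold" point the authors describe.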
ai-policy  books 
4 weeks ago by elrob
OpenAI Charter
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
ai-policy 
5 weeks ago by elrob
Text Embedding + Bias: Google
Human data encodes human biases by default. Being aware of this is a good start, and the conversation around how to handle it is ongoing. At Google, we are actively researching unintended bias analysis and mitigation strategies because we are committed to making products that work well for everyone. In this post, we'll examine a few text embedding models, suggest some tools for evaluating certain forms of bias, and discuss how these issues matter when building applications.
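As a rough illustration of the kind of evaluation the post discusses, here is a minimal WEAT-style association score in Python. The random "embeddings" and word lists are placeholders only; a real analysis would load one of the pre-trained embedding models the post examines.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attr_a, attr_b):
    # Mean cosine similarity to attribute set A minus mean similarity to set B;
    # a strongly positive or negative value suggests the word leans toward one set.
    return np.mean([cosine(word_vec, a) for a in attr_a]) - \
           np.mean([cosine(word_vec, b) for b in attr_b])

# Toy stand-in vectors; real tests look up words in a trained embedding model.
rng = np.random.RandomState(0)
emb = {w: rng.randn(50) for w in ["doctor", "nurse", "he", "him", "she", "her"]}

male = [emb["he"], emb["him"]]
female = [emb["she"], emb["her"]]
print("doctor:", association(emb["doctor"], male, female))
print("nurse:", association(emb["nurse"], male, female))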
NLP  ai-policy 
5 weeks ago by elrob
The Great AI Paradox
Good for Tegmark for being willing to have some fun. But a thought experiment that turns dozens of complex things into trivialities isn’t a rigorous analysis of the future of computing. In his story, Prometheus isn’t just doing computational statistics; it’s somehow made the leap to using common sense and perceiving social nuances.

Elsewhere in the book, Tegmark says the “near-term opportunities for AI to benefit humanity” are “spectacular”—“if we can manage to make it robust and unhackable.” Unhackable! That’s a pretty big “if.” But it’s just one of many problems in our messy world that keep technological progress from unfolding as uniformly, definitively, and unstoppably as Tegmark imagines.
books  ai-policy 
7 weeks ago by elrob
What Worries Me about AI
This data, in theory, allows the entities that collect it to build extremely accurate psychological profiles of both individuals and groups. Your opinions and behavior can be cross-correlated with those of thousands of similar people, achieving an uncanny understanding of what makes you tick — probably more predictive than what you yourself could achieve through mere introspection (for instance, Facebook “likes” enable algorithms to better assess your personality than your own friends could). This data makes it possible to predict a few days in advance when you will start a new relationship (and with whom), and when you will end your current one. Or who is at risk of suicide. Or which side you will ultimately vote for in an election, even while you’re still feeling undecided. And it’s not just individual-level profiling power — large groups can be even more predictable, as aggregating data points erases randomness and individual outliers.
best-of-2018  ai-policy  chollet  facebook 
7 weeks ago by elrob
Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear
But we make decisions in areas that we don’t fully understand every day — often very successfully — from the predicted economic impacts of policies to weather forecasts to the ways in which we approach much of science in the first place. We either oversimplify things or accept that they’re too complex for us to break down linearly, let alone explain fully. It’s just like the black box of A.I.: Human intelligence can reason and make arguments for a given conclusion, but it can’t explain the complex, underlying basis for how we arrived at a particular conclusion. Think of what happens when a couple get divorced because of one stated cause — say, infidelity — when in reality there’s an entire unseen universe of intertwined causes, forces and events that contributed to that outcome. Why did they choose to split up when another couple in a similar situation didn’t? Even those in the relationship can’t fully explain it. It’s a black box.
ai-policy 
9 weeks ago by elrob
AI Can Be Made Legally Accountable for Its Decisions
So when should explanations be given? Essentially, when the benefit outweighs the cost. “We find that there are three conditions that characterize situations in which society considers a decision-maker is obligated—morally, socially, or legally—to provide an explanation,” say Doshi-Velez and co.

The team say the decision must have an impact on a person other than the decision maker. There must be value to knowing if the decision was made erroneously. And there must be some reason to believe that an error has occurred (or will occur) in the decision-making process.
ai-policy 
9 weeks ago by elrob

