barbarafister + ai   87

AI Ethics: DNV GL Exec on Why Women Are Key to Ethics Research
In this edition of Behind the Scenes, we spotlight the role of women in technology. Toolbox catches up with Dr. Asun Lera St.Clair, Senior Principal Scientist at DNV GL, on how new AI technologies should be monitored and regulated, why greater diversity is a necessity for AI ethics research, why trust is fundamental for the scalability of AI technologies, and the biggest trends in AI in the 2020s.
gender  AI  tech&society  ethics  AlgoReport 
10 days ago by barbarafister
AI is An Ideology, Not A Technology | WIRED
At its core, "artificial intelligence" is a perilous belief that fails to recognize the agency of humans.
AlgoReport  AI  tech&society 
11 days ago by barbarafister
Overview ‹ AI + Ethics Curriculum for Middle School — MIT Media Lab
This project seeks to develop an open source curriculum for middle school students on the topic of artificial intelligence. Through a series of lessons and activities, students learn technical concepts—such as how to train a simple classifier—and the ethical implications those technical concepts entail, such as algorithmic bias.
AI  AlgoReport  K12  ethics  curricula  discrimination 
14 days ago by barbarafister
Spotlight on Artificial Intelligence and Freedom of Speech
Today, algorithms and AI are used for a wide range of interventions, such as spam filters, detection of copyright infringements, chatbots, (editorial) data analysis, or content ranking and distribution. Additionally, they have been deployed in policing not only online speech but also offline public spaces, for example with the help of smart video surveillance systems using facial recognition technology. However, their impact on freedom of expression, both positive and negative, is still severely under-explored. While responsible implementation can benefit society, there is a genuine risk that commercial, political or state interests could have a deteriorating effect on human rights, in particular freedom of expression and media freedom. Therefore, it is crucial to understand better the human rights implications of their use, and to ensure that algorithms and AI do not censor or have a chilling effect on free speech.
freespeech  AI  socialmedia  humanrights  AlgoReport 
15 days ago by barbarafister
Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society
One way of carving up the broad "AI ethics and society" research space that has emerged in recent years is to distinguish between "near-term" and "long-term" research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.
ethics  AI  AlgoReport 
15 days ago by barbarafister
The Second Wave of Algorithmic Accountability – Law and Political Economy
While the first wave of algorithmic accountability focuses on improving existing systems, a second wave of research has asked whether they should be used at all—and, if so, who gets to govern them.
AlgoReport  accountability  AI 
21 days ago by barbarafister
Consumer Autonomy Violations and the Coming AI Backlash | INSEAD Knowledge
The profoundly beneficial impact of AI-based systems may be blunted in the 2020s, if Big Tech isn’t careful.
agency  privacy  regulation  AI  AlgoReport 
21 days ago by barbarafister
Tip: Machine learning solutions for journalists | Tip of the day
Still scratching your head about using AI in your newsroom? Here are some of the techniques commonly used in the Quartz investigations team and AI studio.
journalism  AlgoReport  AI  machinelearning 
21 days ago by barbarafister
Fairness in algorithmic decision-making
A significant new challenge with these machine learning systems, however, is ascertaining when and how they could introduce bias into the decision-making process.
machinelearning  AI  discrimination  AlgoReport  algorithms 
22 days ago by barbarafister
Chinese Hospitals Deploy AI to Help Diagnose Covid-19 | WIRED
Software that reads CT lung scans had been used primarily to detect cancer. Now it's retooled to look for signs of pneumonia caused by coronavirus.
coronavirus  AI  healthcare  AlgoReport 
22 days ago by barbarafister
How to make artificial intelligence in newsrooms more ethical | Media news
From correcting algorithms that discriminate against certain groups of people to fighting filter bubbles that endanger democracy, we need to remind ourselves who is in charge of machine learning.
algorithms  discrimination  AI  ethics  journalism  AlgoReport 
24 days ago by barbarafister
Digital dystopia: how algorithms punish the poor | Technology | The Guardian
In an exclusive global series, the Guardian lays bare the tech revolution transforming the welfare system worldwide – while penalising the most vulnerable
algorithms  artificialintelligence  poverty  AI  AlgoReport 
24 days ago by barbarafister
What AI still can’t do - MIT Technology Review
Artificial intelligence won’t be very smart if computers don’t grasp cause and effect. That’s something even humans have trouble with.
algorithms  AI  AlgoReport 
24 days ago by barbarafister
Dynamics of AI Principles: The Big Picture – AI ETHICS LAB
Interactive map of AI ethics statements from companies, organizations, and government agencies, with summaries and links to documents.
AI  ethics  AlgoReport 
25 days ago by barbarafister
The Age of the Algorithm - 99% Invisible
Computer algorithms now shape our world in profound and mostly invisible ways. They predict if we’ll be valuable customers and whether we’re likely to repay a loan. They filter what we see on social media, sort through resumes, and evaluate job performance. They inform prison sentences and monitor our health. Most of these algorithms have been created with good intentions. The goal is to replace subjective judgments with objective measurements. But it doesn’t always work out like that. [podcast]
AI  algorithms  AlgoReport  discrimination 
29 days ago by barbarafister
What fairness can learn from AI | Harvard Business School Digital Initiative
In this tech talk from the Harvard Business School Executive Education Advanced Management Program, David Weinberger of the Berkman Klein Center explores fairness in the world of AI, and asks: what counts as relevant, and what trade-offs are necessary to be “fair”?
ethics  AI  AlgoReport 
29 days ago by barbarafister
Fair Warning — Real Life
For as long as there has been AI research, there have been credible critiques about the risks of AI boosterism - Abeba Birhane
algorithms  AI  technoutopianism  technodeterminism  tech&society  AlgoReport 
4 weeks ago by barbarafister
Anatomy of an AI System
The Amazon Echo as an anatomical map of human labor, data and planetary resources
AI  tech&society  AlgoReport  radlib 
4 weeks ago by barbarafister
Frontiers | Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective | Big Data
As of 2020, the Public Employment Service Austria (AMS) makes use of algorithmic profiling of job seekers to increase the efficiency of its counseling process and the effectiveness of active labor market programs. … (re)entering the labor market. … The paper sheds light on the coproduction of (semi)automated managerial practices in employment agencies and the framing of unemployment under austerity politics.
AI  employment  discrimination  algorithms  austerity  AlgoReport  government 
5 weeks ago by barbarafister
The messy, secretive reality behind OpenAI’s bid to save the world
The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.
AI  AlgoReport 
5 weeks ago by barbarafister
Emotion AI researchers say overblown claims give their work a bad name - MIT Technology Review
Perhaps you’ve heard of AI conducting interviews. Or maybe you’ve been interviewed by one yourself. Companies like HireVue claim their software can analyze video interviews to figure out a candidate’s “employability score.” ... But many of these promises are unsupported by scientific consensus.
AI  algorithms  discrimination  AlgoReport  ethics 
5 weeks ago by barbarafister
Algorithmic Injustices and Relational Ethics w/ Abeba Birhane - #348
The inherent nature of so much of modern machine learning is to make predictions. An ethical approach to AI demands that we ask hard questions about those impacted by these predictions and assess the “harm of categorization.”
algorithms  machinelearning  ethics  AlgoReport  AI 
5 weeks ago by barbarafister
[2002.05193] A Hierarchy of Limitations in Machine Learning
This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society. Machine learning modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them, and consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning.
algorithms  AI  machinelearning  AlgoReport 
6 weeks ago by barbarafister
Researchers Created AI That Hides Your Emotions From Other AI - VICE
As smart speaker makers such as Amazon improve emotion-detecting AI, researchers are coming up with ways to protect our privacy.
privacy  voiceassistants  AlgoReport  AI  solutions  surveillance 
7 weeks ago by barbarafister
Key research questions include: What type of information is used as training data? Who generates and collects it and for what purpose? What segments of society does it reflect? Who and what does it exclude? And how does that affect the functioning of AI systems themselves?

The Data Genesis program's goal is to answer and demystify these questions.
AI  data  AlgoReport 
8 weeks ago by barbarafister
Why asking an AI to explain itself can make things worse
Creating neural networks that are more transparent can lead us to over-trust them. The solution might be to change how they explain themselves.
machinelearning  tech&society  AI  algorithms  AlgoReport  communication 
8 weeks ago by barbarafister
AI reflections in 2019 | Nature Machine Intelligence
There is no shortage of opinions on the impact of artificial intelligence and deep learning. We invited authors of Comment and Perspective articles that we published in roughly the first half of 2019 to look back at the year and give their thoughts on how the issue they wrote about developed.
AI  tech&society  AlgoReport 
8 weeks ago by barbarafister
Automating Society – Taking Stock of Automated Decision-Making in the EU
Systems for automated decision-making or decision support (ADM) are on the rise in EU countries: Profiling job applicants based on their personal emails in Finland, allocating treatment for patients in the public health system in Italy, sorting the unemployed in Poland, automatically identifying children vulnerable to neglect in Denmark, detecting welfare fraud in the Netherlands, credit scoring systems in many EU countries – the range of applications has broadened to almost all aspects of daily life.
AlgoReport  AI  algorithms 
8 weeks ago by barbarafister
Black-Boxed Politics: - Katarzyna Szymielewicz - Medium
It is a common mistake made by non-expert commentators and journalists to apply the same ‘black box’ narrative to simple and complex systems alike. As a result, designers of simple systems also get excused for the lack of transparency. In many cases, the public is kept in the dark not because the inner workings of the system are obscure but because transparency would threaten trade secrets or expose controversial choices made by the owners of the ‘AI system’.

This dynamic is a good reason in itself to question the ‘black box’ narrative and educate the public, so that not all statistical models land in the same black box. Bearing in mind then that today’s AI, and indeed the only type of AI that is on any realistic horizon of development, is nothing more than advanced statistical models, let’s first examine how non-technical factors can turn potentially interpretable AI systems into black boxes.
radlib  ethics  machinelearning  AlgoReport  transparency  AI 
9 weeks ago by barbarafister
Principled Artificial Intelligence | Berkman Klein Center
The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these "AI principles," there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.
AI  ethics  AlgoReport 
9 weeks ago by barbarafister
There's a new obstacle to landing a job after college: Getting approved by AI - CNN
College career centers used to prepare students for job interviews by helping them learn how to dress appropriately or write a standout cover letter. These days, they're also trying to brace students for a stark new reality: They may be vetted for jobs in part by artificial intelligence.
algorithms  hiring  AI  highered  AlgoReport 
10 weeks ago by barbarafister
Should colleges really be putting smart speakers in dorms?
Administrators say installing listening devices like Alexa in student bedrooms and hallways could help lower dropout rates. Not everyone agrees.
education  edtech  tech&society  surveillance  privacy  radlib  AlgoReport  AI 
10 weeks ago by barbarafister
We’re fighting fake news AI bots by using more AI. That’s a mistake. - MIT Technology Review
Facebook and others are battling complex disinformation with AI-driven defences. But this can only get us so far, argues an expert on high-tech propaganda.
ethics  AI  AlgoReport 
11 weeks ago by barbarafister
021219 AI-driven Personalization in Digital Media final WEB.pdf
This paper seeks to outline the implications of the adoption of AI, and more specifically of ML, by the old ‘gatekeepers’ – the legacy media – as well as by the new, algorithmic, media – the digital intermediaries – focusing on personalization. Data-driven personalization, despite demonstrating commercial benefits for the companies that deploy it, as well as a purported convenience for consumers, can have individual and societal implications that convenience simply cannot counterbalance. Nor are citizens necessarily complacent with regard to targeting, as has been suggested. According to an interim report on online targeting released by the UK’s Centre for Data Ethics and Innovation (CDEI), ‘people’s attitudes towards targeting change when they understand more of how it works and how pervasive it is’.
AI  machinelearning  journalism  AlgoReport 
11 weeks ago by barbarafister
How Big Tech Manipulates Academia to Avoid Regulation
'The discourse of “ethical AI” was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.'
ethics  AI  algorithms  radlib 
december 2019 by barbarafister
Algorithmic Injustices: Towards a Relational Ethics
It has become trivial to point out how decision-making processes in various social, political and economic spheres are assisted by automated systems. Improved efficiency, the hallmark of these systems, drives the mass-scale integration of automated systems into daily life. However, as a robust body of research in the area of algorithmic injustice shows, algorithmic tools embed and perpetuate societal and historical biases and injustice. In particular, a persistent recurring trend within the literature indicates that society's most vulnerable are disproportionally impacted. When algorithmic injustice and bias are brought to the fore, most of the solutions on offer 1) revolve around technical solutions and 2) do not centre disproportionally impacted groups. This paper zooms out and draws the bigger picture. It 1) argues that concerns surrounding algorithmic decision making and algorithmic injustice require fundamental rethinking above and beyond technical solutions, and 2) outlines a way forward in a manner that centres vulnerable groups through the lens of relational ethics.
algorithms  ethics  AI  AlgoReport 
december 2019 by barbarafister
A tug-of-war over biased AI - Axios
The idea that AI can replicate or amplify human prejudice, once argued mostly at the field's fringes, has been thoroughly absorbed into its mainstream: Every major tech company now makes the necessary noise about "AI ethics."

Yes, but: A critical split divides AI reformers. On one side are the bias-fixers, who believe the systems can be purged of prejudice with a bit more math. (Big Tech is largely in this camp.) On the other side are the bias-blockers, who argue that AI has no place at all in some high-stakes decisions.
AI  bias  algorithms  ethics  tech&society  PILprimer  radlib  AlgoReport 
december 2019 by barbarafister
AI Now 2019 Report
Report with analysis of the year's developments and recommendations.
tech&society  AI  PILprimer  radlib  AlgoReport 
december 2019 by barbarafister
Biased Algorithms Are Easier to Fix Than Biased People - The New York Times
Racial discrimination by algorithms or by people is harmful — but that’s where the similarities end.
algorithms  bias  discrimination  AI  PILprimer  radlib  AlgoReport 
december 2019 by barbarafister
Andrea Guzman - syllabus for Topics in Journalism & Society: AI, Automation & Journalism
journalism  AI  syllabus 
december 2019 by barbarafister
VB Special Issue: Power in AI | VentureBeat
Arguably more than in any previous transformational technological epoch, AI has required scrutiny of its ethical implications because of its breadth, its real or perceived lack of explainability, and the uniquely dramatic impact it can have on people's daily lives.

But ultimately, when we talk about ethics in AI, so often what we’re really talking about is power — who wields it, who doesn’t, and what that means for humanity. [intro to a special issue]
PILprimer  AI  AlgoReport 
november 2019 by barbarafister
Is AI Bias a Corporate Social Responsibility Issue?
By Mutale Nkonde "Algorithms cannot be trained to understand social context...the decisions made using dirty data are fed back into the training datasets and are then used to evaluate new information. This could create a toxic feedback loop, in which decisions based on historical biases continue to be made in perpetuity."
algorithms  bias  discrimination  AI  PILprimer  AlgoReport 
november 2019 by barbarafister
The problem with metrics is a big problem for AI ·
Goodhart’s Law states that “When a measure becomes a target, it ceases to be a good measure.” At their heart, what most current AI approaches do is to optimize metrics. The practice of optimizing metrics is not new nor unique to AI, yet AI can be particularly efficient (even too efficient!) at doing so.

This is important to understand, because any risks of optimizing metrics are heightened by AI.
discrimination  PILprimer  metrics  AI  AlgoReport 
october 2019 by barbarafister
Preparing Today’s Students for an AI Future - The Chronicle of Higher Education
Argues colleges should include discussions of AI in courses across the curriculum.
PILprimer  AI  AlgoReport 
october 2019 by barbarafister
Remarks at the SASE Panel On The Moral Economy of Tech
Machine learning is like money laundering for bias. ... The connected world we're building may resemble a computer system, but really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it. And it has the same old problems.

Approaching the world as a software problem is a category error that has led us into some terrible habits of mind.
AI  tech&society  machinelearning  PILprimer  algorithms  AlgoReport 
october 2019 by barbarafister
Can you make AI fairer than a judge? Play our courtroom algorithm game - MIT Technology Review
The US criminal legal system uses predictive algorithms to try to make the judicial process less biased. But there’s a deeper problem.
algorithms  PILprimer  bias  AI  COMPAS  sentencing  AlgoReport 
october 2019 by barbarafister
Machine Learning, Archives and Special Collections: A high level view
This brief article is an attempt to provide some reasonably sober and concrete sense of what actual and relevant changes might occur within the next decade or so, without going into technical details, and what these changes might imply for the practices of archives and special collections, or cultural memory organizations more broadly.
PILprimer  AI  archives  machinelearning  AlgoReport 
october 2019 by barbarafister
Unpacking “Ethical AI” - Data & Society: Points
This reading list is meant for anyone who wants to get a better sense of the landscape surrounding “ethical tech.” It features some of the more trenchant critiques of AI technologies and some early studies of the responses to those techniques. Hopefully, it can be a basis for understanding what some of the central concerns about AI technologies are, and how they’re being addressed.
tech&society  radlib  PILprimer  ethics  infoethics  AI  readinglist  AlgoReport 
september 2019 by barbarafister
See how an AI system classifies you based on your selfie - The Verge
Ask your standard recognition bot to do something novel, like analyze and label a photograph using only its acquired knowledge, and you’ll get some comically nonsensical results. That’s the fun behind ImageNet Roulette, a nifty web tool built as part of an ongoing art exhibition on the history of image recognition systems.
PILprimer  tech&society  radlib  AI  facialrecognition  AlgoReport 
september 2019 by barbarafister
Excavating Training Sets
The training sets of labeled images that are ubiquitous in contemporary computer vision and AI are built on a foundation of unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation. Furthermore, those epistemological and metaphysical assumptions hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science.

Datasets aren’t simply raw materials to feed algorithms, but are political interventions.
PILprimer  radlib  AlgoReport  tech&society  AI  facialrecognition  bias  diversity 
september 2019 by barbarafister
The Great White Robot God - David Golumbia - Medium
Many parts of digital culture are closely tied to right-wing politics, and so are many other parts of culture. Even given this general truism, specific segments of that culture evidence right-wing politics in specific ways. The culture of bitcoin is permeated with far-right economic “theories” that don’t show up in such direct form in, say, GamerGate.
In AGI, we see a particular overvaluation of “general intelligence” as not merely the mark of human being, but of human value: everything that is worth anything in being human is captured by “rationality” or “logic,” and soon enough, a quasi-religious revelation will occur that will make that undeniably — transcendentally — true. In other words, God will appear and tell us that white people have been right all along: the thing that they claim they have more of than anyone else will turn out to be the thing that matters more than anything else, the thing that according to which we should ultimately be evaluated, the thing that will save our souls.
radlib  tech&society  AGI  AI  whitesupremacy 
september 2019 by barbarafister
'Colorblind' Artificial Intelligence Just Reproduces Racism | HuffPost
Discrimination doesn't have to be deliberate or even conscious in order to be harmful. And the "colorblind" approach will not undo discrimination; it will entrench it. If we simply add AI technology on top of unjust social systems, without considering how they automate and speed up those very same systems, we only make injustice run more smoothly. And, crucially, we bestow upon it a gloss of fairness and impartiality it does not deserve ― which will make reforming it that much harder. (Jessie Daniels)
AI  privacy  tech&society  racism  PILprimer  AlgoReport 
september 2019 by barbarafister
The Hidden Costs of Automated Thinking | The New Yorker
[after describing a drug, the underlying mechanism of which is unknown] "...No one can say how it works. This approach to discovery—answers first, explanations later—accrues what I call intellectual debt. It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later. In some cases, we pay off this intellectual debt quickly. But, in others, we let it compound, relying, for decades, on knowledge that’s not fully known." AI is like this - our understanding is far behind its development, so we go deeper and deeper into intellectual debt to those who control the AI.
PILprimer  algorithms  intellectualdebt  machinelearning  AI  AlgoReport 
july 2019 by barbarafister
Don’t let industry write the rules for AI - Benkler
Algorithmic-decision systems touch every corner of our lives: medical treatments and insurance; mortgages and transportation; policing, bail and parole; newsfeeds and political and commercial advertising. Because algorithms are trained on existing data that reflect social inequalities, they risk perpetuating systemic injustice unless people consciously design countervailing measures. For example, AI systems to predict recidivism might incorporate differential policing of black and white communities, or those to rate the likely success of job candidates might build on a history of gender-biased promotions.

Inside an algorithmic black box, societal biases are rendered invisible and unaccountable.
algorithms  AI  PILprimer  AlgoReport 
may 2019 by barbarafister
AI’s white guy problem isn’t going away - MIT Technology Review
Tech companies are built—and tech products are designed—with a “fantasy belief” that they exist independently of the sexism, racism, and societal context around them.
AI  tech&society  diversity  radlib  PILprimer  AlgoReport 
april 2019 by barbarafister
MIT finally gives a name to the sum of all AI fears | ZDNet
Rather than simply being scared of “intelligent machines,” say researchers at MIT’s Media Lab, society needs to study algorithms with a multi-disciplinary approach akin to the field of ethology.
algorithms  AI  ethology  PILprimer  AlgoReport 
april 2019 by barbarafister
AI bias: fixing facial recognition technology doesn’t make it fair - Vox
Human bias can seep into AI systems. Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers.

The tech industry knows this, and some companies, like IBM, are releasing “debiasing toolkits” to tackle the problem. These offer ways to scan for bias in AI systems — say, by examining the data they’re trained on — and adjust them so that they’re fairer.

But that technical debiasing is not enough, and can potentially result in even more harm, according to a new report from the AI Now Institute.

The three authors say we need to pay attention to how the AI systems are used in the real world even after they’ve been technically debiased. And we need to accept that some AI systems should not be designed at all.
bias  diversity  algorithms  AI  PILprimer  AlgoReport 
april 2019 by barbarafister
This is how AI bias really happens—and why it’s so hard to fix - MIT Technology Review
Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it.
bias  AI  algorithms  PILprimer  AlgoReport 
february 2019 by barbarafister
The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence
"In short, the preoccupation with narrow computational puzzles distracts us from the far more important issue of the colossal asymmetry between societal cost and private gain in the rollout of automated systems. It also denies us the possibility of asking: Should we be building these systems at all?" – Julia Powles & Data & Society Affiliate Helen Nissenbaum
AI  newcourse  bias  algorithms 
december 2018 by barbarafister