What HBR Gets Wrong About Algorithms and Bias · fast.ai

14 bookmarks. First posted by Frieda.Mendelsohn in August 2018.

Favorite tweet: math_rachel

Many of the most chilling stories of algorithmic bias don’t involve a meaningful appeals process, perhaps because people incorrectly assume algorithms won't make mistakes. https://t.co/zGxUlJ8xhs pic.twitter.com/5PSzi0uzkM

— Rachel Thomas (@math_rachel) August 9, 2018

IFTTT  twitter  favorite 
August 2018 by tswaterman
“algorithms are often implemented without any appeals method in place (due to the misconception that algorithms are objective, accurate, and won’t make mistakes);
algorithms are often used at a much larger scale than human decision makers, in many cases, replicating an identical bias at scale (part of the appeal of algorithms is how cheap they are to use);
users of algorithms may not understand probabilities or confidence intervals (even if these are provided), and may not feel comfortable overriding the algorithm in practice (even if this is technically an option);
instead of just focusing on the least-terrible existing option, it is more valuable to ask how we can create better, less biased decision-making tools by leveraging the strengths of humans and machines working together”
AI  algorithms  2018 
August 2018 by Preoccupations
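The quoted passage notes that users of algorithms may not understand probabilities or confidence intervals even when they are provided. A minimal sketch (not from the article; the function name, the ensemble-of-scores setup, and the numbers are illustrative assumptions) of what reporting a score with an uncertainty range, rather than a bare number, might look like:

```python
# Hypothetical illustration: summarize repeated model scores for one case
# as a mean plus an approximate 95% interval, so a human reviewer sees
# how uncertain the algorithm's output actually is.
import statistics


def score_with_interval(samples):
    """Return (mean, (low, high)) using a normal-approximation interval."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    margin = 1.96 * sd / len(samples) ** 0.5  # half-width of ~95% CI
    return mean, (mean - margin, mean + margin)


# e.g. risk scores from five models in a hypothetical ensemble
scores = [0.62, 0.58, 0.71, 0.65, 0.60]
mean, (lo, hi) = score_with_interval(scores)
print(f"risk score {mean:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A display like `0.63 (0.59-0.68)` makes it easier for a reviewer to justify overriding the algorithm than a bare `0.63` does, which is the point the quote is making.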
Humans vs. machines is not a helpful framing
ai  algorithms  bias 
August 2018 by jomc
What HBR Gets Wrong About Algorithms and Bias. Written 07 Aug 2018 by Rachel Thomas. “The Harvard Business Review recently published an article, Want Less-Biased…”
from instapaper
August 2018 by kohlmannj
“algorithms can often exacerbate underlying societal problems”
links  dig101  syllabus 
August 2018 by samplereality