machine_learning   13821


Hello World | W. W. Norton & Company
"If you were accused of a crime, who would you rather decide your sentence—a mathematically consistent algorithm incapable of empathy or a compassionate human judge prone to bias and error? What if you want to buy a driverless car and must choose between one programmed to save as many lives as possible and another that prioritizes the lives of its own passengers? And would you agree to share your family’s full medical history if you were told that it would help researchers find a cure for cancer?
"These are just some of the dilemmas that we are beginning to face as we approach the age of the algorithm, when it feels as if the machines reign supreme. Already, these lines of code are telling us what to watch, where to go, whom to date, and even whom to send to jail. But as we rely on algorithms to automate big, important decisions—in crime, justice, healthcare, transportation, and money—they raise questions about what we want our world to look like. What matters most: Helping doctors with diagnosis or preserving privacy? Protecting victims of crime or preventing innocent people from being falsely accused?
"Hello World takes us on a tour through the good, the bad, and the downright ugly of the algorithms that surround us on a daily basis. Mathematician Hannah Fry reveals their inner workings, showing us how algorithms are written and implemented, and demonstrates the ways in which human bias can literally be written into the code. By weaving relatable, real-world stories together with accessible explanations of the underlying mathematics that powers algorithms, Hello World helps us to determine their power, expose their limitations, and examine whether they really are an improvement on the human systems they replace."
to:NB  books:noted  data_mining  machine_learning  prediction 
4 days ago by cshalizi
[1809.04578] Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability
Algorithmic predictions are increasingly used to aid, or in some cases supplant, human decision-making, and this development has placed new demands on the outputs of machine learning procedures. To facilitate human interaction, we desire that they output prediction functions that are in some fashion simple or interpretable. And because they influence consequential decisions, we also desire equitable prediction functions, ones whose allocations benefit (or at the least do not harm) disadvantaged groups.
We develop a formal model to explore the relationship between simplicity and equity. Although the two concepts appear to be motivated by qualitatively distinct goals, our main result shows a fundamental inconsistency between them. Specifically, we formalize a general framework for producing simple prediction functions, and in this framework we show that every simple prediction function is strictly improvable: there exists a more complex prediction function that is both strictly more efficient and also strictly more equitable. Put another way, using a simple prediction function both reduces utility for disadvantaged groups and reduces overall welfare. Our result is not only about algorithms but about any process that produces simple models, and as such connects to the psychology of stereotypes and to an earlier economics literature on statistical discrimination.
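The abstract's claim can be gestured at with a toy example (a minimal sketch, not the paper's formal framework; the population, scores, and thresholds below are entirely invented for illustration): a "simple" rule that applies one threshold to everyone is strictly improvable by a "complex" group-aware rule that is more accurate overall and more accurate for the disadvantaged group.

```python
# Hypothetical population: (group, score, truly_qualified).
# Assume a given score means more for group "B" (qualified at score >= 3)
# than for group "A" (qualified at score >= 5).
people = [("A", x, x >= 5) for x in range(1, 11)] + \
         [("B", x, x >= 3) for x in range(1, 11)]

def accuracy(predict, people):
    """Fraction of people the rule classifies correctly."""
    return sum(predict(g, x) == qualified
               for g, x, qualified in people) / len(people)

def simple(g, x):
    return x >= 5                      # one threshold for everyone

def complex_rule(g, x):
    return x >= (5 if g == "A" else 3)  # group-specific thresholds

acc_simple = accuracy(simple, people)        # 0.9: group B bears all errors
acc_complex = accuracy(complex_rule, people)  # 1.0: better overall AND for B
```

Under the simple rule, the two qualified members of group B with scores 3 and 4 are rejected, so group B alone pays for the rule's simplicity; the group-aware rule dominates it on both efficiency and equity, mirroring the abstract's "strictly improvable" result.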
sendhil.mullainathan  algorithmic_fairness  machine_learning 
4 days ago by rvenkat
How do we capture structure in relational data?
Despite its prevalence, graph structure is often discarded when applying machine learning...
articles  graph_databases  machine_learning 
5 days ago by gmisra


