ethics   42919


Peter Campbell · Why does it take so long to mend an escalator? · LRB 7 March 2002
Models and monitored performance are essential management tools. They can give answers to questions like ‘Are we doing as well as our competitors?’ ‘As we did last year?’ ‘As we could?’ Profit, turnover, customer satisfaction and so on can be compared and targets set. When the calculations are used to define outcomes – and when these outcomes cannot be measured simply – the potential for argument and for the distortion of the system to meet the model’s requirements (which is what seems to have happened in the case of hospitals) is considerable.
funny  policy  politics  logistics  projectmanagement  global  london  ethics  datadecisions  analytics 
7 hours ago by jcberk
Why the science-fiction series Black Mirror tells us so much about the present | De Volkskrant
"Because it is more sensible to think about how we can integrate technology into our lives in a good way than to stay stuck in nightmare visions."
Tech  ethics  philosophy 
9 hours ago by yorickdupon
Execution | America Has Stopped Being a Civilized Nation - The New York Times
Another problem with this execution is Tennessee’s new protocol for lethal injection. The first drug administered in an execution is supposed to put the inmate to sleep so he can’t feel the effects of the other two drugs: the one that causes paralysis and the one that stops the heart. But midazolam, the sedative in Tennessee’s execution cocktail, doesn’t always render complete unconsciousness. It’s possible for the inmate to feel the effects of the next two drugs, and what he feels is akin to being suffocated and burned alive at the same time.

The United States Supreme Court had declined to delay the execution, but Justice Sonia Sotomayor strongly dissented: “In refusing to grant Irick a stay, the court today turns a blind eye to a proven likelihood that the state of Tennessee is on the verge of inflicting several minutes of torturous pain on an inmate in its custody,” Justice Sotomayor wrote. “If the law permits this execution to go forward in spite of the horrific final minutes that Irick may well experience, then we have stopped being a civilized nation and accepted barbarism.”
Execution  Death_Penalty  Drugs  Ethics  Law  Politics 
yesterday by mcbakewl
[1808.00023] The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes---like race, gender, and their proxies---are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area.
machine_learning  algorithms  bias  ethics  privacy  review  for_friends 
yesterday by rvenkat
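(Not from the paper itself: a minimal Python sketch of two of the fairness definitions the abstract describes, namely classification parity, checked here as equal false positive rates across groups, and calibration, checked as outcome rates that are independent of group once the risk score is held fixed. All data, group labels, and function names below are hypothetical.)

import numpy as np

def false_positive_rate(y_true, y_pred):
    # FPR = share of true negatives that were predicted positive.
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives]) if negatives.any() else np.nan

def classification_parity_gap(y_true, y_pred, group):
    # Absolute difference in FPR between groups 0 and 1.
    fpr_a = false_positive_rate(y_true[group == 0], y_pred[group == 0])
    fpr_b = false_positive_rate(y_true[group == 1], y_pred[group == 1])
    return abs(fpr_a - fpr_b)

def calibration_by_group(y_true, risk_score, group, bins=5):
    # For each risk-score bin, compare observed outcome rates across groups.
    # Calibration holds if the per-bin rates are approximately equal.
    edges = np.linspace(0, 1, bins + 1)
    report = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (risk_score >= lo) & (risk_score < hi)
        rates = [float(y_true[in_bin & (group == g)].mean())
                 for g in (0, 1)
                 if (in_bin & (group == g)).any()]
        report.append((lo, hi, rates))
    return report

# Example on synthetic data: a well-calibrated score can still fail
# classification parity, which is one of the tensions the paper discusses.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)
risk = rng.random(1000)
y_true = (rng.random(1000) < risk).astype(int)
y_pred = (risk > 0.5).astype(int)
print(classification_parity_gap(y_true, y_pred, group))
print(calibration_by_group(y_true, risk, group))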
Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible? A: Because Keynote Speakers Make Bad Life Decisions and Are Poor Role Models | USENIX
“Some people enter the technology industry to build newer, more exciting kinds of technology as quickly as possible. My keynote will savage these people and will burn important professional bridges, likely forcing me to join a monastery or another penance-focused organization. In my keynote, I will explain why the proliferation of ubiquitous technology is good in the same sense that ubiquitous Venus weather would be good, i.e., not good at all. Using case studies involving machine learning and other hastily-executed figments of Silicon Valley’s imagination, I will explain why computer security (and larger notions of ethical computing) are difficult to achieve if developers insist on literally not questioning anything that they do since even brief introspection would reduce the frequency of git commits. At some point, my microphone will be cut off, possibly by hotel management, but possibly by myself, because microphones are technology and we need to reclaim the stark purity that emerges from amplifying our voices using rams’ horns and sheets of papyrus rolled into cone shapes. I will explain why papyrus cones are not vulnerable to buffer overflow attacks, and then I will conclude by observing that my new start-up papyr.us is looking for talented full-stack developers who are comfortable executing computational tasks on an abacus or several nearby sticks” - this looks amazing, via Tom Carden
ethics  security  via:tomcarden  technology  futility 
yesterday by danhon
Leverage Points: Places to Intervene in a System - The Donella Meadows Project
A just-beneath-the-surface look at a hierarchical list of leverage points in complex systems.
philosophy  ethics  systems  design  worldbuilding 
2 days ago by dogrover
Google employee unrest
Concerns from staff about ethical quandaries, including AI and search in China
google  china  ai  ethics 
2 days ago by nelson
Ethical OS Helps Tech Startups Avert Moral Disasters | WIRED
A new guidebook shows tech companies that it's possible to anticipate future changes in humans' relationship with technology, and to tweak their products so they do less damage when those changes arrive.
futurology  technology  ethics 
2 days ago by cmananian


