mike + facebook   8

Build a Better Monster: Morality, Machine Learning, and Mass Surveillance
This is the text version of "Build a Better Monster: Morality, Machine Learning, and Mass Surveillance", a talk I gave on April 18, 2017, at the Emerging Technologies for the Enterprise conference in Philadelphia.
facebook  advertising  google  politics  privacy  cryptography 
5 days ago by mike
facebookresearch/StarSpace: Learning embeddings for classification, retrieval and ranking.
Learning embeddings for classification, retrieval and ranking. CPU-friendly.
facebook  nlp  word2vec 
24 days ago by mike
Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective – Facebook Research
Machine learning sits at the core of many essential products and services at Facebook. This paper describes the hardware and software infrastructure that supports machine learning at global scale. Facebook’s machine learning workloads are extremely diverse: services require many different types of models in practice. This diversity has implications at all layers in the system stack. In addition, a sizable fraction of all data stored at Facebook flows through machine learning pipelines, presenting significant challenges in delivering data to high-performance distributed training flows. Computational requirements are also intense, leveraging both GPU and CPU platforms for training and abundant CPU capacity for real-time inference. Addressing these and other emerging challenges continues to require diverse efforts that span machine learning algorithms, software, and hardware design.
facebook  hardware  infrastructure  machinelearning 
8 weeks ago by mike
Algorithms, clickworkers, and the befuddled fury around Facebook Trends | Social Media Collective
We prefer the idea that algorithms run on their own, free of the messy bias, subjectivity, and political aims of people. It’s a seductive and persistent myth, one Facebook has enjoyed and propagated. But it’s simply false.

I’ve already commented on this, and many of those who study the social implications of information technology have made this point abundantly clear (including Pasquale, Crawford, Ananny, Tufekci, boyd, Seaver, McKelvey, Sandvig, Bucher, and nearly every essay on this list). But it persists: in statements made by Facebook, in the explanations offered by journalists, even in the words of Facebook’s critics.

If you still think algorithms are neutral because they’re not people, here’s a list, not even an exhaustive one, of the human decisions that have to be made to produce something like Facebook’s Trending Topics (which, keep in mind, pales in scope and importance next to Facebook’s larger algorithmic endeavor, the “news feed” listing your friends’ activity). Some are made by the engineers designing the algorithm; others are made by curators who turn the output of the algorithm into something presentable. If your eyes start to glaze over, that’s the point; read any three points and then move on — they’re enough to dispel the myth. Ready?
facebook  bias 
may 2016 by mike