amy + facebook: 8 bookmarks

facebookresearch/fastText: Library for fast text representation and classification.
fastText is a library for efficient learning of word representations and sentence classification.
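For context, a minimal usage sketch with the official fastText Python bindings (pip install fasttext). The file names here are hypothetical; supervised training data is expected in fastText's __label__ format, one example per line (e.g. "__label__positive I loved this movie"):

    import fasttext

    # Train a text classifier from a labeled file (hypothetical path).
    model = fasttext.train_supervised(input="train.txt")

    # Predict the top label for a sentence; returns (labels, probabilities).
    labels, probs = model.predict("this library is fast")
    print(labels, probs)

    # Learn word representations from raw text with the skipgram model.
    vec_model = fasttext.train_unsupervised("corpus.txt", model="skipgram")
    print(vec_model.get_word_vector("facebook")[:5])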
nlp  machine_learning  Facebook  classification 
june 2018 by amy
Facebook announces major AI commitment to CIFAR : CIFAR
Facebook announced a major investment in CIFAR today, a result of the Institute’s leadership in the field of artificial intelligence (AI). The US$2.625 million investment over five years will continue Facebook’s support of CIFAR’s Learning in Machines & Brains program, and will also fund a Facebook-CIFAR Chair in Artificial Intelligence at the Montreal Institute for Learning Algorithms (MILA).

Facebook made the announcement Friday at a ceremony at McGill University in Montreal, attended by Prime Minister Justin Trudeau and CIFAR President & CEO Alan Bernstein. Facebook also announced funding for a Facebook AI Research (FAIR) Lab to be headed by Joëlle Pineau, a CIFAR Senior Fellow in the Learning in Machines & Brains program, and an artificial intelligence researcher at McGill. Pineau will be joined at FAIR by CIFAR Associate Fellow Pascal Vincent, an associate professor at the University of Montreal.
machine_learning  Facebook 
september 2017 by amy
Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour – Facebook Research
Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves ∼90% scaling efficiency when moving from 8 to 256 GPUs. This system enables us to train visual recognition models on internet-scale data with high efficiency.
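The two techniques named in the abstract are simple to state. Below is a minimal sketch (my own, not the paper's Caffe2 code; all names are hypothetical) of a learning-rate schedule that applies the linear scaling rule with a gradual warmup over the first five epochs, using the paper's reference values of base learning rate 0.1 at minibatch size 256:

    def learning_rate(epoch, step, steps_per_epoch,
                      base_lr=0.1, base_batch=256, batch=8192,
                      warmup_epochs=5):
        """Linear scaling rule with gradual warmup.

        The target LR scales linearly with minibatch size; during the
        first warmup_epochs the LR ramps linearly from base_lr up to
        the scaled target to avoid early optimization difficulties.
        """
        target_lr = base_lr * batch / base_batch  # linear scaling rule
        if epoch < warmup_epochs:
            steps_done = epoch * steps_per_epoch + step
            total_warmup = warmup_epochs * steps_per_epoch
            return base_lr + (target_lr - base_lr) * steps_done / total_warmup
        return target_lr

For batch 8192 this yields a target learning rate of 3.2, reached at the end of epoch 5, after which the usual stepwise decay schedule would apply.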
machine_learning  Facebook 
june 2017 by amy
How You Can Help Netflix Integrate With Facebook In The U.S. | Fast Company
Netflix cannot currently integrate with Facebook in the US. There's a bill to change that. You can help via …
Facebook  Netflix  from twitter
july 2011 by amy
