Improving Language Understanding with Unsupervised Learning


14 bookmarks. First posted by juancampa 13 days ago.


Our system works in two stages; first we train a transformer model on a very large amount of data in an unsupervised manner — using language modeling as a training signal — then we fine-tune this model on much smaller supervised datasets to help it solve specific tasks. via Pocket
IFTTT  Pocket 
10 days ago by roolio
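The excerpt above describes the two-stage recipe: unsupervised language-model pre-training of a transformer, then supervised fine-tuning. Below is a minimal sketch of stage 1 in plain PyTorch; the model sizes, hyperparameters, class names, and random stand-in data are illustrative assumptions, not OpenAI's actual configuration or released code.

# Stage 1 (sketch): unsupervised pre-training with a language-modeling objective.
# All sizes, names, and data are hypothetical placeholders.
import torch
import torch.nn as nn

class TinyTransformerLM(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_heads=4, n_layers=4, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask: each position attends only to earlier positions,
        # as a left-to-right language-modeling objective requires.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf"),
                                     device=tokens.device), diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.lm_head(h), h  # next-token logits, hidden states

model = TinyTransformerLM()
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-4)
loss_fn = nn.CrossEntropyLoss()

# Random token ids stand in for a large unlabeled text corpus.
batch = torch.randint(0, 10000, (8, 64))
logits, _ = model(batch[:, :-1])  # predict token t+1 from tokens up to t
loss = loss_fn(logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1))
loss.backward()
optimizer.step()

In stage 2 the same network would be fine-tuned on a much smaller labeled dataset, as sketched further down.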
We’ve obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we’re also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training. These results provide a convincing example that pairing supervised learning methods with unsupervised pre-training works very well; this is an idea that many have explored in the past, and we hope our result motivates further research into applying this idea on larger and more diverse datasets.

Unsupervised pre-training + fine-tuning actually works! (in NLP)
nlp  machine_learning 
12 days ago by amy
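Continuing the hypothetical TinyTransformerLM from the sketch above, here is a minimal sketch of stage 2: supervised fine-tuning with a small task-specific head, using a textual-entailment-style task as the example. The packed "premise, delimiter, hypothesis" input, the three-way label set, and the choice of classifying from the final position are illustrative assumptions, not a reproduction of the released system.

# Stage 2 (sketch): supervised fine-tuning for textual entailment.
# Reuses the hypothetical TinyTransformerLM defined in the stage-1 sketch.
import torch
import torch.nn as nn

class EntailmentClassifier(nn.Module):
    def __init__(self, pretrained_lm, d_model=256, n_classes=3):
        super().__init__()
        self.lm = pretrained_lm                        # transformer from stage 1
        self.clf_head = nn.Linear(d_model, n_classes)  # small task-specific head

    def forward(self, tokens):
        _, hidden = self.lm(tokens)        # hidden: (batch, seq_len, d_model)
        # Classify from the final position's representation (one common choice).
        return self.clf_head(hidden[:, -1, :])

lm = TinyTransformerLM()          # in practice: load the pre-trained weights here
clf = EntailmentClassifier(lm)
optimizer = torch.optim.Adam(clf.parameters(), lr=6.25e-5)
loss_fn = nn.CrossEntropyLoss()

# Random ids stand in for a tokenized "premise <delim> hypothesis" sequence.
tokens = torch.randint(0, 10000, (8, 64))
labels = torch.randint(0, 3, (8,))  # e.g. entailment / neutral / contradiction
logits = clf(tokens)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()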
Dataset          Task                SOTA  Ours
SNLI             Textual Entailment  89.3  89.9
MNLI Matched     Textual Entailment  80.6  82.1
MNLI Mismatched  Textual Entailment  80.1  81.4
SciTail          Textual…
from instapaper
13 days ago by hustwj
Today’s theme: All you really need is an insane amount of NLP data

Google Brain:

OpenAI:
from twitter_favs
13 days ago by kleinsound
Favorite tweet: OpenAI

Fine-tuning an unsupervised pretrained transformer to set new state-of-the-art on diverse language tasks: https://t.co/Fi9ba7JnMj

— OpenAI (@OpenAI) June 11, 2018

http://twitter.com/OpenAI/status/1006238175247794176
IFTTT  twitter  favorite 
13 days ago by tswaterman
Fine-tuning an unsupervised pretrained transformer to set new state-of-the-art on diverse language tasks:
from twitter_favs
13 days ago by juancampa