natural-language-processing   401


snipsco/snips-nlu: Snips Python library to extract meaning from text
python  natural-language-processing  library 
15 days ago by hschilling
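The library's documented workflow is train-then-parse: fit an engine on a JSON dataset of intents and example utterances, then parse free text into an intent plus slots. A minimal sketch (the dataset file name and the query are placeholders):

    import io
    import json

    from snips_nlu import SnipsNLUEngine

    with io.open("dataset.json") as f:       # training data in Snips' JSON format
        dataset = json.load(f)

    engine = SnipsNLUEngine()
    engine.fit(dataset)                      # trains intent classifier and slot filler
    parsing = engine.parse("Turn the lights on in the kitchen")
    print(json.dumps(parsing, indent=2))     # intent name plus extracted slots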
[1805.02356] Multimodal Machine Translation with Reinforcement Learning
Multimodal machine translation is one of the applications that integrate computer vision and language processing. It is a unique task given that, in the field of machine translation, many state-of-the-art algorithms still employ only textual information. In this work, we explore the effectiveness of reinforcement learning in multimodal machine translation. We present a novel algorithm based on the Advantage Actor-Critic (A2C) algorithm that specifically caters to the multimodal machine translation task of the EMNLP 2018 Third Conference on Machine Translation (WMT18). We evaluate our proposed algorithm on the Multi30K multilingual English-German image description dataset and the Flickr30K image entity dataset. Our model takes two channels of input, image and text, uses translation evaluation metrics as training rewards, and achieves better results than supervised-learning MLE baseline models. Furthermore, we discuss the prospects and limitations of using reinforcement learning for machine translation. Our experimental results suggest a promising reinforcement learning solution to the general task of multimodal sequence-to-sequence learning.
natural-language-processing  machine-learning  multitask-learning  rather-interesting  algorithms  performance-measure  to-write-about 
23 days ago by Vaguery
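The training loop the abstract describes — sample a translation token by token, score the finished sentence with an evaluation metric, and apply an advantage actor-critic update — can be sketched compactly. This is a toy illustration, not the paper's model: the sizes, the stand-in reward, and the constant decoder input are all assumptions.

    import torch
    import torch.nn as nn

    vocab, hidden, length = 100, 32, 10
    actor = nn.GRUCell(hidden, hidden)            # decoder / policy network
    to_logits = nn.Linear(hidden, vocab)
    critic = nn.Linear(hidden, 1)                 # state-value baseline

    def sentence_reward(tokens):                  # stand-in for BLEU vs. a reference
        return sum(t % 2 == 0 for t in tokens) / len(tokens)

    state = torch.zeros(1, hidden)                # would encode image + source text
    inp = torch.zeros(1, hidden)
    log_probs, values, tokens = [], [], []
    for _ in range(length):                       # sample one token per step
        state = actor(inp, state)
        dist = torch.distributions.Categorical(logits=to_logits(state))
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok).squeeze())
        values.append(critic(state).squeeze())
        tokens.append(int(tok))

    reward = sentence_reward(tokens)              # one sparse reward per sentence
    advantage = reward - torch.stack(values)      # A2C advantage estimate
    loss = -(torch.stack(log_probs) * advantage.detach()).sum() \
           + advantage.pow(2).sum()               # policy term + critic regression
    loss.backward()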
CS124: From Languages to Information (Winter 2018)
The online world has a vast array of unstructured information in the form of language and social networks. Learn how to make sense of it and how to interact with humans via language, from answering questions to giving advice!
course  natural-language-processing 
6 weeks ago by doneata
CMPSCI 585: Introduction to Natural Language Processing – String Edit Distance
In this homework assignment you will modify, extend, or apply the provided Python code for calculating string edit distance, run your new program on data, and write a short report about your experiences. There are several suggested tasks below. You only need to do one task, but don't be limited by the list below: you are free to come up with your own task. The exact assignment is up to your own interests and creativity.
programming  assignments  python  natural-language-processing 
6 weeks ago by doneata
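Independent of the course's provided code, the standard dynamic-programming formulation of Levenshtein edit distance is short enough to sketch here:

    def edit_distance(a: str, b: str) -> int:
        # Minimum number of insertions, deletions, and substitutions
        # needed to turn a into b. dp[j] holds the distance between
        # the current prefix of a and b[:j] (rolling single row).
        m, n = len(a), len(b)
        dp = list(range(n + 1))
        for i in range(1, m + 1):
            prev, dp[0] = dp[0], i
            for j in range(1, n + 1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,                        # delete a[i-1]
                            dp[j - 1] + 1,                    # insert b[j-1]
                            prev + (a[i - 1] != b[j - 1]))    # substitute
                prev = cur
        return dp[n]

    assert edit_distance("kitten", "sitting") == 3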
Stemming and lemmatization
For grammatical reasons, documents are going to use different forms of a word, such as organize, organizes, and organizing. Additionally, there are families of derivationally related words with similar meanings, such as democracy, democratic, and democratization. In many situations, it seems as if it would be useful for a search for one of these words to return documents that contain another word in the set.

The goal of both stemming and lemmatization is to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form. For instance:

am, are, is ⇒ be
car, cars, car's, cars' ⇒ car

The result of this mapping of text will be something like:

the boy's cars are different colors ⇒
the boy car be differ color
datascience  nlp  natural-language-processing 
6 weeks ago by dustinvenegas
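A quick way to see the contrast is NLTK's Porter stemmer next to its WordNet lemmatizer; a minimal sketch, assuming nltk is installed and the WordNet data has been downloaded:

    from nltk.stem import PorterStemmer, WordNetLemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    print(stemmer.stem("organizing"))             # crude suffix stripping; the
    print(stemmer.stem("democratization"))        # result need not be a real word
    print(lemmatizer.lemmatize("cars"))           # 'car' (default POS is noun)
    print(lemmatizer.lemmatize("are", pos="v"))   # 'be', via dictionary lookup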
Voyages in sentence space
Imagine a sentence. “I went looking for adventure.”

Imagine another one. “I never returned.”

Now imagine a sentence gradient between them—not a story, but a smooth interpolation of meaning. This is a weird thing to ask for! I’d never even bothered to imagine an interpolation between sentences before encountering the idea in a recent academic paper. But as soon as I did, I found it captivating, both for the thing itself—a sentence… gradient?—and for the larger artifact it suggested: a dense cloud of sentences, all related; a space you might navigate and explore.
natural-language-processing  rather-interesting  via:several  algorithms  feature-extraction  interpolation  to-write-about  consider:embedding-space 
11 weeks ago by Vaguery
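Mechanically, a sentence gradient is linear interpolation in a learned latent space: encode the two endpoint sentences, walk the line between their vectors, and decode each intermediate point. A schematic sketch in which encode and decode are hypothetical stand-ins for a trained sentence autoencoder:

    import numpy as np

    def sentence_gradient(encode, decode, s1, s2, steps=7):
        z1, z2 = encode(s1), encode(s2)           # sentence vectors
        for t in np.linspace(0.0, 1.0, steps):
            yield decode((1 - t) * z1 + t * z2)   # points along the line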
[1709.04109] Empower Sequence Labeling with Task-Aware Neural Language Model
Linguistic sequence labeling is a general modeling approach that encompasses a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features. However, in many cases, it is hard to obtain sufficient annotations to train these models. In this study, we develop a novel neural framework to extract abundant knowledge hidden in raw texts to empower the sequence labeling task. Besides word-level knowledge contained in pre-trained word embeddings, character-aware neural language models are incorporated to extract character-level knowledge. Transfer learning techniques are further adopted to mediate different components and guide the language model towards the key knowledge. Compared to previous methods, this task-specific knowledge allows us to adopt a more concise model and conduct more efficient training. Unlike most transfer learning methods, the proposed framework does not rely on any additional supervision; it extracts knowledge from the self-contained order information of training sequences. Extensive experiments on benchmark datasets demonstrate the effectiveness of leveraging character-level knowledge and the efficiency of co-training. For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching an F1 score of 91.71±0.10 without using any extra annotation.
natural-language-processing  deep-learning  neural-networks  nudge-targets  consider:feature-discovery  consider:representation  to-write-about 
11 weeks ago by Vaguery
[1611.05896] Answering Image Riddles using Vision and Reasoning through Probabilistic Soft Logic
In this work, we explore a genre of puzzles ("image riddles") which involves a set of images and a question. Answering these puzzles requires both visual detection capabilities (including object and activity recognition) and knowledge-based or commonsense reasoning. We compile a dataset of over 3k riddles, where each riddle consists of 4 images and a ground-truth answer. The annotations are validated using crowd-sourced evaluation. We also define an automatic evaluation metric to track future progress. Our task bears similarity to commonly known IQ tasks such as analogy solving and sequence filling that are often used to test intelligence.
We develop a Probabilistic Reasoning-based approach that utilizes probabilistic commonsense knowledge to answer these riddles with reasonable accuracy. We demonstrate the results of our approach using both automatic and human evaluations. Our approach achieves some promising results for these riddles and provides a strong baseline for future attempts. We make the entire dataset and related materials publicly available to the community on the ImageRiddle website (this http URL).
machine-learning  image-processing  natural-language-processing  deep-learning  puzzles  rather-interesting  to-write-about  nudge-targets  consider:looking-to-see  consider:integrating-NLP 
march 2018 by Vaguery
Abigail See
I’m Abi, a PhD student advised by Professor Chris Manning in the Stanford Natural Language Processing group. I aim to use this blog to communicate technical concepts in an accessible way. There is more information about me on my academic webpage.
people  natural-language-processing  blog 
march 2018 by doneata
[1710.02271] Unsupervised Extraction of Representative Concepts from Scientific Literature
This paper studies the automated categorization and extraction of scientific concepts from titles of scientific articles, in order to gain a deeper understanding of their key contributions and facilitate the construction of a generic academic knowledgebase. Towards this goal, we propose an unsupervised, domain-independent, and scalable two-phase algorithm to type and extract key concept mentions into aspects of interest (e.g., Techniques, Applications, etc.). In the first phase of our algorithm, we propose PhraseType, a probabilistic generative model which exploits textual features and limited POS tags to broadly segment text snippets into aspect-typed phrases. We extend this model to simultaneously learn aspect-specific features and identify academic domains in multi-domain corpora, since the two tasks mutually enhance each other. In the second phase, we propose an approach based on adaptor grammars to extract fine-grained concept mentions from the aspect-typed phrases without the need for any external resources or human effort, in a purely data-driven manner. We apply our technique to study literature from diverse scientific domains and show significant gains over state-of-the-art concept extraction techniques. We also present a qualitative analysis of the results obtained.
natural-language-processing  POS-tagging  algorithms  data-fusion  machine-learning  text-mining  nudge-targets  consider:feature-discovery 
february 2018 by Vaguery
[1703.00607] Dynamic Word Embeddings for Evolving Semantic Discovery
Word evolution refers to the changing meanings and associations of words throughout time, as a byproduct of human language evolution. By studying word evolution, we can infer social trends and language constructs over different periods of human history. However, traditional techniques such as word representation learning do not adequately capture the evolving language structure and vocabulary. In this paper, we develop a dynamic statistical model to learn time-aware word vector representations. We propose a model that simultaneously learns time-aware embeddings and solves the resulting "alignment problem". This model is trained on a crawled NYTimes dataset. Additionally, we develop multiple intuitive evaluation strategies of temporal word embeddings. Our qualitative and quantitative tests indicate that our method not only reliably captures this evolution over time, but also consistently outperforms state-of-the-art temporal embedding approaches on both semantic accuracy and alignment quality.
time-series  digital-humanities  natural-language-processing  representation  nudge  to-write-about  to-do 
february 2018 by Vaguery
Dynamic word embeddings for evolving semantic discovery | the morning paper
Prior approaches to solving this problem first use independent learning as per our straw man, and then post-process the embeddings in an alignment phase to try to match them up. But Yao et al. have found a way to learn temporal embeddings in all time slices concurrently, doing away with the need for a separate alignment phase. The experimental results suggest that this yields better outcomes than the prior two-step methods, and the approach is also robust against data sparsity (it will tolerate time slices where some words are rarely present or even missing).
digital-humanities  time-series  rather-interesting  to-write-about  natural-language-processing  representation  nudge  consider:generalization 
february 2018 by Vaguery
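The alignment phase the two-step baselines need is commonly solved with orthogonal Procrustes: find the rotation that best maps one time slice's embeddings onto the next over their shared vocabulary. A sketch, where E_old and E_new are hypothetical (vocab x dim) matrices row-aligned by word:

    import numpy as np

    def align(E_old, E_new):
        # W = argmin ||E_old @ W - E_new||_F  over orthogonal W
        u, _, vt = np.linalg.svd(E_old.T @ E_new)
        return E_old @ (u @ vt)                   # rotated old embeddings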
[1705.08432] Question-Answering with Grammatically-Interpretable Representations
We introduce an architecture, the Tensor Product Recurrent Network (TPRN). In our application of TPRN, internal representations learned by end-to-end optimization in a deep neural network performing a textual question-answering (QA) task can be interpreted using basic concepts from linguistic theory. No performance penalty need be paid for this increased interpretability: the proposed model performs comparably to a state-of-the-art system on the SQuAD QA task. The internal representation which is interpreted is a Tensor Product Representation: for each input word, the model selects a symbol to encode the word, and a role in which to place the symbol, and binds the two together. The selection is via soft attention. The overall interpretation is built from interpretations of the symbols, as recruited by the trained model, and interpretations of the roles as used by the model. We find support for our initial hypothesis that symbols can be interpreted as lexical-semantic word meanings, while roles can be interpreted as approximations of grammatical roles (or categories) such as subject, wh-word, determiner, etc. Fine-grained analysis reveals specific correspondences between the learned roles and parts of speech as assigned by a standard tagger (Toutanova et al. 2003), and finds several discrepancies in the model's favor. In this sense, the model learns significant aspects of grammar, after having been exposed solely to linguistically unannotated text, questions, and answers: no prior linguistic knowledge is given to the model. What is given is the means to build representations using symbols and roles, with an inductive bias favoring use of these in an approximately discrete manner.
natural-language-processing  deep-learning  recurrent-neural-networks  rather-interesting  representation  to-write-about  machine-learning 
february 2018 by Vaguery
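The binding operation at the heart of a Tensor Product Representation is an outer product of a symbol (filler) vector with a role vector, summed over the words of the input; a minimal sketch:

    import numpy as np

    def tpr(symbols, roles):
        # symbols, roles: one vector per word, chosen (softly, in TPRN)
        # by attention; the sum is the sentence representation
        return sum(np.outer(s, r) for s, r in zip(symbols, roles))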


