lda2vec: Tools for interpreting natural language
The lda2vec model tries to mix the best parts of word2vec and LDA into a single framework. word2vec captures powerful relationships between words, but the resulting vectors are largely uninterpretable and don't represent documents. LDA, on the other hand, is quite interpretable by humans, but doesn't model local word relationships like word2vec does. lda2vec builds both word and document topics, makes them interpretable, extends topics over clients, times, and documents, and allows topics to be supervised.
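As a rough illustration of the core idea (a minimal NumPy sketch under stated assumptions, not the actual lda2vec implementation; every variable name here is invented), each word's prediction context is the sum of a word vector and a document vector, and the document vector is itself a softmax-weighted mixture over a small set of topic vectors, which is what makes it interpretable:

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, dim = 5, 8

# Dense topic vectors living in the same space as word vectors.
topic_vectors = rng.normal(size=(n_topics, dim))

# Unnormalized per-document weights; softmax turns them into a
# probability mixture over topics, as in LDA.
doc_weights = rng.normal(size=n_topics)
doc_proportions = np.exp(doc_weights) / np.exp(doc_weights).sum()

# Document vector = mixture of topic vectors (the LDA-like part).
doc_vector = doc_proportions @ topic_vectors

# Context vector = pivot word vector + document vector (the
# word2vec-like part); this context predicts nearby words.
pivot_word_vector = rng.normal(size=dim)
context_vector = pivot_word_vector + doc_vector
```

Because the document representation is a probability distribution over a handful of topics rather than an arbitrary dense vector, each document can be read as "80% topic 3, 15% topic 1, ...".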
See this Jupyter Notebook for an example of an end-to-end demonstration.
See this presentation for an overview of the benefits of word2vec, LDA, and lda2vec.
See the API reference docs.
Word2vec tries to model word-to-word relationships.
LDA models document-to-word relationships.
LDA yields topics over each document.
lda2vec yields topics not just over documents, but also over regions.
lda2vec also yields topics over clients.
In lda2vec, topics can be 'supervised' and forced to predict another target.
lda2vec also includes more contexts and features than LDA. LDA dictates that words are generated by a document vector, but we might have all kinds of 'side-information' that should influence our topics. For example, a single client comment is about a particular item ID, written at a particular time and in a particular region. In this case, lda2vec gives you topics over all items (separating jeans from shirts, for example), times (winter versus summer), regions (desert versus coastal), and clients (sporty versus professional attire).
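The extra contexts described above can be sketched by giving each piece of side-information its own topic space and summing its mixture vector into the word-prediction context (again a hypothetical NumPy sketch; the names and topic counts are illustrative, not lda2vec's API):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

def mixture_vector(n_topics):
    """A softmax mixture over a fresh set of topic vectors."""
    topic_vectors = rng.normal(size=(n_topics, dim))
    weights = rng.normal(size=n_topics)
    proportions = np.exp(weights) / np.exp(weights).sum()
    return proportions @ topic_vectors

# One interpretable topic mixture per context type.
doc_vec = mixture_vector(20)     # topics over documents
item_vec = mixture_vector(10)    # topics over item IDs (jeans vs shirts)
time_vec = mixture_vector(4)     # topics over time (winter vs summer)
region_vec = mixture_vector(6)   # topics over regions (desert vs coastal)
client_vec = mixture_vector(8)   # topics over clients (sporty vs professional)

# All contexts are summed with the pivot word vector before
# predicting neighboring words.
pivot_word = rng.normal(size=dim)
context = pivot_word + doc_vec + item_vec + time_vec + region_vec + client_vec
```

Each mixture stays a probability distribution over its own topics, so item topics, seasonal topics, and regional topics remain separately interpretable even though they jointly influence every word prediction.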
Ultimately, the topics are interpreted using the excellent pyLDAvis library:
Requirements:
- Python 2.7+
- NumPy 1.10+
- Chainer 1.5.1+
- spaCy 0.99+
Requirements for some features:
- CUDA support
- Testing utilities: py.test