ELMo

ELMo (Embeddings from Language Models) is a word embedding method for representing a sequence of words as a corresponding sequence of vectors. Character-level tokens are taken as input to a bidirectional LSTM, which produces word-level embeddings. Like BERT (but unlike the static embeddings produced by "bag of words" approaches and earlier vector approaches such as Word2Vec and GloVe), ELMo embeddings are context-sensitive, producing different representations for words that share the same spelling but have different meanings (homonyms), such as "bank" in "river bank" and "bank balance".
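As an illustration of this context sensitivity, the sketch below embeds the two "bank" sentences with a pretrained ELMo model. It assumes the (now archived) AllenNLP library, whose ElmoEmbedder class and (3, num_tokens, 1024) output shape follow version 0.9.x; other releases may differ.

```python
# A minimal usage sketch, assuming AllenNLP 0.9.x and its default
# pretrained ELMo weights (downloaded on first use).
import numpy as np
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()

# The same surface form "bank", used in two different senses.
river = elmo.embed_sentence(["The", "river", "bank", "was", "muddy"])
money = elmo.embed_sentence(["My", "bank", "balance", "is", "low"])

# embed_sentence returns a (3, num_tokens, 1024) array: one 1024-dim
# vector per token for each of the three biLM layers. Take the top layer.
bank_river = river[2][2]  # "bank" is token index 2 in the first sentence
bank_money = money[2][1]  # "bank" is token index 1 in the second sentence

cosine = np.dot(bank_river, bank_money) / (
    np.linalg.norm(bank_river) * np.linalg.norm(bank_money)
)
print(f"cosine similarity between the two 'bank' vectors: {cosine:.3f}")
# The two vectors differ, reflecting the two senses; a static embedding
# such as Word2Vec would assign both occurrences the identical vector.
```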

ELMo's innovation stems from its use of a bidirectional language model. Unlike its unidirectional predecessors, which read text in only one direction, a bidirectional model processes a sequence in both the forward and backward directions. By considering a word's entire context, to both its left and its right, the model captures a more comprehensive representation of its meaning, encoding nuances that unidirectional models would miss.
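The following is a schematic PyTorch sketch of the bidirectional idea only, not ELMo's actual architecture (which additionally uses a character-convolution input layer, two stacked biLM layers, and a learned weighting of layers); all names and sizes here are arbitrary assumptions for illustration.

```python
# Schematic sketch: a bidirectional LSTM gives each token a representation
# that depends on its full left and right context.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 64, 128

embed = nn.Embedding(vocab_size, embed_dim)
bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

tokens = torch.randint(0, vocab_size, (1, 5))  # a 5-token "sentence"
states, _ = bilstm(embed(tokens))              # shape (1, 5, 2 * hidden_dim)

# Each token's vector concatenates the forward pass (summarizing the left
# context) and the backward pass (summarizing the right context).
forward_states = states[..., :hidden_dim]   # left-to-right direction
backward_states = states[..., hidden_dim:]  # right-to-left direction
print(states.shape)  # torch.Size([1, 5, 256])
```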

It was created by researchers at the Allen Institute for Artificial Intelligence and the University of Washington, and was first released in February 2018.