Sutskever et al. 2014, "Sequence to Sequence Learning with Neural Networks"
A general framework for mapping one sequence to another using a neural network
Transduction: input sequences are transformed into output sequences in a one-to-one fashion.
E.g., signal (energy) transduction from electricity to sound and vice versa, as performed by a transducer (speaker, microphone)
Recall: the word chosen at each decoder step must be embedded before it is fed back as the input at the next step
Basic RNN-based encoder-decoder architecture: the final hidden state of the encoder RNN serves as the context for the decoder, acting as $h_0$ of the decoder RNN
The encoder and decoder are typically RNNs (see the sketch below)
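A minimal sketch of this encoder-decoder setup, assuming PyTorch with GRU cells; the class names, vocabulary sizes, and the choice of index 0 as the GO/start token are illustrative, not from the paper:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.rnn = nn.GRU(embed_size, hidden_size, batch_first=True)

    def forward(self, src):                     # src: (batch, src_len)
        _, h_n = self.rnn(self.embed(src))      # h_n: (1, batch, hidden_size)
        return h_n                              # final hidden state = context

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.rnn = nn.GRU(embed_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt, h0):                 # tgt: (batch, tgt_len)
        y, h_n = self.rnn(self.embed(tgt), h0)  # context h0 initializes the decoder
        return self.out(y), h_n                 # per-step vocabulary logits

enc, dec = Encoder(1000, 64, 128), Decoder(1000, 64, 128)
src = torch.randint(0, 1000, (2, 7))            # toy source batch
tgt = torch.randint(0, 1000, (2, 5))            # toy target batch (teacher forcing)
logits, _ = dec(tgt, enc(src))                  # encoder final state -> decoder h_0
print(logits.shape)                             # torch.Size([2, 5, 1000])

# Greedy decoding: the chosen (argmax) word is fed back, and re-embedded,
# at each step; index 0 is assumed to be the GO/start token.
h = enc(src)
tok = torch.zeros(2, 1, dtype=torch.long)
for _ in range(5):
    step_logits, h = dec(tok, h)
    tok = step_logits.argmax(-1)                # embedded again on the next call
```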
Other applications, adapting the machine translation technique:
Lebret et al. 2016, "Neural Text Generation from Structured Data with Application to the Biography Domain"
Input = fact tables (Wikipedia infoboxes), with a fact-table2vec embedding
Output = biographical sentences
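For intuition, a toy fact table and one possible (field, word) serialization a table-to-text encoder might consume; the data and the exact format here are illustrative, and the serialization in Lebret et al. 2016 differs in detail:

```python
# Hypothetical infobox; values are made up for illustration.
infobox = {
    "name": "Ada Lovelace",
    "born": "1815",
    "occupation": "mathematician",
}

def linearize_table(table):
    """Flatten a fact table into field-tagged word tokens for a seq2seq encoder."""
    tokens = []
    for field, value in table.items():
        tokens += [f"{field}:{w}" for w in value.split()]
    return tokens

print(linearize_table(infobox))
# ['name:Ada', 'name:Lovelace', 'born:1815', 'occupation:mathematician']
# Target output would be a biographical sentence, e.g.
# "Ada Lovelace (born 1815) was a mathematician."
```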
Vinyals et al. 2015, "Grammar as a Foreign Language"
Convert the parse tree into a sequence, "linearizing the structure" (see the sketch after this list)
GO token = a special "generate output" input symbol that signals the decoder to start producing the output
Also used for Named Entity Recognition (NER), Semantic Role Labeling, and other tasks
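A sketch of the tree linearization idea via depth-first traversal with labeled brackets; the bracket format, and keeping the leaf words in the output (the paper emits only the tree structure, with the sentence as input), are assumptions for readability:

```python
from typing import Union

Tree = Union[str, tuple]  # a leaf word, or (label, child, child, ...)

def linearize_tree(t: Tree) -> list:
    """Depth-first traversal that emits open/close brackets for nonterminals."""
    if isinstance(t, str):
        return [t]
    label, *children = t
    out = [f"({label}"]
    for c in children:
        out += linearize_tree(c)
    return out + [f"){label}"]

tree = ("S", ("NP", "John"), ("VP", "has", ("NP", "a", "dog")))
print(" ".join(linearize_tree(tree)))
# (S (NP John )NP (VP has (NP a dog )NP )VP )S
```

Once the tree is a flat token sequence like this, parsing reduces to ordinary sequence-to-sequence prediction, which is why the same machinery transfers to NER, Semantic Role Labeling, and other tasks whose outputs can be linearized.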