In this lecture, we will see how to generate text from a neural language model (we will use RNNs in our discussion, but much of what we say applies to other NN architectures). We will consider sequence-to-sequence tasks (e.g., machine translation) and introduce a basic form of encoder-decoder models for seq2seq. We will also spend some time discussing the evaluation of text generation systems (e.g., BLEU).
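To make the evaluation part concrete, here is a minimal sketch of sentence-level BLEU: clipped ("modified") n-gram precision combined with a brevity penalty. This is an illustrative simplification written for this page, not the exact formulation used in the lecture or the readings (real BLEU is usually corpus-level, uses up to 4-grams, and applies smoothing).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=2):
    """Illustrative sentence-level BLEU with modified n-gram
    precision and a brevity penalty, using n-grams up to max_n."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each candidate n-gram count by its count in the
        # reference: this is the "modified" precision.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision drives the geometric mean to zero
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: punish candidates shorter than the reference.
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(candidate))
    # Geometric mean of the n-gram precisions, scaled by the penalty.
    return bp * math.exp(sum(log_precisions) / max_n)
```

For example, a candidate that is a correct but truncated prefix of the reference gets perfect n-gram precisions, and only the brevity penalty lowers its score.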
The folder contains slides, required reading and a quiz.
Slides and reading
(The recorded video contains animations which are not visible in pdf)
Recommended reading: Jurafsky and Martin, 3rd edition (online), section 9.7. Also, study chapter 13 on machine translation (sections 13.3, 13.4 and 13.6 are especially relevant).
Optionally, study the language modelling and seq2seq sections of Lena Voita's NLP course:
Quiz 25: Text generation
These questions are designed to test your understanding of the above course content; the quiz does not contribute to your overall grade. Some questions require a text answer; you can ask your tutor, or post on Piazza, for formative feedback on these. The remaining questions are multiple choice or require a numeric answer, and you will get immediate feedback for these. Please don't attempt this quiz until you have acquainted yourself with the lecture and the required reading.
You must be logged onto Learn to do this quiz.