In this lecture, we will introduce transfer learning. Transfer learning is a fundamental idea in modern NLP: much of the recent progress in our field can be attributed to developing new ways of extracting knowledge from large collections of text and transferring that knowledge to specific tasks. We will look at the evolution of this idea over the years, from word embeddings to the use of large pretrained models with only minimal (or no) fine-tuning on the target tasks.
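To make the idea concrete, here is a minimal sketch of the earliest form of transfer learning mentioned above: reusing pretrained word embeddings as frozen features for a new task. Everything here is a toy assumption — the vocabulary, the "pretrained" vectors (random stand-ins for word2vec/GloVe-style embeddings), and the tiny sentiment dataset are invented for illustration; only a small classifier head is trained on the target task.

```python
import numpy as np

# Toy stand-in for pretrained word vectors. In real transfer learning these
# would come from word2vec, GloVe, or a pretrained transformer; here they are
# random vectors, invented purely for illustration.
rng = np.random.default_rng(0)
vocab = {"good": 0, "great": 1, "bad": 2, "awful": 3, "movie": 4}
pretrained = rng.normal(size=(len(vocab), 8))  # frozen "pretrained" embeddings


def featurize(text):
    """Average the frozen embeddings of known words: the transferred knowledge."""
    ids = [vocab[w] for w in text.split() if w in vocab]
    return pretrained[ids].mean(axis=0)


# Tiny invented target task: sentiment classification (1 = positive).
data = [("good movie", 1), ("great movie", 1), ("bad movie", 0), ("awful movie", 0)]
X = np.stack([featurize(text) for text, _ in data])
y = np.array([label for _, label in data], dtype=float)

# Train only a small logistic-regression head on top of the frozen features.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
    grad = p - y                            # cross-entropy gradient
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())
```

Later lectures replace the frozen averaged embeddings with contextual representations from large pretrained models, but the recipe — pretrain on raw text, then adapt a small part of the model to the target task — is the same.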
This folder contains slides, the required reading and a quiz.
Slides and reading
Chapter 11 in Jurafsky and Martin (3rd edition, online).
Optionally, also study the language modelling and seq2seq sections of Lena Voita's NLP course:
Quiz 28: Transfer Learning
These questions are designed to test your understanding of the above course content; doing this quiz does not contribute to your overall grade. Some questions require a text answer; you can ask for formative feedback on these from your tutor or on Piazza. Other questions are multiple choice or require a numeric answer, and you will get immediate feedback on these. Please don't attempt this quiz until you have familiarized yourself with the lecture and the required reading.
You must be logged in to Learn to do this quiz.