This page introduces you to representation learning and neural network (NN) classification methods for NLP. It begins with word embeddings, which are at the core of virtually every modern NN method in NLP. We then discuss classification models, starting with a recap of logistic regression before introducing more powerful neural classification methods (e.g., those relying on recurrent neural networks).
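The core idea behind word embeddings is that each word is represented as a dense vector, and semantically related words end up with similar vectors. A minimal sketch (with tiny, made-up 3-dimensional vectors purely for illustration; real embeddings are learned from data and typically have hundreds of dimensions) compares words via cosine similarity:

```python
import math

# Toy word embeddings with hypothetical values, for illustration only;
# real embeddings are learned (e.g. with word2vec or as part of an NN).
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return dot / (norm(u) * norm(v))

# Semantically related words should score higher than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # high similarity
print(cosine(embeddings["king"], embeddings["apple"]))  # lower similarity
```

The required reading covers how such vectors are actually learned and how cosine similarity is used to probe them.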
The page contains slides, a quiz, and required reading.
Recommended Reading and Slides
(Note: the lecture includes an animation which will be visible in the recording but not in the slides)
Recommended reading: Jurafsky and Martin, 3rd edition (online), Sections 6.8, 9.2, and 9.4.1-9.4.2.
Optionally, study the word embeddings and text classification sections of Lena Voita's NLP course.
Quiz 22: Word Embeddings
These questions are designed to test your understanding of the course content above; this quiz does not contribute to your overall grade. Some questions require a text answer, on which you can request formative feedback from your tutor or on Piazza. The remaining questions are multiple choice or require a numeric answer, and you will receive immediate feedback on those. Please don't attempt this quiz until you have worked through the lecture and the required reading.
You must be logged onto Learn to do this quiz.