This page introduces distributional semantics (DS). DS defines similarity between words (and other linguistic units) based on their distributional properties in large corpora; in simpler terms, it treats words that appear in similar sets of contexts as semantically similar. We will review the underlying assumptions, their power and limitations, and introduce different frameworks for instantiating DS. Our discussion of word embeddings will also provide a bridge to the final topic of the FNLP class -- neural network models -- which will be covered next week.
This folder consists of:
- slides of the lecture
- some required reading from Jurafsky and Martin
- a quiz that tests your understanding of the material presented here.
Please do the required reading and attempt the quiz. If there is anything you don't understand, please ask questions in the lecture or on Piazza.
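The core idea above -- that words occurring in similar sets of contexts are semantically similar -- can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (a toy corpus and simple window-based counts, not the methods from the lecture or reading): build a co-occurrence count vector for each word, then compare words with cosine similarity.

```python
from collections import Counter
import math

# Toy corpus (hypothetical, for illustration only).
corpus = [
    "the cat drinks milk".split(),
    "the dog drinks water".split(),
    "the cat chases the dog".split(),
    "a car needs fuel".split(),
    "a truck needs fuel".split(),
]

def cooccurrence_vectors(sentences, window=2):
    """For each word, count the words appearing within +/- `window` tokens."""
    vectors = {}
    for sent in sentences:
        for i, word in enumerate(sent):
            ctx = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    ctx[sent[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts ("the", "drinks", "chases"),
# while "cat" and "truck" share none, so the first score is higher.
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["truck"]))
```

Real systems replace raw counts with weighted measures (e.g. PPMI or tf-idf) and dense embeddings, as covered in the required reading, but the same "similar contexts, similar meaning" logic drives them all.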
Slides and required reading
Readings: Sections 6.1-6.7 of J&M's online 3rd edition
https://web.stanford.edu/~jurafsky/slp3/6.pdf
Quiz 21: Distributional Semantics
These questions are designed to test your understanding of the course content above; doing this quiz does not contribute to your overall grade. Some questions require a text answer, and you can ask your tutor for formative feedback on these, or post on Piazza. Other questions are multiple choice or require a numeric answer; you will get immediate feedback on these. Please don't attempt this quiz until you have acquainted yourself with the lecture and the required reading.
You must be logged onto Learn to do this quiz.