COS 484: Natural Language Processing
[Information]   [Schedule]   [Coursework]   [FAQ]

What is this course about?

Recent advances have ushered in exciting developments in natural language processing (NLP), resulting in systems that can translate text, answer questions and even hold spoken conversations with us. This course will introduce students to the basics of NLP, covering standard frameworks for dealing with natural language as well as algorithms and techniques to solve various NLP problems, including recent deep learning approaches. Topics covered include language modeling, representation learning, text classification, sequence tagging, machine translation, Transformers, and others.

Information

Course staff:

Time/location:

(All times are in EST.)

Grading

Prerequisites:

Reading:

There is no required textbook for this class, and you should be able to learn everything from the lectures and assignments. However, if you would like to pursue more advanced topics or get another perspective on the same material, here are some books (all of them can be read free online):
Schedule

The lecture schedule is tentative and subject to change. All assignments are due at 12pm EST before the Friday lecture.

Week 1: Fri (2/2)
  • Introduction to NLP
    Readings: 1. Advances in natural language processing; 2. Human Language Understanding & Reasoning
    Assignments: A1 out
  • n-gram language models
    Readings: J & M 3.1-3.5
Week 2: Fri (2/9)
  • Text classification
    Readings: Naive Bayes: J & M 4.1-4.6; Logistic regression: J & M 5.1-5.8
  • Word embeddings 1
    Readings: J & M 6.2-6.4, 6.6
Week 3: Fri (2/16)
  • Word embeddings 2
    Readings: 1. J & M 6.8, 6.10-6.12; 2. Efficient Estimation of Word Representations in Vector Space (original word2vec paper); 3. Distributed representations of words and phrases and their compositionality (negative sampling)
    Assignments: A1 due, A2 out
  • Sequence models 1
    Readings: 1. J & M 8.1-8.4; 2. Michael Collins's notes on HMMs
Week 4: Fri (2/23)
  • Sequence models 2
    Readings: 1. Michael Collins's notes on MEMMs and CRFs; 2. Michael Collins's notes on CRFs
  • Neural networks for NLP
    Readings: J & M 7.3-7.5
Week 5: Fri (3/1)
  • Recurrent neural networks 1
    Readings: 1. J & M 9.1-9.3; 2. The Unreasonable Effectiveness of Recurrent Neural Networks
    Assignments: A2 due
  • Recurrent neural networks 2
    Readings: 1. J & M 9.5; 2. Understanding LSTM Networks; 3. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (GRUs); 4. Simple Recurrent Units for Highly Parallelizable Recurrence (SRUs)
Week 6: Fri (3/8)
  • Midterm
Week 7: Fri (3/15)
  • Spring Recess (no class)
    Assignments: A3 out
Week 8: Fri (3/22)
  • Machine translation
    Readings: 1. J & M 13.2; 2. Michael Collins's notes on IBM models 1 and 2; 3. Sequence to Sequence Learning with Neural Networks; 4. Machine Translation. From the Cold War to Deep Learning.
  • Seq2seq models + attention
    Readings: 1. Neural Machine Translation by Jointly Learning to Align and Translate; 2. Effective Approaches to Attention-based Neural Machine Translation; 3. Blog post: Visualizing A Neural Machine Translation Model; 4. Blog post: Sequence to Sequence (seq2seq) and Attention
Week 9: Fri (3/29)
  • Transformers 1
    Readings: 1. J & M 10.1; 2. Attention Is All You Need; 3. The Annotated Transformer; 4. The Illustrated Transformer
    Assignments: A3 due, A4 out
  • Transformers 2
    Readings: 1. Efficient Transformers: A Survey; 2. Vision Transformer
Week 10: Fri (4/5)
  • Contextualized representations and pre-training
    Readings: 1. Deep contextualized word representations (ELMo); 2. Improving Language Understanding by Generative Pre-Training (GPT); 3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding; 4. The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning)
    Assignments: Project proposals due
  • Large language models
    Readings: 1. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (ELECTRA); 2. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (T5); 3. Language Models are Few-Shot Learners (GPT-3); 4. Training language models to follow instructions with human feedback (InstructGPT); 5. GPT-4 Technical Report (GPT-4)
Week 11: Fri (4/12)
  • Part 1: Safe and accessible generative AI for all (Ameet Deshpande, Vishvak Murahari)
  • Part 2: Responsible LLM Development (Peter Henderson)
    Readings: 1. Toxicity in ChatGPT; 2. Bias runs deep: Implicit reasoning biases in Persona-assigned LLMs; 3. Anthropomorphization of AI: Opportunities and Risks
Week 12: Fri (4/19)
  • Part 1: Driving Progress in Language Modeling Through Better Benchmarking (SWE-bench + SWE-agent) (Ofir Press)
  • Part 2: Challenges in Human Preference Elicitation and Modeling (Tanya Goyal)
    Readings: InstructGPT
    Assignments: A4 due
Week 13: Fri (4/26)
  • No lecture (final project feedback sessions)
Tue (5/7): Dean's date
    Assignments: Final project report due
Coursework

Assignments

All assignments are due at 12pm before the Friday lecture. You have 96 free late hours (roughly 4 days) in total across all assignments. Once you have used up your free late hours, late submissions incur a penalty of 10% per day, up to a maximum of 3 days, beyond which submissions will not be accepted. The only exception to this rule is a note from your Dean of Studies; in that case, you must notify the instructors on Ed via a private post. For students with a dean's note, the weight of the missed or penalized assignment is added to the midterm and the midterm score is scaled accordingly (this applies to homeworks 1 and 2). For example, if the excused portion is worth 2 points overall, the midterm becomes worth 27 points and your midterm score is multiplied by 27/25. Missing homework 3 or 4, which fall after the midterm, can only be compensated by arranging an oral exam on the pertinent material.
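To make the rescaling arithmetic concrete, here is a minimal sketch in Python (the language used for the assignments); the function name rescale_midterm and the assumption that the midterm is originally worth 25 points (implied by the 27/25 example) are illustrative and not part of the official grading.

    # Illustrative sketch only, not official grading code.
    # Assumes the midterm is originally worth 25 points, as the 27/25 example implies.
    def rescale_midterm(midterm_score, midterm_points=25.0, excused_points=2.0):
        # The excused assignment's weight is folded into the midterm, so the
        # midterm's maximum grows and the raw score is scaled by the same ratio.
        new_max = midterm_points + excused_points          # e.g. 25 + 2 = 27
        return midterm_score * new_max / midterm_points    # i.e. score * 27/25

    # Example: a 20/25 midterm with a 2-point excused assignment counts as 21.6/27.
    print(rescale_midterm(20.0))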
Writeups: Homeworks should be written up clearly and succinctly; you may lose points if your answers are unclear or unnecessarily complicated. Using LaTeX is recommended (here's a template), but not a requirement. If you've never used LaTeX before, refer to this introductory guide on Working with LaTeX to get started. Handwritten assignments must be scanned and uploaded as a PDF.
Programming: For each assignment, we provide a Google Colab file with the programming questions included. You'll need to make a copy of this file, fill in the necessary parts, run your code, and upload the code and results as a PDF file. You should also include the .ipynb file in your submission. If you've never used Google Colab before, refer to this introductory guide on Working with Google Colab to get started.
Collaboration policy and honor code: You are free to form study groups and discuss homeworks and projects. However, you must write up homeworks and code from scratch independently, and you must acknowledge in your submission all the students you discussed with. The following are considered to be honor code violations (in addition to the Princeton honor code):
  • Looking at the writeup or code of another student.
  • Showing your writeup or code to another student.
  • Discussing homework problems in such detail that your solution (writeup or code) is almost identical to another student's answer.
  • Uploading your writeup or code to a public repository (e.g. github, bitbucket, pastebin) so that it can be accessed by other students.
When debugging code together, you are only allowed to look at the input-output behavior of each other's programs (so you should write good test cases!). It is important to remember that even if you didn't copy but just gave another student your solution, you are still violating the honor code, so please be careful. If you feel like you made a mistake (it can happen, especially under time pressure!), please reach out to Karthik; the consequences will be much less severe than if we approach you.

Final Project

The final project offers you the chance to apply your newly acquired skills to an in-depth NLP application.

There are two options for the final project: (a) reproducing an ACL/NAACL/EMNLP 2021-2023 paper (encouraged); or (b) completing a research project (for this option, you need to discuss your proposal with the instructor and get prior approval). All final projects must be completed in teams of 3 students, so find your teammates early!
Deliverables: The final project is worth 35% of your course grade. The deliverables include:
Policy and honor code:
  • Final projects must be implemented in Python. You can use any deep learning framework, such as PyTorch or TensorFlow.
  • You are free to discuss ideas and implementation details with other teams. However, under no circumstances may you look at another team's code, or incorporate their code into your project.
  • Do not share your code publicly (e.g. in a public GitHub repo) until after the class has finished.

Submission

Electronic Submission: Assignments and the project proposal/paper are to be submitted as PDF files through Gradescope. If you need to sign up for a Gradescope account, please use your @princeton.edu email address. You can submit as many times as you'd like until the deadline: we will only grade the last submission. Submit early to make sure your submission uploads/runs properly on the Gradescope servers. If anything goes wrong, please ask a question on Ed or contact a TA. Do not email us your submission. Partial work is better than not submitting any work. For more detailed information on submitting your assignment solutions, see this guide on assignment submission logistics.

For assignments with a programming component, we may automatically sanity check your code with some basic test cases, but we will grade your code on additional test cases. Important: passing the basic test cases by no means guarantees full credit on the hidden test cases, so you should test your program more thoroughly yourself!
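As a rough illustration of what "testing more thoroughly" can look like, here is a small sketch; the tokenize function and its expected outputs are hypothetical and not taken from any actual assignment.

    # Hypothetical example: extra edge-case tests on top of a provided-style sanity check.
    # The tokenize function and expected outputs are made up for illustration only.
    def tokenize(text):
        return text.lower().split()

    # Provided-style basic sanity check
    assert tokenize("Hello world") == ["hello", "world"]

    # Your own additional edge cases: empty input, extra whitespace
    assert tokenize("") == []
    assert tokenize("  spaced   out  ") == ["spaced", "out"]
    print("All tests passed.")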

Regrades: If you believe that the course staff made an objective error in grading, then you may submit a regrade request. Remember that even if the grading seems harsh to you, the same rubric was used for everyone for fairness, so this is not sufficient justification for a regrade. It is also helpful to cross-check your answer against the released solutions. If you still choose to submit a regrade request, click the corresponding question on Gradescope, then click the "Request Regrade" button at the bottom. Any requests submitted over email or in person will be ignored. Regrade requests for a particular assignment are due one week after the grades are returned. Note that we may regrade your entire submission, so depending on your submission you may actually lose more points than you gain.

FAQ