COS 484: Natural Language Processing
Spring 2026

What is this course about?

Recent advances have ushered in exciting developments in natural language processing (NLP), resulting in systems that can translate text, answer questions and even hold spoken conversations with us. This course will introduce students to the basics of NLP, covering standard frameworks for dealing with natural language as well as algorithms and techniques to solve various NLP problems, including recent deep learning approaches. Topics covered include language modeling, representation learning, text classification, sequence tagging, machine translation, Transformers, and others.
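To give a concrete flavor of the first topic, here is a minimal sketch of a bigram language model with add-one smoothing (an illustrative toy written for this page, not course-provided code; the corpus and function names are invented):

    from collections import Counter

    def train_bigram_lm(sentences):
        """Count unigrams and bigrams over tokenized sentences."""
        unigrams, bigrams = Counter(), Counter()
        for tokens in sentences:
            padded = ["<s>"] + tokens + ["</s>"]
            unigrams.update(padded)
            bigrams.update(zip(padded, padded[1:]))
        return unigrams, bigrams

    def bigram_prob(prev, word, unigrams, bigrams):
        """P(word | prev) with add-one (Laplace) smoothing."""
        vocab_size = len(unigrams)
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    # Toy corpus: two tokenized sentences.
    corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
    uni, bi = train_bigram_lm(corpus)
    print(bigram_prob("the", "cat", uni, bi))  # seen bigram: 2/8 = 0.25
    print(bigram_prob("cat", "dog", uni, bi))  # unseen bigram: 1/7, still nonzero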

Information

Course staff:

Time/location:

(All times are in EST)

Office hours for instructors and TAs are listed in the Google Calendar below.

Grading

Prerequisites:

Reading:

There is no required textbook for this class, and you should be able to learn everything from the lectures and assignments. However, if you would like to pursue more advanced topics or get another perspective on the same material, here are some books (all of which can be read for free online):

Schedule

The lecture schedule is tentative and subject to change. All assignments are due at 11:59pm EST on Tuesdays; check the schedule below for exact dates. Assignment milestones are shown in square brackets.

Week 1
  • Tue (1/27): Introduction + Language Models
    Readings: 1. Advances in Natural Language Processing; 2. Human Language Understanding & Reasoning; 3. J&M 3.1-3.5
  • Thu (1/29): Text classification
    Readings: Naive Bayes: J&M 4.1-4.6; Logistic regression: J&M 5.1-5.8
  • Fri (1/30): Precept 1

Week 2
  • Tue (2/3): Word Embeddings [A1 out]
    Readings: 1. J&M 6.2-6.4, 6.6, 6.8, 6.10-6.12; 2. Efficient Estimation of Word Representations in Vector Space (word2vec); 3. Distributed Representations of Words and Phrases and their Compositionality (negative sampling)
  • Thu (2/5): Neural Networks for NLP
    Readings: J&M 7.3-7.5
  • Fri (2/6): Precept 2

Week 3
  • Tue (2/10): Sequence Models
    Readings: 1. J&M 17.1-17.4; 2. Michael Collins's notes on HMMs; 3. Michael Collins's notes on MEMMs and CRFs
  • Thu (2/12): Recurrent neural networks
    Readings: 1. J&M 8.1-8.5; 2. The Unreasonable Effectiveness of Recurrent Neural Networks; 3. Understanding LSTM Networks; 4. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (GRUs)
  • Fri (2/13): Precept 3 [A2 out]

Week 4
  • Tue (2/17): Seq2Seq models + Attention [A1 due]
    Readings: 1. Sequence to Sequence Learning with Neural Networks; 2. Neural Machine Translation by Jointly Learning to Align and Translate; 3. Effective Approaches to Attention-based Neural Machine Translation; 4. Blog post: Visualizing A Neural Machine Translation Model
  • Thu (2/19): Transformers 1
    Readings: 1. J&M 9.1-9.4; 2. Attention Is All You Need; 3. The Annotated Transformer; 4. The Illustrated Transformer
  • Fri (2/20): Precept 4

Week 5
  • Tue (2/24): Transformers 2
    Readings: 1. Efficient Transformers: A Survey; 2. Vision Transformer
  • Thu (2/26): Contextualized representations + Pre-training
    Readings: 1. Deep Contextualized Word Representations (ELMo); 2. Improving Language Understanding by Generative Pre-Training (GPT); 3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding; 4. The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning)
  • Fri (2/27): Precept 5

Week 6
  • Tue (3/3): Midterm review [A2 due]
  • Thu (3/5): Midterm (in person)
  • Fri (3/6): No precept [A3 out]

Week 7
  • Tue (3/10): Spring break (no class)
  • Thu (3/12): Spring break (no class)

Week 8
  • Tue (3/17): LLMs 1 (pre-training, GPT-3, few-shot learning)
    Readings: 1. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (T5); 2. Language Models are Few-Shot Learners (GPT-3); 3. GPT-4 Technical Report (GPT-4)
  • Thu (3/19): LLM adaptation (prompting, CoT, LoRA)
    Readings: 1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models; 2. LoRA: Low-Rank Adaptation of Large Language Models
  • Fri (3/20): Precept 6 [Project proposals due]

Week 9
  • Tue (3/24): LLM post-training
    Readings: 1. Training Language Models to Follow Instructions with Human Feedback (InstructGPT); 2. Direct Preference Optimization: Your Language Model is Secretly a Reward Model; 3. SimPO: Simple Preference Optimization with a Reference-Free Reward
  • Thu (3/26): Agents
    Readings: 1. SWE-bench: Can Language Models Resolve Real-World GitHub Issues?; 2. SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering
  • Fri (3/27): Precept 7 [A4 out]

Week 10
  • Tue (3/31): Systems for LLM training & inference 1 [A3 due]
    Readings: 1. All the Transformer Math You Need to Know; 2. All About Transformer Inference; 3. The Ultra-Scale Playbook: Training LLMs on GPU Clusters
  • Thu (4/2): Systems for LLM training & inference 2
  • Fri (4/3): Precept 8

Week 11
  • Tue (4/7): Evals + data
  • Thu (4/9): Reasoning
  • Fri (4/10): Precept 9

Week 12
  • Tue (4/14): Guest lecture: TBD [A4 due]
  • Thu (4/16): Guest lecture: TBD
  • Fri (4/17): Precept 10

Week 13
  • Tue (4/21): Guest lecture / Project meetings
  • Thu (4/23): Guest lecture / Project meetings

Week 14
  • Tue (4/28): Reading period
  • Thu (5/8): Project presentations
  • Mon (5/12): Dean's date [Final project report due]
Coursework

Assignments

All assignments are due at 11:59pm EST on Tuesdays. We allow each student up to 4 free late days to use at any time during the semester, with at most 3 late days per assignment; no assignment will be accepted more than 3 days after the deadline. We trust you to use these late days responsibly to help mitigate special circumstances. Late time is rounded up to a full day (e.g., submitting 4 hours late costs 1 late day). Once you run out of late days, each additional late day incurs a fixed 10% deduction on the given assignment (e.g., 10 points off a 100-point assignment). For extenuating circumstances (e.g., those that involve a Dean), please make sure to email the Dean and the course instructors. For students with a Dean's note, the weight of the missed/penalized assignment (for homeworks 0, 1, and 2) will be added to the midterm, and the midterm score will be scaled accordingly (e.g., if you are penalized 2 points overall, your midterm will be worth 27 and your score will be multiplied by 27/25). Missed homeworks 3 and 4, which fall after the midterm, can only be compensated for by arranging an oral exam on the pertinent material.
Writeups: Homeworks should be written up clearly and succinctly; you may lose points if your answers are unclear or unnecessarily complicated. Using LaTeX is recommended (here's a template), but not required. If you've never used LaTeX before, refer to this introductory guide on Working with LaTeX to get started. Hand-written assignments must be scanned and uploaded as a PDF.
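If you are new to LaTeX, a homework writeup can be as bare-bones as the following skeleton (a generic illustration, not the official template linked above):

    \documentclass{article}
    \usepackage{amsmath}  % for displayed equations

    \title{COS 484, Homework 1}
    \author{Your Name (NetID)}
    \date{\today}

    \begin{document}
    \maketitle

    \section*{Problem 1}
    Your answer here, e.g.
    \[
      P(w_i \mid w_{i-1}) = \frac{C(w_{i-1}, w_i)}{C(w_{i-1})}.
    \]

    \paragraph{Collaborators.} List the students you discussed the problems with.

    \end{document}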
Collaboration policy and honor code: You are free to form study groups and discuss homeworks and projects. However, you must write up homeworks and code from scratch independently, and you must acknowledge in your submission all the students you discussed with. The following are considered to be honor code violations (in addition to the Princeton honor code):
  • Looking at the writeup or code of another student.
  • Showing your writeup or code to another student.
  • Discussing homework problems in such detail that your solution (writeup or code) is almost identical to another student's answer.
  • Uploading your writeup or code (or released solutions) to a public repository (e.g. GitHub, Bitbucket, Pastebin) so that it can be accessed by other students.
When debugging code together, you are only allowed to look at the input-output behavior of each other's programs (so you should write good test cases!). It is important to remember that even if you didn't copy but just gave another student your solution, you are still violating the honor code, so please be careful. If you feel like you made a mistake (it can happen, especially under time pressure!), please reach out to the instructors; the consequences will be much less severe than if we approach you.
Large language model policy: You may not consult a Large Language Model (LLM) when working on the Theory portions of assignments. You may use coding assistants like GitHub Copilot/Cursor for the programming parts of the homework, but we require you to document your use of these assistants in your submission (details will be provided in the assignment). You may also use LLMs to help understand concepts and to study for exams (e.g., by looking up YouTube videos explaining CNNs). However, note that LLMs can generate incorrect text and are often equally confident whether they are right or wrong (analogous to the fact that not everything presented on YouTube is factually correct). Our course materials are the only standard against which exams and assessments will be evaluated.

Final Project

[Project guidelines are available here]

The final project offers you the chance to apply your newly acquired skills towards an in-depth NLP application. Students are required to complete the final project in teams of 3 students.

There are two options for the final project: (a) reproducing a paper from ACL/NAACL/EMNLP/COLM, or an NLP paper from ICLR/ICML/NeurIPS, published in the past 5 years (encouraged); or (b) completing a research project (for this option, you need to discuss your proposal and get prior approval from the instructor). Find your teammates early!
Deliverables: The final project is worth 35% of your course grade. The deliverables include:
Policy and honor code:
  • The final projects are required to be implemented in Python. You can use any deep learning framework, such as PyTorch or TensorFlow.
  • You are free to discuss ideas and implementation details with other teams. However, under no circumstances may you look at another team's code, or incorporate their code into your project.
  • Do not share your code publicly (e.g. in a public GitHub repo) until after the class has finished.

Submission

Electronic Submission: Assignments and the project proposal/paper are to be submitted as PDF files through Gradescope. If you need to sign up for a Gradescope account, please use your @princeton.edu email address. You can submit as many times as you'd like until the deadline; we will only grade the last submission. Submit early to make sure your submission uploads/runs properly on the Gradescope servers. If anything goes wrong, please ask a question on Ed or contact a TA; do not email us your submission. Partial work is better than not submitting any work. For more detailed information on submitting your assignment solutions, see this guide on assignment submission logistics.

For assignments with a programming component, we may automatically sanity-check your code with some basic test cases, but we will grade it on additional, hidden test cases. Important: passing the basic test cases by no means guarantees full credit on the hidden ones, so you should test your program more thoroughly yourself!
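For instance, suppose an assignment asked you to implement a whitespace tokenizer (a hypothetical example; the function name and test cases here are ours, not from any actual assignment). A few extra asserts covering edge cases go a long way:

    # Hypothetical toy: a whitespace tokenizer you might be asked to write.
    def tokenize(text):
        return text.split()

    # Basic "sanity" case, similar to what an autograder might check first.
    assert tokenize("the cat sat") == ["the", "cat", "sat"]
    # Edge cases the basic tests may not cover -- probe these yourself.
    assert tokenize("") == []                     # empty input
    assert tokenize("   ") == []                  # whitespace only
    assert tokenize("one\ttwo\nthree") == ["one", "two", "three"]  # tabs/newlines
    print("all edge-case checks passed")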

Regrades: If you believe that the course staff made an objective error in grading, then you may submit a regrade request. Remember that even if the grading seems harsh to you, the same rubric was used for everyone for fairness, so this is not sufficient justification for a regrade. It is also helpful to cross-check your answer against the released solutions. If you still choose to submit a regrade request, click the corresponding question on Gradescope, then click the "Request Regrade" button at the bottom. Any requests submitted over email or in person will be ignored. Regrade requests for a particular assignment are due one week after the grades are returned. Note that we may regrade your entire submission, so depending on your submission you may actually lose more points than you gain.
FAQ