DS 801: Advanced Optimization for Data Science, Spring 2024
MW 9:00 - 10:15 am, E2-2 #1122
Instructor: Dabeen Lee (E2-2 #2109)
Email: dabeenl [at] kaist [dot] ac [dot] kr
Teaching Assistant: Seoungbin Bae (Email: sbbae3 [at] gmail [dot] com)
Text: No required text, but the following materials are helpful.
Syllabus (pdf)
In today's fast-paced world driven by data, the ability to extract valuable insights and make informed decisions is more crucial than ever. Optimization, the process of finding the best solution among a set of alternatives, lies at the heart of this endeavor. From predicting customer behavior to optimizing supply chains, from designing machine learning models to solving complex decision-making problems, optimization techniques play a pivotal role in harnessing the power of data for practical applications. In this course, we will embark on a journey to explore the fundamental principles, algorithms, and applications of optimization in the context of data science. Through a blend of theory, practical examples, and hands-on exercises, we will equip ourselves with the necessary tools and techniques to tackle real-world optimization challenges in data-driven decision-making. There are no formal prerequisites, but basic knowledge of mathematical optimization and convex analysis will be assumed.
Lecture notes
- Mon 2/26: introduction (lecture note)
- Wed 2/28: convex optimization basics (lecture note)
- Mon 3/04: introduction to gradient descent (lecture note, code)
- Wed 3/06: gradient descent for smooth functions, adaptive gradient (AdaGrad) (lecture note)
- Mon 3/11: gradient descent for strongly convex functions, regularization (lecture note, code)
- Wed 3/13: proximal gradient and acceleration; ISTA and FISTA for LASSO (lecture note, code)
- Mon 3/18: stochastic gradient descent, binary classification (perceptron algorithm, SVM, logistic regression) (lecture note)
- Wed 3/20: coordinate descent, variance reduced stochastic methods (lecture note)
- Mon 3/25: introduction to nonconvex optimization (lecture note)
- Wed 3/27: singular value decomposition, the power method (lecture note)
- Mon 4/01: matrix completion, the Frank-Wolfe algorithm (lecture note)
- Wed 4/03: nonconvex landscape & finding stationary points (lecture note)
- Mon 4/08: algorithms for finding second-order stationary points (lecture note)
- Mon 4/22: Lagrangian duality & dual methods (lecture note)
- Wed 4/24: training neural networks and Lagrangian duality (lecture note)
- Mon 4/29: introduction to minimax optimization (applications, saddle points, gradient descent ascent) (lecture note)
- Wed 5/01: algorithms for minimax optimization (extra gradient, optimistic GDA, PPA) and variational inequalities (lecture note)
- Wed 5/08: generative adversarial networks (lecture note)
- Mon 5/13: Wasserstein GAN, adversarial training, and sharpness-aware minimization (lecture note)
- Mon 5/20: online convex optimization, online mirror descent (lecture note)
- Mon 5/27: bandit convex optimization (lecture note)
- Wed 5/29: optimistic optimization methods for black-box optimization (lecture note)
- Mon 6/03: black-box optimization via supervised learning (kernel ridge regression, optimizing over a trained neural network) (lecture note)
- Wed 6/05: Bayesian optimization via Gaussian process (lecture note)
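As a taste of the course's first algorithmic topic (gradient descent on smooth functions, covered 3/04–3/06), here is a minimal sketch of gradient descent on a least-squares objective f(x) = ½‖Ax − b‖². The problem data below are randomly generated for illustration and are not taken from the course materials; the step size 1/L uses the standard smoothness constant L = ‖AᵀA‖₂.

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    """Run plain gradient descent: x_{t+1} = x_t - step * grad(x_t)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Illustrative least-squares instance: minimize 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

grad = lambda x: A.T @ (A @ x - b)      # gradient of the objective
L = np.linalg.norm(A.T @ A, 2)          # smoothness constant (largest eigenvalue)
x_hat = gradient_descent(grad, np.zeros(5), 1.0 / L, 1000)

# Compare against the closed-form least-squares solution.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because this objective is also strongly convex (A has full column rank almost surely), the iterates converge linearly to the least-squares solution, matching the rate discussed in the 3/11 lecture.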
Assignments
- Assignment 1 (pdf)
- Assignment 2 (pdf)
- Assignment 3 (pdf)
- Assignment 4 (pdf)