Optimization theory is by now a well-developed field, in both its theoretical and practical aspects. This graduate course introduces the theory needed to solve optimization problems and illustrates its use with many recent applications in signal processing, communication systems, and machine learning.

Students attending this course will:

  • Develop a solid theoretical basis for solving optimization problems arising in research.

  • Distinguish between convex and nonconvex optimization problems.

  • Learn reformulations that unveil the hidden convexity of optimization problems, as well as relaxation techniques for treating nonconvex problems.

  • Be able to characterize the solutions of convex and nonconvex optimization problems, either analytically or algorithmically.

  • Learn to use some of the most popular optimization toolboxes.

Program

Unit 1. Introduction

  • Optimization problems and constraints

  • On closed-form optimization: analytical versus algorithmic solutions

  • Types of optimization problems

  • Applied linear algebra

Unit 2. Convex optimization

  • Convex sets and convex functions

  • Convex optimization problems

  • Disciplined convex programming and CVX (a minimal usage sketch follows this unit's topics)

  • Lagrange duality and KKT conditions
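To give a taste of the toolbox usage covered in this unit, here is a minimal disciplined-convex-programming sketch using CVXPY, the Python counterpart of CVX. The constrained least-squares problem and the data sizes are illustrative assumptions, not part of the course material:

```python
# Minimal disciplined convex programming sketch with CVXPY
# (the Python counterpart of CVX). Problem data are illustrative
# assumptions, not course material.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))  # assumed problem data
b = rng.standard_normal(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))  # convex objective
constraints = [x >= 0, cp.sum(x) == 1]              # convex constraints
problem = cp.Problem(objective, constraints)
problem.solve()                                     # invokes a convex solver

print("optimal value:", problem.value)
print("optimal x:", x.value)
```

Because the objective and constraints are built from CVXPY's library of recognized convex atoms, the toolbox can verify convexity automatically before handing the problem to a solver, which is the essence of disciplined convex programming.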

Unit 3. Optimization algorithms

  • Local optimization algorithms (a gradient-descent sketch follows this unit's topics)

  • Stochastic optimization

  • Global optimization

  • Integer programming and metaheuristics
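As a preview of the local optimization algorithms in this unit, the following sketch runs plain gradient descent on an assumed convex quadratic f(x) = (1/2) xᵀQx − bᵀx; the data and step size are illustrative choices, not course material:

```python
# Minimal local-optimization sketch: gradient descent on an assumed
# convex quadratic f(x) = 0.5 * x^T Q x - b^T x (illustrative data only).
import numpy as np

Q = np.array([[3.0, 0.5],
              [0.5, 2.0]])        # symmetric positive definite (assumed)
b = np.array([1.0, -1.0])

def grad(x):
    # Gradient of the quadratic: Q x - b
    return Q @ x - b

x = np.zeros(2)                   # starting point
step = 0.1                        # fixed step size, assumed below 2/L
for _ in range(200):
    x = x - step * grad(x)

print("approximate minimizer:", x)
print("exact minimizer:      ", np.linalg.solve(Q, b))  # Q^{-1} b
```

For this strongly convex quadratic the iterates converge to the unique minimizer Q⁻¹b; on nonconvex problems the same iteration only finds a local stationary point, which motivates the global optimization topic above.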

Unit 4. Applications

  • Optimization for machine learning

  • Matrix optimization: semidefinite programming (SDP)

  • Final project

Textbooks

  • S. Boyd and L. Vandenberghe, “Convex Optimization,” Cambridge University Press, 2004.
    Available online at http://www.stanford.edu/~boyd/cvxbook/

  • A. Zhang, Z.C. Lipton, M. Li and A.J. Smola, “Dive into Deep Learning,” 2019.
    Available online at https://d2l.ai