The field of optimization studies the maximization and minimization of mathematical functions. Very often the arguments of (i.e., the variables in) these functions are subject to side conditions or constraints. By virtue of its great utility in such diverse areas as applied science, engineering, economics, finance, medicine, and statistics, optimization holds an important place in both the practical and the scientific world.
Following the prerequisite course on linear optimization, this course will focus on nonlinear optimization, also called nonlinear programming, where the function to be optimized and/or the constraints on the decision variables are nonlinear. Although nonlinear programming problems are generally difficult to solve in practice, effective algorithms now exist for particular cases such as linear or convex programming, and understanding the theory of nonlinear programming often allows one to formulate a problem so that it can be solved efficiently.
The goal of this course is to help the student develop a working knowledge of nonlinear programming, in particular convex optimization, i.e., to develop the skills and background needed to formulate, analyze and solve nonlinear problems. We will start with the main theoretical results of nonlinear programming, concentrating on results that are useful in computation. This will include optimality conditions, duality and sensitivity analysis. We will then study the main algorithms for solving optimization problems, including descent methods and interior-point methods. We will finally illustrate the power of nonlinear optimization on several examples of interest to the participants, including practical implementation in MATLAB.
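As a taste of the descent methods covered later in the course, the following is a minimal MATLAB sketch of gradient descent with a backtracking line search on a small quadratic; the objective, starting point, and parameter values below are illustrative choices only and are not part of the course material.

% Sketch of gradient descent with backtracking line search
% (illustrative only; objective and parameters are made up).
f    = @(x) 0.5*x'*[2 0; 0 10]*x;      % a simple quadratic objective
grad = @(x) [2 0; 0 10]*x;             % its gradient
x = [1; 1];                            % starting point
alpha = 0.3; beta = 0.5;               % backtracking parameters
for k = 1:100
    d = -grad(x);                      % steepest-descent direction
    if norm(d) < 1e-8, break; end      % stop when the gradient is small
    t = 1;                             % backtracking line search
    while f(x + t*d) > f(x) + alpha*t*grad(x)'*d
        t = beta*t;
    end
    x = x + t*d;                       % take the damped step
end
disp(x)

The condition tested in the while loop is the standard sufficient-decrease (Armijo) condition.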
Date | Topic | Homework |
May 9 | Introduction, problem formulation | homework 1 |
May 16 | Optimality conditions (1) | homework 2 |
May 23 | Optimality conditions (2) | homework 3 |
May 25 | Optimality conditions (3) / Duality theory (1) | |
May 30 | Duality theory (2) | homework 4 |
June 6 | Duality theory (3) / Algorithms for unconstrained problems (1) | homework 5 |
June 8 | Algorithms for unconstrained problems (2) | homework 6 |
June 13 | Algorithms for constrained problems (1) | homework 7 |
June 15 | Algorithms for constrained problems (2) / Applications (1) | homework 8 |
June 27 | Discrete optimization / Applications (2) | Final exam due |
>> [x,f] = minimize_newton([-1,5]', @fun_rosenbrock)
+++++++
x =
    1.0000
    1.0000
f =
   3.2981e-25
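For reference, the call above assumes an objective script returning the value, gradient, and Hessian of the Rosenbrock function; the sketch below (saved as fun_rosenbrock.m) shows one possible implementation, though the actual course script may use a different interface.

function [f, g, H] = fun_rosenbrock(x)
% Rosenbrock function f(x) = 100*(x2 - x1^2)^2 + (1 - x1)^2
% with its gradient and Hessian, as a Newton-type solver needs.
% (Sketch only; the interface of the actual course script may differ.)
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
g = [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
      200*(x(2) - x(1)^2)];
H = [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);
     -400*x(1),                   200];
end

Its unique minimizer is (1, 1) with optimal value 0, which matches the output shown above.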
Below are some scripts used in the course:
Many more examples are available at http://www.stanford.edu/~boyd/cvx/examples/index.html.
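As an illustration of what those examples look like, the sketch below solves a box-constrained least-squares problem with the CVX toolbox; the problem data are generated randomly here and are not taken from the course.

% Box-constrained least squares in CVX (sketch; assumes CVX is installed).
m = 20; n = 10;
A = randn(m, n); b = randn(m, 1);   % random problem data for illustration
cvx_begin
    variable x(n)
    minimize( norm(A*x - b) )
    subject to
        x >= -1
        x <= 1
cvx_end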
There will be homework for each session, involving exercises and a little MATLAB programming. You are required to hand in all homework for a given week by the Tuesday of the following week. However, for every session you are expected to have solved, and to be ready to discuss, some of the homework problems (without necessarily handing in your solutions). The homework will count for 50% of your grade.
Beyond the homework problems, you have one more assignment: for session 10 you will prepare a presentation of a journal paper of your choice that uses material covered in the class. Details will be discussed during the class. This will count for 10% of your grade.
Finally, there will be a final exam that will count for 40% of your grade.