[AMSI header image]

A/Prof Regina Burachik
The University of South Australia


The need to optimize (understood as the search for a best option given the circumstances) is found everywhere, from the natural and social sciences to engineering, business and economics. Optimizing requires a model, a mathematical theory for solving the model, and a computer code to implement the theory. This course focuses on the mathematical aspects of optimization. First, it presents the basic tools of convex analysis: convex functions and separation theorems. Then it considers optimization problems, both convex (non-differentiable) and differentiable ones. We will analyse the convergence properties of classical solution methods, such as steepest descent, Newton's method, and their variants, for unconstrained problems. Finally, we will study penalty, barrier, and exact penalty methods for constrained problems.

The course, week by week (taking into account that one 90-minute lecture per week is a tutorial), is as follows.

Week 1: Introduction to Optimization problems: classification and examples. Elements of convex analysis: convex sets and convex functions, differentiability properties of convex functions, one-sided directional derivatives, epigraphs and level sets.
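For orientation, the central definition from the Week 1 material can be stated as follows (standard statement; notation is not taken from the lecture notes): a function f : ℝⁿ → ℝ is convex when

```latex
f(\lambda x + (1-\lambda) y) \;\le\; \lambda f(x) + (1-\lambda) f(y)
\qquad \text{for all } x, y \in \mathbb{R}^n,\ \lambda \in [0,1],
```

equivalently, when its epigraph \(\operatorname{epi} f = \{(x,t) \in \mathbb{R}^n \times \mathbb{R} : f(x) \le t\}\) is a convex set, which is what links the "convex functions" and "epigraphs" items above.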

Week 2: Separation results for convex sets, topological properties of convex sets, subgradients of convex functions. Existence of solutions of optimization problems: boundedness and coerciveness. First and second order optimality conditions: Unconstrained case. Constrained case: Equality and inequality constraints for differentiable problems.
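For the constrained, differentiable case mentioned above, the classical first-order (Karush–Kuhn–Tucker) conditions for minimising f subject to gᵢ(x) ≤ 0 and hⱼ(x) = 0 read (standard statement; the notation is illustrative, not drawn from the lecture notes):

```latex
\nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0,
\qquad \mu_i \ge 0, \qquad \mu_i \, g_i(x^*) = 0 .
```

The complementarity condition \(\mu_i\, g_i(x^*) = 0\) says a multiplier can be nonzero only for constraints active at \(x^*\).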

Week 3: Sensitivity analysis for unconstrained optimization. Optimality conditions for convex (non-differentiable) problems. Maximum of a convex function.
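A glimpse of the non-differentiable theory covered here (standard example, not drawn from the lecture notes): a point x* minimises a convex f exactly when 0 lies in the subdifferential ∂f(x*). For f(x) = |x|,

```latex
0 \in \partial f(x^*) \iff f(x^*) \le f(x) \ \text{for all } x;
\qquad \partial\,|\cdot|\,(0) = [-1, 1] \ni 0,
```

so x* = 0 is the minimiser even though f is not differentiable there.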

Week 4: Methods for Unconstrained Optimization: Newton method and its variants, and their convergence analysis. Cauchy and Armijo variants of steepest descent and convergence analysis of descent type methods. Methods for Constrained Optimization: Penalty Methods, Exact penalty methods and Lagrange multipliers, Barrier methods.
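To make the Week 4 material concrete, here is a minimal sketch of steepest descent with an Armijo backtracking line search, applied to a convex quadratic. It is an illustration of the standard algorithm, not code from the course; the parameter names (`sigma`, `beta`) and the test problem are my own choices.

```python
import numpy as np

def armijo_steepest_descent(f, grad, x0, sigma=1e-4, beta=0.5,
                            tol=1e-8, max_iter=1000):
    """Steepest descent with the Armijo (backtracking) step-size rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stop when the gradient is small
            break
        t = 1.0
        # Armijo condition: require sufficient decrease along direction -g.
        while f(x - t * g) > f(x) - sigma * t * (g @ g):
            t *= beta                        # shrink the step until it holds
        x = x - t * g
    return x

# Illustrative problem: minimise f(x) = 0.5 x^T A x - b^T x (A positive definite),
# whose unique minimiser solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_star = armijo_steepest_descent(f, grad, np.zeros(2))
```

Because the objective is a strongly convex quadratic, the iterates converge linearly to the solution of A x = b, which is the kind of convergence result analysed in this part of the course.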

Contact hours

There will be 5 lectures of 90 minutes' duration in weeks 1, 2, and 4, and 4 lectures in week 3. One lecture per week is a tutorial. Consultation hours are available on request.


Prerequisites

Students are expected to be familiar with multidimensional calculus (gradients, Hessians, partial derivatives), limits of sequences in ℝⁿ, matrices and linear algebra. Elements of topology in ℝⁿ are highly desirable. All prerequisite material is provided in the Appendix of the lecture notes.


Resources

The course is based on the Lecture Notes on Optimization by R.S. Burachik, provided to the students.

Additional References:

  1. Avriel, M., Nonlinear Programming: Analysis and Methods, Prentice-Hall, 1976.
  2. Bazaraa, M. S., Sherali, H. D., and Shetty, C. M., Nonlinear Programming: Theory and Algorithms, 2nd Edition, John Wiley & Sons, 1993.
  3. Bertsekas, D., Nonlinear Programming, Athena Scientific, 1999.
  4. Bertsekas, D., Convex Analysis and Optimization, Athena Scientific, 2003.
  5. Clarke, F. H., Optimization and Nonsmooth Analysis, SIAM, 1999.
  6. Dennis, J. E. and Schnabel, R. B., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, 1983.
  7. Luenberger, D. G., Linear and Nonlinear Programming, 2nd Edition, Addison-Wesley, 1989.
  8. Nocedal, J. and Wright, S., Numerical Optimization, Springer, 1999.

About Regina Burachik

Regina Burachik is an Associate Professor at the University of South Australia, and her current research interests include nonsmooth, convex, and nonconvex optimization. Her PhD thesis introduced and analysed solution methods for variational inequalities, the latter being a generalization of the convex constrained optimization problem. She recently co-authored the Springer book "Set-Valued Analysis and Monotone Mappings". Regina enjoys learning, teaching, supervising and collaborating with others. Her main and only reason for doing math is pleasure.