On January 30, 2025, OPTIMAL researcher Lara Scavuzzo Montana successfully defended her dissertation. Read more about her research project below.

Lara Scavuzzo Montana

Towards smarter MILP solvers: A data-driven approach to branch-and-bound

Defense date: 30-01-2025

Cum Laude

Summary:

The technology available for solving Mixed Integer Linear Programs (MILPs) has improved dramatically over the past two decades. Pushing this algorithmic progress further is essential for solving the even more complex optimization problems that arise in practice. This thesis examines various methods to enhance Branch-and-Bound (B&B)-based MILP solvers, focusing on areas such as branching and Machine Learning (ML)-assisted rules. Through our analysis of current methodologies and the introduction of novel techniques, this thesis contributes to the development of more efficient and adaptive MILP solvers. We divide the discussion into four chapters, preceded by an introduction to the topics of this thesis (Chapter 1).
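To fix ideas, the following is a minimal sketch of the B&B loop for a minimization MILP, with LP relaxations solved via scipy. The depth-first node selection and "most fractional" branching rule are deliberately simplistic stand-ins for the far more sophisticated strategies the thesis studies.

```python
# A minimal Branch-and-Bound sketch for min c @ x s.t. A @ x <= b, x integer.
# Node order, branching rule, and tolerances are illustrative simplifications
# of what a production MILP solver does.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds, tol=1e-6):
    best_obj, best_x = math.inf, None
    stack = [bounds]                      # each node is a list of (lo, hi) bounds
    while stack:
        node_bounds = stack.pop()         # depth-first node selection
        res = linprog(c, A_ub=A, b_ub=b, bounds=node_bounds)
        if not res.success or res.fun >= best_obj:
            continue                      # prune: infeasible or dominated by incumbent
        frac = [(i, v) for i, v in enumerate(res.x)
                if min(v - math.floor(v), math.ceil(v) - v) > tol]
        if not frac:                      # integral solution: new incumbent
            best_obj, best_x = res.fun, res.x
            continue
        i, v = max(frac, key=lambda t: min(t[1] - math.floor(t[1]),
                                           math.ceil(t[1]) - t[1]))  # "most fractional" rule
        lo, hi = node_bounds[i]
        down = list(node_bounds); down[i] = (lo, math.floor(v))   # child with x_i <= floor(v)
        up = list(node_bounds);   up[i] = (math.ceil(v), hi)      # child with x_i >= ceil(v)
        stack += [down, up]
    return best_obj, best_x

# Example: min -x0 - x1 s.t. 2*x0 + x1 <= 4, x0 + 2*x1 <= 4, x integer and nonnegative.
print(branch_and_bound([-1, -1], [[2, 1], [1, 2]], [4, 4], [(0, None), (0, None)]))
```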

Chapter 2 explores the synergy between Machine Learning and B&B-based MILP solvers. In particular, we are interested in the case where these two technologies cooperate, enhancing rather than substituting each other. We survey the literature that falls into this category by first defining a number of abstract learning tasks where ML can play a key role. We point out methodological trends, both in how the learning tasks are framed and in how they are solved. Further, we highlight some core directions and choices in terms of problem representation, benchmarks, and software.
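One recurring way the surveyed literature frames such a learning task is imitation: log the decisions of an expensive expert rule (such as strong branching) at many B&B nodes, then train a model to reproduce them cheaply. The sketch below illustrates this framing only; the features, labels, and model are synthetic placeholders, not a specific method from the chapter.

```python
# Hedged sketch of one learning task surveyed in Chapter 2: imitating an
# expensive branching expert. Assumes we already logged, for each candidate
# variable at a node, a feature vector plus whether the expert chose it.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))               # placeholder per-candidate features
y = (X[:, 0] + X[:, 3] > 0).astype(int)      # placeholder "expert chose this" labels

model = GradientBoostingClassifier().fit(X, y)

def choose_branching_variable(candidate_features):
    """Score every candidate variable and imitate the expert's choice."""
    scores = model.predict_proba(candidate_features)[:, 1]
    return int(np.argmax(scores))            # branch on the highest-scoring candidate
```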

Chapter 3 dives into the problem of variable selection, i.e., the problem of choosing the variable to branch on. We start with a discussion of the current challenges faced by some popular branching methods, both classical and ML-based. We then propose a new formulation, the tree Markov Decision Process (tree MDP), which is a generalization of the well-known MDP framework. We show that the variable selection problem can be cast as a tree MDP, and that this allows us to learn a variable selection strategy more efficiently, without the need for expert demonstrations.
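The distinguishing feature of the tree MDP is that a single branching action spawns two successor states, so a node's return aggregates over its entire subtree rather than along one trajectory. The toy rollout below illustrates only this structural point, with a random stand-in for pruning instead of an actual MILP node; it is a schematic, not the chapter's learning algorithm.

```python
# Toy tree MDP rollout: a branching action at a node produces TWO child
# states, so a node's return (here, subtree size = number of B&B nodes)
# sums over both children, unlike the single-successor chain of a standard MDP.
import random

def rollout(depth_left, policy, rng):
    """Return the size of the subtree produced by following `policy`."""
    # Stand-in for pruning: a node is a leaf when its relaxation would be
    # integral or infeasible; here that happens at depth 0 or by chance.
    if depth_left == 0 or rng.random() < 0.3:
        return 1
    action = policy(depth_left)          # choose a branching variable (a stub here)
    left = rollout(depth_left - 1, policy, rng)
    right = rollout(depth_left - 1, policy, rng)
    return 1 + left + right              # return aggregates over the whole subtree

rng = random.Random(0)
print(rollout(6, policy=lambda d: 0, rng=rng))  # tree size for one rollout
```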

Chapter 4 also addresses branching, but from a broader perspective. We study lattice-based reformulations of the feasible set that transform the shape, and sometimes also the dimension, of the problem. The variables of the reformulation correspond to hyperplanes in the original space and can hence be interpreted as a way to generate general branching directions for the original problem. We study the reformulations from different perspectives, trying to uncover why and when they are effective. Our results show that these techniques have wide applicability and high potential for speeding up the solution of difficult Integer Programs.
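The following sketch illustrates what branching on a general direction means: given an integer vector pi and a fractional LP solution x*, the two children are cut by pi·x <= floor(pi·x*) and pi·x >= ceil(pi·x*), with ordinary variable branching recovered as the special case pi = e_i. The directions below are hand-picked; in the lattice reformulations studied in the chapter they would come from a reduced basis of the kernel lattice of the constraint matrix.

```python
# Hedged illustration of general branching directions: split on an integer
# hyperplane pi @ x rather than a single variable. Values are toy examples.
import math
import numpy as np

def hyperplane_branch(pi, x_star):
    """Children get pi @ x <= floor(v) and pi @ x >= ceil(v), where v = pi @ x*."""
    v = float(np.dot(pi, x_star))
    if abs(v - round(v)) < 1e-9:
        return None                       # pi @ x* already integral: no split here
    return (pi, math.floor(v)), (pi, math.ceil(v))

x_star = np.array([1.5, 2.3])             # a fractional LP solution (toy values)
print(hyperplane_branch(np.array([1, 0]), x_star))   # classic variable branching (pi = e_0)
print(hyperplane_branch(np.array([3, -2]), x_star))  # a general branching direction
```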

Chapter 5 returns to the perspective of ML-enhanced MILP solvers. This time, we ask a simple question: can we predict the optimal objective value of an MILP? The answer to this question is of great relevance to MILP solving, with several sub-routines and solver strategies having the potential to benefit from such a prediction. We propose both a static and a dynamic method, which outperform the previously proposed models. These results open the door to more dynamically configurable solvers that automatically adapt their strategy as more information becomes available.
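As a purely illustrative analogue of this prediction task, the sketch below regresses an "optimal value" from instance features on synthetic data; the features, data, and model are placeholders and do not reproduce the static or dynamic methods proposed in the chapter.

```python
# Toy sketch of the Chapter 5 question: predict a MILP's optimal objective
# value from instance features. Everything below is synthetic illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Placeholder "static" features known before solving (e.g. instance sizes,
# densities, objective norms) plus a synthetic ground-truth optimal value.
X = rng.normal(size=(500, 6))
z_opt = 2.0 * X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=500)

predictor = GradientBoostingRegressor().fit(X[:400], z_opt[:400])
estimate = predictor.predict(X[400:])
print(float(np.mean(np.abs(estimate - z_opt[400:]))))  # held-out absolute error

# A solver could use such an estimate to, for instance, set an early objective
# cutoff or adapt node selection as the predicted gap shrinks -- the kind of
# dynamically configurable behavior the chapter's results point toward.
```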