CP 2024 impressions
In this post, I want to share some of my takeaways from the CP 2024 conference this week in Girona. While the event, as usual, was stacked with a variety of high-quality optimization talks, a few overarching themes stood out that I think are worth sharing.
Of course, condensing a week’s worth of technical content into a single post can never come close to completeness; instead, I will do my best to convey a general impression of the field as I see it, without diving into specific technical details. If you are in Delft or nearby, however, I would welcome you to the MOO session this Monday, September 9, where I will lead a more in-depth discussion of the same points.
Now, without further ado, let’s dive into the first point I want to make, which is…
Good old-fashioned AI is here to stay
One of the major takeaways from this year’s papers is that some of the lesser-known or under-recognized ideas from earlier days are worth re-exploring, if only because the computational resources we have now dwarf anything available in the heyday of GOFAI. As for specific examples, I cannot avoid mentioning the invited talk by Ian Gent (Gent 2024) on Solvitaire, software for evaluating the winnability of solitaire games, which narrowed the winnability range of Klondike to a fraction of a percentage point (Blake and Gent 2024). This approach achieves impressive computational results, but it does so without reaching for ideas outside of search: a fair summary would be “running a depth-first search, maintaining a transposition table, and applying dominance pruning rules.”
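To make that summary concrete, here is a minimal sketch of such a search loop: a depth-first search over game states with a transposition table and a dominance-pruning hook. This is emphatically not Solvitaire’s actual interface; the `successors`, `is_won`, and `is_dominated` callbacks are hypothetical placeholders, and the sketch assumes the pruned game graph is acyclic.

```python
def winnable(state, successors, is_won, is_dominated, table=None):
    """Depth-first search with a transposition table and dominance pruning.

    `state` must be hashable; `successors(state)` yields child states;
    `is_won(state)` detects a winning position; `is_dominated(state)`
    prunes positions that provably cannot improve on an explored line.
    """
    if table is None:
        table = {}                  # transposition table: state -> winnable?
    if state in table:              # position already proved won or lost
        return table[state]
    if is_won(state):
        return True
    result = False
    for child in successors(state):
        if is_dominated(child):     # dominance pruning: skip provably
            continue                # inferior lines of play
        if winnable(child, successors, is_won, is_dominated, table):
            result = True
            break
    table[state] = result           # memoize the verdict for transpositions
    return result
```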
Another direction worth mentioning here is domain-independent dynamic programming, introduced in a tutorial by Chris Beck. Dynamic programming is one of the foundational topics of the CS curriculum, yet most of the attention it receives goes to ad-hoc implementations of specific techniques. An approach that brings it closer to model-and-solve is certainly welcome!
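To give a flavor of what “model-and-solve” dynamic programming means, here is a generic illustration (my own toy, not the actual interface from the tutorial): the model is just data (a base-case test, base values, and a transition function), while the solver is a reusable memoized search.

```python
from functools import lru_cache

def solve(root, is_base, base_value, transitions):
    """Generic memoized DP solver: maximizes accumulated transition values."""
    @lru_cache(maxsize=None)
    def best(state):
        if is_base(state):
            return base_value(state)
        return max(value + best(succ) for value, succ in transitions(state))
    return best(root)

# Example model: 0/1 knapsack. State = (next item index, remaining capacity).
weights, values, capacity = [3, 4, 2], [30, 50, 15], 6

def transitions(state):
    i, cap = state
    yield 0, (i + 1, cap)                            # skip item i
    if weights[i] <= cap:
        yield values[i], (i + 1, cap - weights[i])   # take item i

print(solve((0, capacity),
            is_base=lambda s: s[0] == len(weights),  # all items decided
            base_value=lambda s: 0,
            transitions=transitions))                # -> 65 (items 1 and 2)
```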
For a more modern spin, I would like to mention an approach for improving bin packing propagation with GPUs (Tardivo, Michel, and Pontelli 2024). While the first association with GPU programming is surely linear algebra, it turns out you can also use it to parallelize the computation of different lower bounds, effectively getting many uncorrelated lower bounds for the price of one. I think using GPUs in search is yet another underexplored direction; looking forward to more!
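To see why this parallelizes so naturally, recall that the classic Martello-Toth L2 bound for bin packing is really a family of bounds, one per threshold parameter, each valid on its own; the final bound is their maximum, so every member of the family can be computed independently, one GPU thread per parameter. Below is a plain-Python sketch of that idea, not the implementation from the paper.

```python
from math import ceil

def l2_bound(sizes, capacity, alpha):
    """Martello-Toth L2 lower bound for a single threshold alpha.

    Items larger than capacity - alpha and items larger than half the
    capacity each need a bin of their own; the remaining items of size
    at least alpha must fit in the leftover slack or open new bins.
    """
    big = [s for s in sizes if s > capacity - alpha]
    half = [s for s in sizes if capacity / 2 < s <= capacity - alpha]
    small = [s for s in sizes if alpha <= s <= capacity / 2]
    slack = len(half) * capacity - sum(half)
    overflow = max(0, sum(small) - slack)
    return len(big) + len(half) + ceil(overflow / capacity)

def best_bound(sizes, capacity):
    # Every alpha yields an independent valid bound; on a GPU each one
    # can be computed by its own thread, and we keep the maximum.
    return max(l2_bound(sizes, capacity, alpha)
               for alpha in range(capacity // 2 + 1))

print(best_bound([7, 6, 4, 4, 3, 3, 2, 1], 10))  # -> 3
```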
A hallmark of GOFAI approaches, however, is that they require substantial implementation effort to get right, from establishing the overall workflow down to the smallest of implementation details, or, as Ian Gent put it, it takes:
Punctilious tenacious precision.
(Gent 2024)
Given that, it is hardly surprising that more than a few papers support the next major trend:
Trust is an appreciating asset
Disclaimer: both of the papers I have co-authored for this conference fall into this group, so I have an inherent bias towards this subfield.
To counterbalance the previous point, several papers have explored ways to justify the various claims made by optimization algorithms. To start, a paper by Berg et al. introduced an approach for justifying without-loss-of-generality reasoning in the Pacose MaxSAT solver (Berg et al. 2024; Paxian, Reimer, and Becker 2018). The key challenge in justifying this type of reasoning is that it introduces statements that are not implied by the formula yet do not really change the problem: think symmetry breaking or pure clause elimination.
Justifying a claim made by any solver is tricky, but the situation for constraint solvers is particularly hard because they employ a wide range of reasoning techniques to propagate different constraints. We addressed this in (Flippo et al. 2024) with an approach that first writes a proof scaffold and only later expands it with the propagations necessary to justify the “scaffolded” claims.
All of this involves justifying different parts of model-and-solve workflows; however, not every optimization algorithm can be described this way, with dynamic programming being a notable example. We proposed a way to certify such algorithms in (Demirović et al. 2024) by encoding the dynamic programming states with new variables and mirroring the individual transitions.
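As a toy illustration of the flavor of this idea (my own simplification, not the actual encoding from the paper), consider a certificate that introduces one record per DP state and mirrors each transition explicitly, so that an independent checker can re-derive every claimed state value locally:

```python
def min_coins(target, coins):
    """DP for the fewest coins summing to `target`, plus a certificate:
    one line per state, listing the transitions that justify its value."""
    val = {0: 0}
    cert = [(0, 0, [])]                         # (state, claimed value, preds)
    for s in range(1, target + 1):
        preds = [(1, s - c) for c in coins if c <= s]
        val[s] = min((w + val[t] for w, t in preds), default=float("inf"))
        cert.append((s, val[s], preds))
    return val[target], cert

def check(cert):
    """Re-derive every claimed value from the listed transitions alone."""
    val = {s: v for s, v, _ in cert}
    return all(not preds or v == min(w + val[t] for w, t in preds)
               for s, v, preds in cert)

answer, cert = min_coins(11, [1, 4, 5])
print(answer, check(cert))                      # -> 3 True  (11 = 5 + 5 + 1)
```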
All in all, as search algorithms become more prominent, the need for justifying their results naturally grows as well. Luckily, this is much less of a problem than in, say, learning-based approaches. If you have an idea of how to justify a decision-making procedure (and what to justify in the first place), there is a good chance it either has not been done yet or differs from existing work in some important aspect; I think we will see more of this in action at the next CP conference!
Incomplete search is worth more than a passing mention
In many problems, however, an optimality claim is not an asset; the only thing that matters is finding good solutions. Quite a few papers explored this idea this year; in fact, the best paper award went to work that proposed new local search operators for MIP solving, showing impressive results on MIPLIB (Lin, Zou, and Cai 2024). Given that many MIP models stem from practical settings where the lower bound has no meaningful interpretation, this certainly looks like a promising result!
Another noteworthy work by Chen et al. introduces an approach for running local search for pseudo-Boolean optimization problems in parallel (Chen et al. 2024). The parallelism here serves both as a way to run many searches at once and as a remedy against local optima: good solutions are shared between the parallel searches and used to re-fire those that stall.
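The pattern itself is simple enough to sketch. Below is a minimal, generic version of it (not the system from the paper): each worker runs its own local search, publishes improvements to a shared incumbent, and restarts from that incumbent once it stalls. The `objective`, `neighbor`, and `init` callbacks are hypothetical placeholders, and a real implementation would perturb the shared solution on restart rather than copy it verbatim.

```python
import random
import threading

def parallel_local_search(objective, neighbor, init,
                          workers=4, steps=10_000, stall_limit=200):
    """Run `workers` local searches that share a best-known incumbent.

    `init()` builds a fresh solution, `neighbor(sol, rng)` returns a new
    candidate (without mutating `sol`), and `objective(sol)` is minimized.
    """
    incumbent = {"sol": init()}
    incumbent["val"] = objective(incumbent["sol"])
    lock = threading.Lock()

    def worker(seed):
        rng = random.Random(seed)
        sol = init()
        val, stall = objective(sol), 0
        for _ in range(steps):
            cand = neighbor(sol, rng)
            cand_val = objective(cand)
            if cand_val < val:
                sol, val, stall = cand, cand_val, 0
                with lock:                       # publish the improvement
                    if cand_val < incumbent["val"]:
                        incumbent["sol"], incumbent["val"] = cand, cand_val
            else:
                stall += 1
                if stall > stall_limit:          # stalled: re-fire the search
                    with lock:                   # from the shared incumbent
                        sol, val = incumbent["sol"], incumbent["val"]
                    stall = 0

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return incumbent["sol"], incumbent["val"]
```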
I have to admit I do not have a clear idea of what is missing in this subfield of optimization. On the other hand, local search seems to be an evergreen idea that is always on the table… and clearly useful to keep in mind.
Data generated by search algorithms is underexploited
Most of the talks mentioned above were about various techniques for or around search algorithms; however, there is undeniably a lot of value in learning from data. On the surface, this area seems well suited to learning approaches, as the algorithms have to make a lot of decisions and therefore produce a lot of data. However, my takeaway from this year is that data produced by optimization algorithms holds many more patterns than we currently mine.
That is not to say, of course, that there is no effort; quite the contrary, there were a few exciting talks showing new ways of working with data “in the loop”. For example, (Parjadis et al. 2024) suggest training a GNN model to produce good Lagrangian multipliers for the traveling salesperson problem. I think this is a promising approach, not least because it seems to only assume that the problem at hand has a “convenient” Lagrangian relaxation; I wonder if we will see this approach used in more problem domains next year.
There is also a lot to say about reasoning over the constraints of a problem. (Michailidis, Tsouros, and Guns 2024) explore this idea with the help of large language models. This seems to be a promising way to address the “open world” problem of applied modeling, where the optimization problem typically undergoes many iterations involving both the optimization people and the domain experts. Overall, this looks like a real chance to make optimization technologies more accessible, hopefully with more applications down the line.
Conclusion
Overall, this was an exciting event, and an inspiring one at that; not only is there a lot of action going on, but there are several topics that are clearly in high demand. I expect that over the next year, we will see a lot of new ways of justifying various algorithms, or their parts, as well as novel combinations of search techniques across the field. Of course, the usual suspects of novel local search techniques and new propagators are never off the table :) And that’s it for now; goodbye, Girona, and see you in Glasgow next year!