Four postdoctoral researchers are currently involved in the OPTIMAL research project.
Giulia Bernardini is a postdoctoral researcher in the Life Sciences and Health group at Centrum Wiskunde & Informatica (CWI). Her research investigates the combinatorial aspects of recent problems arising in computational biology and data sanitisation. Specifically, she aims to develop new algorithmic frameworks for pan-genomic and phylogenetic data, with particular attention to tumour phylogenies. She also studies new combinatorial methods to achieve privacy-utility trade-offs in the dissemination of individual data.
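To give a flavour of the data-sanitisation setting, here is a deliberately naive sketch: masking occurrences of sensitive patterns in a sequence. The sequence, the patterns, and the masking approach below are all made up for illustration; actual methods in this area aim to hide sensitive patterns while preserving the utility of the data, which this sketch does not attempt.

```python
# Illustrative sketch only, not an algorithm from this research: naively
# mask every occurrence of each sensitive pattern in a sequence.
def sanitise(sequence: str, sensitive: list[str], mask: str = "#") -> str:
    """Replace every character covered by a sensitive pattern with `mask`."""
    result = list(sequence)
    for pattern in sensitive:
        start = sequence.find(pattern)
        while start != -1:
            # Overwrite the matched span, including overlapping occurrences.
            for i in range(start, start + len(pattern)):
                result[i] = mask
            start = sequence.find(pattern, start + 1)
    return "".join(result)

print(sanitise("GATTACAGATT", ["ATT"]))  # G###ACAG###
```

The hard part, which this toy version ignores entirely, is exactly the privacy-utility trade-off mentioned above: hiding the sensitive occurrences without destroying statistics (such as pattern frequencies) that downstream analyses rely on.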
Marek Elias is a postdoctoral researcher in the Networks and Optimization group at Centrum Wiskunde & Informatica (CWI). He works in the fields of online optimisation and differential privacy, aiming to understand the role of uncertainty in computation. Predictive ML models trained on past data have great potential to decrease uncertainty about the future. The goal of Marek's participation in the OPTIMAL project is to design robust algorithms that exploit such predictive models, using methods from algorithmic theory.
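A standard textbook example of this "algorithms with predictions" idea, not specific to this project, is the ski-rental problem: renting costs 1 per day, buying costs `buy`, and an ML model supplies a prediction of the true number of ski days. The sketch below follows the deterministic scheme of Purohit, Svitkina and Kumar (NeurIPS 2018), where a parameter `lam` in (0, 1] controls how much the prediction is trusted.

```python
# Illustrative sketch of a learning-augmented online algorithm: ski rental
# with a (possibly wrong) ML prediction of the number of ski days.
import math

def ski_rental_cost(true_days: int, prediction: int, buy: int, lam: float) -> int:
    """Total cost when the buy day depends on the prediction."""
    if prediction >= buy:
        buy_day = math.ceil(lam * buy)   # prediction says "worth buying": buy early
    else:
        buy_day = math.ceil(buy / lam)   # prediction says "keep renting": buy late
    if true_days < buy_day:
        return true_days                 # rented every day, never bought
    return (buy_day - 1) + buy           # rented until buy_day, then bought

# Accurate prediction: cost 14, close to the offline optimum min(100, 10) = 10.
print(ski_rental_cost(true_days=100, prediction=95, buy=10, lam=0.5))
```

The point of the analysis is robustness: with a good prediction the cost is within a factor 1 + lam of optimal, and even with an arbitrarily bad prediction it stays within a factor roughly 1 + 1/lam, so trusting the model more (smaller lam here meaning earlier commitment) trades consistency against robustness.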
Ade Fajemisin is a postdoctoral researcher at the University of Amsterdam, working on the ENW-Groot project OPTIMAL (Optimization for and with Machine Learning). His main research focus is data-driven optimisation: using data and machine learning both to determine components of optimisation problems and to facilitate the solution of complex optimisation problems. He is also interested in applying these techniques to solve real-world problems.
Moslem Zamani is a postdoctoral researcher at Tilburg University, working on the ENW-Groot project OPTIMAL (Optimization for and with Machine Learning). His research concerns the worst-case behaviour of stochastic gradient methods. To this end, he employs two recent approaches: performance estimation based on semidefinite optimisation, and robust-control analysis. This research may lead to the development of novel stochastic gradient methods for training neural networks.
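For readers unfamiliar with the object being analysed, here is a minimal sketch of a stochastic gradient method itself (not of the performance-estimation or robust-control machinery): SGD on a small least-squares problem, sampling one data term per step. The data and step size are made up for illustration.

```python
# Minimal illustrative sketch of SGD on f(x) = (1/n) * sum_i (a_i*x - b_i)^2.
# Worst-case guarantees for methods like this are what performance
# estimation via semidefinite optimisation is designed to compute.
import random

def sgd_least_squares(data, steps=1000, lr=0.05, seed=0):
    """Run SGD, using the gradient of a single randomly sampled term per step."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        a, b = rng.choice(data)
        grad = 2 * a * (a * x - b)   # gradient of the sampled term at x
        x -= lr * grad
    return x

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # noisy samples of b = 2a
print(sgd_least_squares(data))
```

With a constant step size the iterate does not converge exactly but oscillates in a neighbourhood of the minimiser (here near 2.0); bounding the size of that neighbourhood, and the rate of approach to it, in the worst case over a problem class is precisely the kind of question the analyses described above address.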