Visiting address: ITC, Lägerhyddsvägen 2, Hus 2, 752 37 Uppsala


I am an Assistant Professor at the Division of Scientific Computing, Department of Information Technology, Uppsala University, and a SciLifeLab fellow. My research interests broadly span optimization, machine learning, and distributed systems. More specifically, I am interested in developing scalable machine learning and optimization methods in support of computationally expensive, large-scale problems, particularly within the domain of life sciences. Prior to joining Uppsala University, I was an Assistant Professor at Umeå University and a postdoctoral researcher at Uppsala University, where I worked on (Bayesian) parameter inference of stochastic descriptive models in computational biology. My doctoral research at the Department of Information Technology, Ghent University, focused on data-efficient non-convex optimization of computationally expensive objective functions, with applications in engineering.

Research Areas
Data-efficient learning and optimization methods form a central theme of the majority of my research activities. Such methods are particularly important in the context of (computationally expensive) simulation-driven design and analysis of systems across domains as varied as electrical engineering, computational fluid dynamics, and systems biology. My research has therefore been highly interdisciplinary and driven by real-world problems. Research challenges encountered in problems involving computationally expensive simulation models include: sampling or selection of simulation locations, training fast and accurate surrogate models of simulators while minimizing training dataset size, optimizing the simulator while minimizing the number of computationally expensive simulations, and performing likelihood-free parameter inference from observed datasets, among others. The following themes provide an overview of my recent research activities. Please click on the links below to learn more.

Statistical Sampling

active learning, design of experiments, sequential design, reinforcement learning.


Machine Learning

surrogate modeling, multi-fidelity modeling, deep learning, time series modeling.


Optimization

Bayesian optimization, constrained optimization, surrogate-based single- and multi-objective optimization, evolutionary multi-objective optimization.

Parameter Inference

likelihood-free parameter estimation, approximate Bayesian computation, summary statistic selection, construction of priors, hyperparameter optimization.