Our mission is to find practical methods for hard coordination problems that arise in multi-robot and multi-agent systems. This research brings in methods from machine learning, planning and control, and has numerous applications, including automated transport, logistics, environmental monitoring, and search & rescue.


We are interested in devising multi-robot learning strategies that allow agents to infer unobservable information about other robots and the environment. We focus on generating policies that are executed locally, while ensuring that the resulting strategies perform close to those of coupled (centralized) systems. As part of this thrust, we develop sim-to-real methods that facilitate the transfer of policies learned in simulation to real-world systems.


Solving Hard Coordination Problems

We are interested in developing data-driven and hybrid approaches that find near-optimal solutions to hard coordination problems involving resource allocation, path finding, or graph optimization. Our solutions enable fast online (and often decentralized) decision-making, as typically required in robotics applications.

Synthesizing Collective Behaviours

The natural world abounds with examples of complex, collectively intelligent behaviour. We are interested in synthesizing local agent controllers and decentralized communication policies that lead to cooperative, collaborative, and resilient behaviours. Many of our methods build on Graph Neural Network architectures, which provide a well-suited inductive bias for networked multi-agent teams and swarms.
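To give a flavour of why graph architectures suit networked teams, the sketch below shows a single message-passing layer in which each agent updates its state from its own features and the mean of its neighbours' features. This is a minimal, illustrative example (all variable names, dimensions, and the particular aggregation are assumptions, not our lab's specific models):

```python
import numpy as np

def message_passing_layer(X, A, W_self, W_neigh):
    """One graph-convolution step: each agent combines its own features
    with the mean of its neighbours' features, then applies a nonlinearity.

    X       : (n_agents, d) local feature vectors
    A       : (n_agents, n_agents) symmetric 0/1 adjacency (communication graph)
    W_self  : (d, d) weight for an agent's own features
    W_neigh : (d, d) weight for aggregated neighbour features
    """
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # avoid division by zero
    neigh_mean = (A @ X) / deg                         # mean over neighbours
    return np.tanh(X @ W_self + neigh_mean @ W_neigh)

# Toy team of 3 robots on a line graph: 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))        # 4-dimensional local observations
W_self = rng.standard_normal((4, 4))
W_neigh = rng.standard_normal((4, 4))

H = message_passing_layer(X, A, W_self, W_neigh)
print(H.shape)  # (3, 4): one updated feature vector per robot
```

Because each update only reads a robot's own features and those of its graph neighbours, the layer can be executed locally on each robot, which is exactly the decentralized-execution property described above.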



University of Cambridge
William Gates Building
15 JJ Thomson Avenue
Cambridge, CB3 0FD