Making actual robots smarter.
Researching how just a few expert demonstrations, alongside a causal view of the world, can be used to learn difficult, yet robust and safe, long-horizon behaviours for agents. Using hierarchical learning as a tool for the robot to understand the underlying problem structure. Pushing towards interpretable, explainable, cause-effect based machine learning for robotics.
PhD student at the Robust Autonomy and Decisions group, part of the Institute for Perception, Action and Behaviour at the University of Edinburgh. I am supervised by Dr. Subramanian Ramamoorthy and Dr. Kartic Subr.
Building robots and liquid rocket engines.
PhD in Robotics and Autonomous Systems, 2020
University of Edinburgh
MEng in Robotics, 2015
University of Reading
Worked as Teaching Support for various courses: Probabilistic Modelling and Reasoning, Reinforcement Learning, System Design Projects.
Currently RA with the Alan Turing Institute and the CRUK project MAMMOBOT.
imgproc and custom pattern calibration.
Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics. These change-points are often exploited in hierarchical motion planning to build approximate models and to facilitate the design of local, region-specific controllers.
Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction through the use of loops and conditionals taking the system back to alternative earlier states. Recent end-to-end approaches to task learning attempt to directly learn a single controller that solves an entire task, but this has been difficult for complex control tasks that would have otherwise required a diversity of local primitive moves, and the resulting solutions are also not easy to inspect for plan monitoring purposes.