Papers

Disentangled Relational Representations for Explaining and Learning from Demonstration

Learning from demonstration is an effective method for human users to instruct desired robot behaviour. However, for most non-trivial tasks of practical interest, efficient learning from demonstration depends crucially on inductive bias in the chosen structure for rewards/costs and policies.

Using Causal Analysis to Learn Specifications from Task Demonstrations

Learning models of user behaviour is an important problem that is broadly applicable across many application domains requiring human-robot interaction. In this work we show that it is possible to learn a generative model for distinct user behavioral types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space. We use this model to differentiate between user types and to find cases with overlapping solutions.
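The abstract above describes separating distinct user behavioural types by clustering demonstrations in a learned latent space. As a minimal sketch of that idea, the snippet below stubs out the learned encoder with synthetic latent codes for two user types and recovers the types with plain k-means; the encoder, the code dimensions, and the cluster count are all illustrative assumptions, not the paper's actual model (which learns the generative model and the clustering jointly).

```python
import numpy as np

# Hypothetical sketch: separating user behavioural types by clustering
# latent codes of demonstrations. A real system would learn the encoder
# jointly with a clustering penalty on the latent space; here the codes
# are synthetic stand-ins.

rng = np.random.default_rng(0)

# Synthetic latent codes for demonstrations from two user types.
type_a = rng.normal(loc=[-2.0, 0.0], scale=0.3, size=(20, 2))
type_b = rng.normal(loc=[+2.0, 0.0], scale=0.3, size=(20, 2))
codes = np.vstack([type_a, type_b])

def kmeans(x, k, iters=20):
    """Plain k-means: assign each code to its nearest centre, then
    recompute centres, for a fixed number of iterations."""
    # Deterministic init: first and last codes as starting centres.
    centres = x[[0, -1]].copy()
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centres[j] = x[assign == j].mean(axis=0)
    return assign, centres

assign, centres = kmeans(codes, k=2)

# Demonstrations of the same user type should share a cluster label,
# which is what lets the model differentiate between user types.
print(assign[:20], assign[20:])
```

With well-separated types this recovers one cluster per behavioural type; overlapping solutions, as the abstract notes, would show up as demonstrations assigned across clusters.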

DynoPlan: Combining Motion Planning and Deep Neural Network based Controllers for Safe HRL

Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction through the use of loops and conditionals taking the system back to alternative earlier states. Recent end-to-end approaches to task learning attempt to directly learn a single controller that solves an entire task, but this has been difficult for complex control tasks that would have otherwise required a diversity of local primitive moves, and the resulting solutions are also not easy to inspect for plan monitoring purposes.
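The compositional pattern described above (sequencing primitives, with loops and conditionals that route execution back to earlier states on failure) can be sketched as follows. This is an illustrative toy, not DynoPlan's actual architecture: the one-dimensional state, the `move_to`/`grasp` primitives, and the retry policy are all assumptions made for the example.

```python
# Hypothetical sketch of compositional control: a top-level plan
# sequences primitive controllers, and a monitor loops back to an
# earlier step when a primitive fails its postcondition.

def move_to(target):
    def primitive(state):
        state["pos"] = target
        return state
    return primitive

def grasp(state):
    # Grasping only succeeds close to the object; otherwise the
    # monitor routes execution back to an earlier step.
    state["holding"] = abs(state["pos"] - state["obj"]) < 0.1
    return state

# Each step: (name, primitive controller, postcondition to monitor).
plan = [
    ("approach", move_to(1.0), lambda s: abs(s["pos"] - 1.0) < 0.1),
    ("grasp", grasp, lambda s: s["holding"]),
]

def execute(plan, state, max_retries=3):
    """Run primitives in order; on a failed postcondition, loop back
    to the previous step (error correction via loops) rather than
    aborting outright."""
    i, retries = 0, 0
    while i < len(plan):
        name, primitive, postcondition = plan[i]
        state = primitive(state)
        if postcondition(state):
            i, retries = i + 1, 0
        elif retries < max_retries and i > 0:
            i, retries = i - 1, retries + 1  # back to an earlier state
        else:
            raise RuntimeError(f"primitive {name!r} failed")
    return state

state = execute(plan, {"pos": 0.0, "obj": 1.0, "holding": False})
print(state["holding"])
```

Because each step carries an explicit postcondition, the executing plan is inspectable for monitoring, which is precisely what a single end-to-end controller makes difficult.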