ARPatching
Abstract

We present a novel Augmented Reality (AR) approach, using Microsoft HoloLens, to address the challenging problems of diagnosing, teaching, and patching a robot's interpretable knowledge. A Temporal And-Or Graph (T-AOG) for opening bottles is learned from human demonstration and programmed into the robot. This representation yields a hierarchical structure that captures the compositional nature of the given task and is highly interpretable to users. By visualizing the knowledge structure represented by the T-AOG and the decision-making process of parsing it, users can intuitively understand what the robot knows, supervise the robot's action planner, and monitor visually latent robot states (e.g., the force exerted during interactions). Given a new task, such comprehensive visualization of the robot's inner functioning lets users quickly identify the reasons for failures, interactively teach the robot a new action, and patch it into the knowledge structure represented by the T-AOG. In this way, the robot can solve similar but new tasks through only minor modifications provided interactively by users. This process demonstrates the interpretability of our knowledge representation and the effectiveness of the AR interface.
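To make the abstract's description concrete, below is a minimal, illustrative sketch of a Temporal And-Or Graph: AND nodes encode a temporal sequence of sub-steps, OR nodes encode alternative ways to achieve a sub-goal, parsing the graph yields one action sequence, and "patching" adds an interactively taught action as a new OR branch. All class, function, and action names here are hypothetical and are not taken from the authors' implementation.

```python
# Illustrative T-AOG sketch (not the authors' code). Node kinds follow the
# abstract: AND = ordered sub-steps, OR = alternatives, terminal = atomic action.

class Node:
    def __init__(self, kind, label, children=None):
        assert kind in ("and", "or", "terminal")
        self.kind = kind
        self.label = label
        self.children = children or []

def parse(node, choose=lambda branches: branches[0]):
    """Derive one concrete action sequence (a parse graph) from the T-AOG."""
    if node.kind == "terminal":
        return [node.label]
    if node.kind == "and":  # temporal order: concatenate the children's parses
        return [a for child in node.children for a in parse(child, choose)]
    return parse(choose(node.children), choose)  # "or": commit to one branch

def patch(or_node, new_action):
    """Patch an interactively taught action into the graph as a new OR branch."""
    assert or_node.kind == "or"
    or_node.children.append(Node("terminal", new_action))

# Toy bottle-opening task (hypothetical action labels)
unscrew = Node("or", "unscrew", [Node("terminal", "twist-cap")])
open_bottle = Node("and", "open-bottle",
                   [Node("terminal", "grasp"), unscrew, Node("terminal", "lift-cap")])

print(parse(open_bottle))                       # ['grasp', 'twist-cap', 'lift-cap']
patch(unscrew, "push-and-twist")                # teach a new alternative
print(parse(unscrew, choose=lambda b: b[-1]))   # ['push-and-twist']
```

The sketch mirrors the workflow the abstract describes: a user inspecting the visualized graph can see which OR branch failed, teach a replacement action, and the patched graph immediately supports new parses.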

BibTeX

Please cite our paper if you use our code or data.

@inproceedings{liu2018interactive,
    title={Interactive Robot Knowledge Patching using Augmented Reality},
    author={Liu, Hangxin and Zhang, Yaofang and Si, Wenwen and Xie, Xu and Zhu, Yixin and Zhu, Song-Chun},
    booktitle={International Conference on Robotics and Automation (ICRA)},
    pages={1947--1954},
    year={2018}
}
Acknowledgements

The authors thank Zhenliang Zhang and Feng Gao for their assistance in the experiment. The work reported herein is supported by DARPA XAI N66001-17-2-4029 and ONR MURI N00014-16-1-2007.