TL;DR: Watch one human demo → the robot repeats the task in a new scene.
Pipeline: (1) MR-headset human demo → (2) 3D hand trajectory → (3) robot end-effector trajectory → (4) trajectory execution.

Validated on 16 real-world tasks · 3 trials each on a Fetch mobile manipulator.

Abstract

We introduce a novel system for human-to-robot trajectory transfer that enables robots to manipulate objects by learning from human demonstration videos. The system consists of four modules. The first is a data collection module that captures human demonstration videos from the robot's point of view using an MR headset. The second is a video understanding module that detects objects and extracts 3D human-hand trajectories from the demonstration videos. The third transfers a human-hand trajectory into a reference trajectory for the robot end-effector in 3D space. The last applies a trajectory optimization algorithm to solve for a trajectory in the robot's configuration space that follows the transferred end-effector trajectory. Together, these modules enable a robot to watch a human demonstration video once and then repeat the same mobile manipulation task in different environments, even when objects are placed differently from the demonstration.
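The third module, trajectory transfer, can be illustrated with a minimal sketch. All names here are hypothetical, and the fixed hand-to-gripper offset is an assumption for illustration, not the paper's actual method; the key idea sketched is re-anchoring the demonstrated hand motion to the object's pose in the new scene, which is what allows objects to be placed differently from the demonstration.

```python
import numpy as np

def transfer_trajectory(hand_poses, T_obj_demo, T_obj_new, T_hand_to_ee):
    """Re-anchor a demonstrated hand trajectory to a new object pose and
    convert each waypoint into an end-effector reference pose.

    hand_poses   : list of 4x4 hand poses from the demo (world frame)
    T_obj_demo   : 4x4 object pose detected in the demonstration scene
    T_obj_new    : 4x4 object pose detected in the new scene
    T_hand_to_ee : fixed 4x4 hand-to-gripper offset (an assumption here)
    """
    # Express each hand pose relative to the demo object, then
    # replay that relative motion at the object's new pose.
    T_rebase = T_obj_new @ np.linalg.inv(T_obj_demo)
    return [T_rebase @ T_hand @ T_hand_to_ee for T_hand in hand_poses]
```

For example, if the object is translated by 1 m along x between the demo scene and the new scene, every transferred end-effector waypoint is translated by the same offset.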

HRT1 Overview

[Figure] HRT1 system overview: Stage 1, MR-headset capture → Stage 2, video understanding → Stage 3, trajectory transfer → Stage 4, trajectory optimization.

System Walkthrough
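To give a flavor of Stage 4's trajectory optimization, here is a toy sketch: a damped least-squares inverse-kinematics solver that tracks end-effector waypoints on a planar 2-link arm. This is a simplified stand-in, not the paper's solver — the real system solves for the configuration of a Fetch mobile manipulator, and all names and parameters below are assumptions.

```python
import numpy as np

L1, L2 = 0.5, 0.4  # toy planar 2-link arm segment lengths (meters)

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """2x2 end-effector Jacobian of fk with respect to the joints."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def follow_waypoints(q0, waypoints, iters=200, damping=1e-3):
    """Solve a joint-space trajectory that tracks end-effector waypoints,
    warm-starting each waypoint from the previous solution."""
    q, traj = np.array(q0, dtype=float), []
    for target in waypoints:
        for _ in range(iters):
            e = target - fk(q)             # task-space error
            J = jacobian(q)
            # Damped least-squares step: dq = J^T (J J^T + lambda*I)^-1 e
            q = q + J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), e)
        traj.append(q.copy())
    return traj
```

Warm-starting each waypoint from the previous solution keeps consecutive configurations close together, which is one reason trajectory-level solvers produce smoother motion than solving each waypoint's inverse kinematics independently.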

Real-World Execution

16 tasks · 3 trials each · videos at 5× base speed / 1× execution speed

BibTeX

Please cite our work if it helps your research.

@misc{2025hrt1,
  title         = {HRT1: One-Shot Human-to-Robot Trajectory Transfer for Mobile Manipulation},
  author        = {Sai Haneesh Allu and Jishnu Jaykumar P and Ninad Khargonkar and Tyler Summers and Jian Yao and Yu Xiang},
  year          = {2025},
  eprint        = {2510.21026},
  archivePrefix = {arXiv},
  primaryClass  = {cs.RO},
  url           = {https://arxiv.org/abs/2510.21026}
}

Contact

Send any comments or questions to Sai or Jishnu — saihaneesh.allu@utdallas.edu · jishnu.p@utdallas.edu

Acknowledgements

Supported by the National Science Foundation (NSF) under awards 2346528 and 2520553, an NVIDIA Academic Grant Program Award, and gift funding from XPeng.