We present SceneReplica, a new reproducible benchmark for evaluating robot manipulation in the real world, focusing on the task of pick-and-place. The benchmark uses YCB objects, a dataset widely adopted in the robotics community, so that results are comparable across studies, and it is designed to be easily replicated in the real world, making it accessible to researchers and practitioners. We also provide experimental results and analyses for model-based and model-free 6D robotic grasping on the benchmark, evaluating representative algorithms for object perception, grasp planning, and motion planning. By providing a standardized evaluation framework, the benchmark enables direct comparison of different methods and algorithms, accelerating progress in robot manipulation research.
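To make the modular evaluation pipeline concrete, here is a minimal sketch of how a single benchmark scene could be scored with perception, grasp planning, and motion planning as pluggable components. All names and interfaces below (`run_scene`, `TrialResult`, `robot.capture_rgbd`, etc.) are hypothetical illustrations under assumed interfaces, not the benchmark's actual API.

```python
# Minimal sketch of a modular pick-and-place evaluation loop.
# The module interfaces and method names are hypothetical,
# not SceneReplica's released implementation.

from dataclasses import dataclass

@dataclass
class TrialResult:
    object_name: str
    grasp_success: bool            # object lifted from the support surface
    pick_place_success: bool       # object also delivered to the target location

def run_scene(scene, perception, grasp_planner, motion_planner, robot):
    """Attempt to pick and place every object in one benchmark scene."""
    results = []
    for _ in range(scene.num_objects):
        rgbd = robot.capture_rgbd()                    # observe the scene
        detections = perception.detect(rgbd)           # segment / estimate object poses
        target = scene.next_target(detections)         # pick order (e.g., nearest-first)
        grasps = grasp_planner.plan(target, rgbd)      # candidate 6D grasp poses
        trajectory = motion_planner.plan(robot.state, grasps)
        grasped = robot.execute_pick(trajectory)
        placed = robot.execute_place() if grasped else False
        results.append(TrialResult(target.name, grasped, placed))
    return results
```

Separating the stages this way mirrors how the benchmark compares methods: any one module (e.g., the grasp planner) can be swapped while the rest of the pipeline is held fixed.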
The 20 scenes in our SceneReplica benchmark, each containing 5 YCB objects.
The process of replicating a scene in the real world: the reference scene image is overlaid on the real camera image to guide object placement in the real-world scene.
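As an illustration of this overlay step, the following is a minimal sketch that uses OpenCV alpha blending to superimpose a reference scene image on a live camera feed. The camera index, window name, and the `overlay_reference` helper are assumptions for this example, not the authors' released tooling.

```python
# Minimal sketch: blend a reference scene image over a live camera
# stream so a person can match object placements by eye.

import cv2

def overlay_reference(reference_path: str, alpha: float = 0.5) -> None:
    """Show the live camera feed with the reference image alpha-blended on top."""
    reference = cv2.imread(reference_path)
    cap = cv2.VideoCapture(0)  # assumed camera index
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Resize the reference to the live frame size, then blend.
            ref = cv2.resize(reference, (frame.shape[1], frame.shape[0]))
            blended = cv2.addWeighted(frame, 1.0 - alpha, ref, alpha, 0.0)
            cv2.imshow("scene replication guide", blended)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```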
| # | Perception | Grasp Planning | Motion Planning | Control | Ordering | Grasping Type | Pick & Place Success | Grasping Success | Videos |
|---|------------|----------------|-----------------|---------|----------|---------------|----------------------|------------------|--------|
```bibtex
@article{khargonkar2023scenereplica,
  title={SCENEREPLICA: Benchmarking Real-World Robot Manipulation by Creating Replicable Scenes},
  author={Ninad Khargonkar and Sai Haneesh Allu and Yangxiao Lu and Jishnu Jaykumar P and Balakrishnan Prabhakaran and Yu Xiang},
  journal={arXiv preprint arXiv:2306.15620},
  year={2023}
}
```
Send any comments or questions to Ninad (ninadarun.khargonkar@utdallas.edu) or Sai (saihaneesh.allu@utdallas.edu).
This work was supported in part by the DARPA Perceptually-enabled Task Guidance (PTG) Program under contract number HR00112220005.