RobotFingerPrint: Unified Gripper Coordinate Space for Multi-Gripper Grasp Synthesis

The University of Texas at Dallas     
Under submission to IEEE International Conference on Robotics and Automation (ICRA), 2025

Abstract

We introduce a novel representation called the unified gripper coordinate space for grasp synthesis with multiple grippers. The space is the 2D surface of a sphere in 3D, parameterized by longitude and latitude, and it is shared across all robotic grippers. We propose a new algorithm to map the palm surface of a gripper into the unified gripper coordinate space, and design a conditional variational autoencoder to predict the unified gripper coordinates given an input object. The predicted unified gripper coordinates establish correspondences between the gripper and the object, which are used in an optimization problem to solve for the grasp pose and the finger joints. We demonstrate that using the unified gripper coordinate space improves the success rate and diversity of grasp synthesis across multiple grippers.
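For intuition, each unified gripper coordinate is a (longitude, latitude) pair on the unit sphere. The following is a minimal sketch of such a coordinate convention, not the project's code; the angle ranges and array layout are assumptions made for illustration.

import numpy as np

def ugcs_to_sphere(lon, lat):
    # Map unified gripper coordinates (longitude, latitude) to 3D points on
    # the unit sphere. Assumed ranges: lon in [-pi, pi], lat in [-pi/2, pi/2].
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.stack([x, y, z], axis=-1)

def sphere_to_ugcs(points):
    # Inverse map: recover (longitude, latitude) from unit-sphere points.
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    lon = np.arctan2(y, x)
    lat = np.arcsin(np.clip(z, -1.0, 1.0))
    return np.stack([lon, lat], axis=-1)

# Round trip: recover the coordinates of a single point.
coords = sphere_to_ugcs(ugcs_to_sphere(np.array([0.3]), np.array([0.5])))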

Code

GitHub: IRVLUTD/robot-finger-print

Code repository for the project, adapted from GenDexGrasp. Please follow their instructions for the Isaac Gym grasp evaluation setup. For the grasp evaluation parameters defined in the env script for each gripper, we used learning_rate=0.1 and step_size=0.02.
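For reference, the change amounts to setting those two values; the snippet below is only an illustrative stand-in, since the actual parameter definitions live in each gripper's env script from GenDexGrasp.

# Illustrative stand-in for the per-gripper grasp evaluation parameters;
# the real definitions are in the env script for each gripper.
grasp_eval_params = {
    "learning_rate": 0.1,  # value used in our evaluation
    "step_size": 0.02,     # value used in our evaluation
}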

Dataset

Dataset: Gripper Coordinates and Surface Points

A Box.com link (no login required) to the gripper coordinates, surface points, and other metadata files. Please refer to the README provided in the folder for the overall setup, since these files are intended to be used with the dataset provided by GenDexGrasp.
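As a rough sketch of how the files might be consumed, assuming a NumPy archive per gripper (the file and key names below are hypothetical; the README in the folder documents the actual layout):

import numpy as np

# Hypothetical file and key names, for illustration only.
data = np.load("gripper_surface_points.npz")
surface_points = data["points"]  # assumed (N, 3) palm-surface points
ugcs_coords = data["coords"]     # assumed (N, 2) longitude/latitude pairs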

Citation (BibTeX)

Please cite RobotFingerPrint if this work helps in your research:
@article{khargonkar2024robotfingerprint,
  title={RobotFingerPrint: Unified Gripper Coordinate Space for Multi-Gripper Grasp Synthesis},
  author={Khargonkar, Ninad and Casas, Luis Felipe and Prabhakaran, Balakrishnan and Xiang, Yu},
  journal={arXiv preprint arXiv:2409.14519},
  year={2024}
}

Contact

Send any comments or questions to Ninad Khargonkar: ninadarun.khargonkar@utdallas.edu or
Luis Felipe Casas: Luis.CasasMurillo@UTDallas.edu

Acknowledgements

This work was supported by the Sony Research Award Program and the National Science Foundation (NSF) under Grant No. 2346528.