We introduce a new robotic system that enables a mobile robot to autonomously explore an unknown environment, build a semantic map of the environment, and subsequently update the semantic map to reflect environment changes, such as the relocation of objects. Our system leverages a LiDAR scanner for 2D occupancy grid mapping and an RGB-D camera for object perception. We propose a semantic map representation that combines a 2D occupancy grid map for geometry with a topological map for object semantics. This representation enables us to effectively update the semantics by adding nodes to or deleting nodes from the topological map. Our system has been tested on a Fetch robot: the robot can semantically map a 93 m × 90 m floor and update the semantic map when objects are moved in the environment.
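As a rough illustration of this hybrid representation, the sketch below pairs a 2D occupancy grid with a small topological graph whose nodes carry object labels and poses, and updates the semantics by deleting and adding nodes. All class and method names (SemanticMap, add_object, update_object, the grid dimensions) are hypothetical placeholders, not the paper's actual implementation.

import numpy as np

class SemanticMap:
    """Minimal sketch: occupancy grid for geometry + topological nodes for object semantics."""

    def __init__(self, width_cells, height_cells, resolution_m=0.05):
        # Occupancy grid: -1 = unknown, 0 = free, 100 = occupied (ROS convention).
        self.grid = np.full((height_cells, width_cells), -1, dtype=np.int8)
        self.resolution = resolution_m
        # Topological map: node id -> object attributes (class label, 2D position in meters).
        self.nodes = {}
        self._next_id = 0

    def add_object(self, label, x_m, y_m):
        """Add a node for a newly detected object; returns its node id."""
        node_id = self._next_id
        self.nodes[node_id] = {"label": label, "pose": (x_m, y_m)}
        self._next_id += 1
        return node_id

    def remove_object(self, node_id):
        """Delete a node whose object is no longer observed at its recorded location."""
        self.nodes.pop(node_id, None)

    def update_object(self, old_id, label, x_m, y_m):
        """Reflect an object's location change by deleting the old node and adding a new one."""
        self.remove_object(old_id)
        return self.add_object(label, x_m, y_m)


# Example: a moved chair is handled by deleting its old node and adding a new one.
smap = SemanticMap(width_cells=1860, height_cells=1800)  # roughly 93 m x 90 m at 5 cm/cell
chair = smap.add_object("chair", 4.2, 7.5)
chair = smap.update_object(chair, "chair", 10.1, 3.3)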
The video demonstrates the robot autonomously exploring a 96 m × 93 m area using a Dynamic Window Frontier Exploration strategy. The robot completes the exploration in approximately 150 minutes, reaching a maximum speed of 0.6 m/s and covering a total distance of over 800 meters.
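For readers unfamiliar with frontier-based exploration, the sketch below shows only the basic frontier notion that such strategies build on: a free grid cell adjacent to unknown space is a candidate exploration target. This is not the Dynamic Window Frontier Exploration method used in the paper; the function name and grid conventions are assumptions for illustration.

import numpy as np

def find_frontier_cells(grid):
    """Return (row, col) of free cells (0) that have at least one unknown (-1) 4-neighbor."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

# Example on a tiny 3x3 grid: free cells bordering unknown space are frontiers.
demo = np.array([[-1, -1, -1],
                 [ 0,  0, -1],
                 [ 0,  0,  0]], dtype=np.int8)
print(find_frontier_cells(demo))  # [(1, 0), (1, 1), (2, 2)]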
@article{allu2024semanticmapping,
title={Autonomous Exploration and Semantic Updating of Large-Scale Indoor Environments with Mobile Robots},
author={Allu, Sai Haneesh and Kadosh, Itay and Summers, Tyler and Xiang, Yu},
journal={arXiv preprint arXiv:2409.15493},
year={2024}
}
Send any comments or questions to Sai Haneesh Allu: saihaneesh.allu@utdallas.edu
This work was supported by the DARPA Perceptually-enabled Task Guidance (PTG) Program under contract number HR00112220005 and the Sony Research Award Program. The work of T. Summers was supported by the United States Air Force Office of Scientific Research under Grant FA9550-23-1-0424 and the National Science Foundation under Grant ECCS-2047040. We would like to thank our colleague, Jishnu Jaykumar P, for his assistance during the experiments.