GOAT-Bench: A Benchmark for Multi-modal Lifelong Navigation

1Georgia Tech, 2Carnegie Mellon University
3University of Illinois Urbana-Champaign 4Mistral AI 5University of Washington
*Co-first authors

We study the GO to Any Thing (GOAT) task and propose GOAT-Bench, a benchmark for agents navigating to a sequence of open-vocabulary goals, each specified through one of three modalities: a category name (🏷️), a language description (🔡), or an image (📸).

Abstract

The Embodied AI community has made significant strides in visual navigation tasks, with targets specified as 3D coordinates, object categories, language descriptions, and images. However, these navigation models typically handle only a single goal modality. Given the progress achieved so far, it is time to move towards universal navigation models capable of handling various goal types, enabling more effective interaction between users and robots.

To facilitate this goal, we propose GOAT-Bench, a benchmark for the universal navigation task referred to as GO to Any Thing (GOAT). In this task, the agent is directed to navigate to a sequence of targets specified by a category name, a language description, or an image, in an open-vocabulary fashion. We benchmark monolithic RL and modular methods on the GOAT task, analyzing their performance across modalities, the role of explicit and implicit scene memories, their robustness to noise in goal specifications, and the impact of memory in lifelong scenarios.

Multi-modal, Open vocabulary, and Lifelong

Multi-modal, open-vocabulary goals: GOAT-Bench is an open-vocabulary benchmark, supporting a broad range of targets, including ones not encountered during training. This is a departure from prior work, which is often limited to a small set of 6 to 21 categories.
Lifelong navigation: each episode consists of 5 to 10 targets specified through distinct modalities (i.e., an image, an object category, or a language description). This contrasts with most prior navigation benchmarks, where the scene is reset after a target is reached, and makes GOAT-Bench a testbed for lifelong learning (a rough sketch of the episode structure is shown below).
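
To make this concrete, here is a minimal sketch of what a multi-modal, lifelong episode could look like. The class and field names are illustrative assumptions, not the benchmark's actual dataset schema.

from dataclasses import dataclass
from typing import List, Literal, Optional

# Hypothetical episode schema for illustration only; field names are
# assumptions, not the actual GOAT-Bench dataset format.
@dataclass
class Subtask:
    modality: Literal["object", "language", "image"]  # how this goal is specified
    object_category: Optional[str] = None  # e.g. "bed" (object goal)
    description: Optional[str] = None      # e.g. "the grey couch next to the window"
    image_path: Optional[str] = None       # path to a goal image

@dataclass
class GoatEpisode:
    scene_id: str                # scene in which the whole episode takes place
    start_position: List[float]  # agent start position (x, y, z)
    subtasks: List[Subtask]      # 5 to 10 goals visited in sequence, without resetting the scene

# One lifelong episode mixing all three goal modalities.
episode = GoatEpisode(
    scene_id="scene_0001",
    start_position=[1.0, 0.0, -2.5],
    subtasks=[
        Subtask(modality="object", object_category="bed"),
        Subtask(modality="language", description="the grey couch next to the window"),
        Subtask(modality="image", image_path="goals/plant_001.jpg"),
    ],
)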

Baselines

We benchmark two types of methods: 1) Modular methods, which build semantic maps and plan over them, and 2) Reinforcement learning methods, i.e., sensor-to-action neural network (SenseAct-NN) policies trained end-to-end with RL.
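
As a loose sketch of how these two families differ, the interfaces below contrast a monolithic policy with a modular agent; the class and method names are assumptions for illustration, not the released baseline code.

import numpy as np

class SenseActNNPolicy:
    """Monolithic RL baseline: maps sensors directly to actions, with implicit (recurrent) memory."""
    def act(self, rgb: np.ndarray, depth: np.ndarray, goal_embedding: np.ndarray) -> int:
        # A trained network fuses the observation with the goal embedding and
        # outputs a discrete action (e.g. move-forward, turn-left, turn-right, stop).
        raise NotImplementedError

class ModularGoatAgent:
    """Modular baseline: explicit semantic map plus a geometric planner."""
    def __init__(self) -> None:
        self.semantic_map: dict = {}  # explicit scene memory, persisted across subtasks

    def act(self, rgb: np.ndarray, depth: np.ndarray, pose: np.ndarray, goal) -> int:
        # 1) Update the semantic map from the current observation.
        # 2) Localize the goal on the map, or pick an exploration frontier if it has not been seen.
        # 3) Plan a path on the map and return the next low-level action.
        raise NotImplementedError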

Results

Comparison of end-to-end RL and modular methods on seen and unseen categories.

How do agents perform on each modality?

Performance across modalities: We break down the performance of all three baselines by subtask modality: object category, language description, or image.

How important is memory for efficient navigation?

Usefulness of memory: We measure the drop in performance when no memory is maintained across subtasks, for both the Modular GOAT and SenseAct-NN monolithic RL baselines.

Do success and efficiency improve over time?

Average performance over time in a GOAT episode: We plot the success rate and SPL (Success weighted by Path Length) of memory-based baselines against the number of subtasks completed in a GOAT episode.
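
For reference, SPL is the standard Success weighted by Path Length metric (Anderson et al., 2018), averaged here over the N subtasks of an episode:

\mathrm{SPL} = \frac{1}{N} \sum_{i=1}^{N} S_i \, \frac{\ell_i}{\max(p_i, \ell_i)}

where S_i is the binary success indicator for subtask i, \ell_i is the shortest-path distance from the agent's position at the start of that subtask to the goal, and p_i is the length of the path the agent actually took.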

How robust are these methods to noise in goal specifications?

Robustness to noise: We break down the effect of noise in goal specifications on the performance of each baseline, by goal modality.

BibTeX

@misc{khanna2024goatbench,
      title={GOAT-Bench: A Benchmark for Multi-Modal Lifelong Navigation}, 
      author={Mukul Khanna* and Ram Ramrakhya* and Gunjan Chhablani and Sriram Yenamandra and Theophile Gervet and Matthew Chang and Zsolt Kira and Devendra Singh Chaplot and Dhruv Batra and Roozbeh Mottaghi},
      year={2024},
      eprint={2404.06609},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}