Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild

Conference on Robot Learning (CoRL) 2025


1National University of Singapore, 2University of Toronto,
3IIT-Dhanbad, 4Singapore Technologies Engineering

Visual search for bears in simulated Yosemite Valley (colored path = simulation path).

TL;DR

Search-TTA is a multimodal test-time adaptation framework that corrects poor VLM predictions caused by domain mismatch or limited training data. It supports various input modalities (e.g., image, text, sound) and planning methods (e.g., RL) to achieve efficient visual navigation and search in the wild.

AVS-Bench is a visual search dataset based on internet-scale ecological data that contains up to 380k training and 8k validation satellite images, each annotated with target locations and paired with a corresponding ground-level image, taxonomic label, and (for a subset) sound data.

Abstract

To perform outdoor autonomous visual navigation and search, a robot may leverage satellite imagery as a prior map. This can help inform high-level search and exploration strategies, even when such images lack sufficient resolution to allow for visual recognition of targets. However, there are limited training datasets of satellite images with annotated targets that are not directly visible. Furthermore, approaches that leverage large Vision Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search. To address these challenges, we introduce Search-TTA, a multimodal test-time adaptation framework with a flexible plug-and-play interface compatible with various input modalities (e.g., image, text, sound) and planning methods. First, we pretrain a satellite image encoder to align with CLIP's visual encoder to output probability distributions of target presence used for visual search. Second, our framework dynamically refines CLIP's predictions during search using a test-time adaptation mechanism. Through a novel feedback loop inspired by Spatial Poisson Point Processes, uncertainty-weighted gradient updates are used to correct potentially inaccurate predictions and improve search performance. To train and evaluate Search-TTA, we curate AVS-Bench, a visual search dataset based on internet-scale ecological data that contains up to 380k training and 8k validation images (in- and out-domain). We find that Search-TTA improves planner performance by up to 30.0%, particularly in cases with poor initial CLIP predictions due to limited training data. It also performs comparably with significantly larger VLMs, and achieves zero-shot generalization to unseen modalities. Finally, we deploy Search-TTA on a real UAV via hardware-in-the-loop testing, by simulating its operation within a large-scale simulation that provides onboard sensing.

Test-Time Adaptation Feedback

In this work, we use CLIP as our lightweight VLM, and first align a satellite image encoder to the same representation space as CLIP's vision encoder through patch-level contrastive learning. This enables the satellite image encoder to generate a score map by taking the cosine similarity between its per-patch embeddings and the embeddings of other modalities (e.g., ground image, text, sound). We then introduce a novel test-time adaptation feedback mechanism to refine CLIP's predictions. To achieve this, we take inspiration from Spatial Poisson Point Processes to perform gradient updates to the satellite image encoder based on past measurements. We also enhance the loss function with an uncertainty-driven weighting scheme that acts as a regularizer to ensure stable gradient updates.
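The sketch below illustrates the two ideas above in a minimal form: a cosine-similarity score map over satellite patches, and one uncertainty-weighted gradient step derived from a Poisson-style likelihood over visited patches. Names such as `sat_encoder`, `query_emb`, the sigmoid mapping, and the exact loss form are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the score map and TTA update (assumed interfaces, PyTorch).
import torch
import torch.nn.functional as F

def score_map(sat_encoder, sat_image, query_emb):
    """Cosine similarity between per-patch satellite embeddings and a query embedding."""
    patch_emb = sat_encoder(sat_image)           # (N_patches, D) per-patch embeddings (assumed output)
    patch_emb = F.normalize(patch_emb, dim=-1)
    query_emb = F.normalize(query_emb, dim=-1)   # (D,) frozen CLIP embedding of image/text/sound query
    sims = patch_emb @ query_emb                 # (N_patches,) cosine similarities
    return torch.sigmoid(sims)                   # map to (0, 1) target-presence scores (assumption)

def tta_step(sat_encoder, sat_image, query_emb, visited_idx, detections, weights, optimizer):
    """One uncertainty-weighted update, loosely following a spatial Poisson point
    process likelihood: visited patches with no detection push the predicted rate
    down, detections push it up. `weights` down-weight uncertain measurements."""
    rates = score_map(sat_encoder, sat_image, query_emb)
    lam = rates[visited_idx].clamp_min(1e-6)     # predicted rates at visited patches
    nll = lam - detections * torch.log(lam)      # per-patch Poisson negative log-likelihood
    loss = (weights * nll).mean()                # uncertainty-weighted, regularized loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch only the satellite image encoder receives gradient updates, while the query embedding from the frozen CLIP branch stays fixed, matching the feedback loop described above.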

Search-TTA Framework

TTA Example

Regulated change in heatmap region probabilities as measurements are collected.

AVS-Bench

To validate Search-TTA, we curate AVS-Bench, a visual search dataset based on internet-scale ecological data. It comprises Sentinel-2 level 2A satellite images with unseen taxonomic targets from the iNat-2021 dataset, each tagged with a ground-level image and taxonomic label (some with sound data). One advantage of using ecological data is the hierarchical structure of taxonomic labels (seven distinct tiers), which facilitates baseline evaluation across various levels of specificity. AVS-Bench is diverse in geography and taxonomies to reflect in-the-wild scenarios. Our dataset offers 380k training and 8k validation images (in- and out-domain). Aside from Search-TTA, we use AVS-Bench to finetune LISA to output score maps and text explanations (demo link at the top).
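For intuition, a single AVS-Bench sample could be represented along these lines; the field names and file layout here are purely illustrative assumptions, not the released dataset schema.

```python
# Hypothetical representation of one AVS-Bench sample (illustrative only).
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class AVSBenchSample:
    satellite_image: str                  # path to a Sentinel-2 L2A image chip
    target_cells: List[Tuple[int, int]]   # grid cells containing targets not visible in the chip
    ground_image: str                     # paired iNat-2021 ground-level photo
    taxonomy: List[str]                   # 7-tier label, e.g. kingdom ... species
    sound: Optional[str] = None           # audio clip, present only for a subset of samples

# The hierarchical taxonomy allows querying baselines at different specificity,
# e.g. prompting with taxonomy[-1] (species) versus a coarser tier.
```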

AVS-Bench covers a diverse set of taxonomies across the world.

Results

Performance Analysis

TTA Performance Gain with Pretraining Data

VLM Inference Time Analysis

Multimodality

Emergent Alignment to Unseen Modalities

🤗 HuggingFace Demo


BibTeX

@inproceedings{tan2025searchtta,
  title        = {Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild},
  author       = {Derek Ming Siang Tan and Shailesh and Boyang Liu and Alok Raj and Qi Xuan Ang and Weiheng Dai and Tanishq Duhan and Jimmy Chiun and Yuhong Cao and Florian Shkurti and Guillaume Sartoretti},
  booktitle    = {Conference on Robot Learning},
  year         = {2025},
  organization = {PMLR}
}