Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild

1National University of Singapore, 2University of Toronto, 3IIT-Dhanbad, 4Singapore Technologies Engineering

Visual search for bears in a simulated Yosemite Valley environment (colored path = simulation path).

Abstract

To perform autonomous visual search for environmental monitoring, a robot may leverage satellite imagery as a prior map. This can help inform coarse, high-level search and exploration strategies, even when such images lack sufficient resolution for fine-grained, explicit visual recognition of targets. However, using satellite images to direct visual search poses several challenges. For one, targets that are unseen in satellite images are underrepresented in most existing datasets, so vision models trained on these datasets fail to reason effectively from indirect visual cues. Furthermore, approaches that leverage large Vision-Language Models (VLMs) for generalization may yield inaccurate outputs due to hallucination, leading to inefficient search.

To address these challenges, we introduce Search-TTA, a multimodal test-time adaptation framework that accepts text and/or image input. First, we pretrain a remote sensing image encoder to align with CLIP’s visual encoder, so that it outputs probability distributions of target presence used for visual search. Second, our framework dynamically refines CLIP’s predictions during search using a test-time adaptation mechanism. Through a feedback loop inspired by Spatial Poisson Point Processes, uncertainty-weighted gradient updates correct potentially inaccurate predictions and improve search performance. To validate Search-TTA's performance, we curate a visual search dataset based on internet-scale ecological data. We find that Search-TTA improves planner performance by up to 9.7%, particularly in cases with poor initial CLIP predictions, and achieves performance comparable to state-of-the-art VLMs. Finally, we deploy Search-TTA on a real UAV via hardware-in-the-loop testing, simulating its operation within a large-scale environment.

Approach

Search-TTA is a multimodal test-time adaptation framework that refines a VLM’s (potentially inaccurate) predictions online, using the agent’s measurements during visual search. In this work, we use CLIP as our lightweight VLM, and first align a satellite image encoder to the same representation space as CLIP’s vision encoder through patch-level contrastive learning. This enables the satellite image encoder to generate a score map by computing the cosine similarity between its per-patch embeddings and the embeddings of other modalities (e.g., text, ground image). We then introduce a novel test-time adaptation feedback mechanism that refines CLIP’s predictions as new measurements arrive. To achieve this, we take inspiration from Spatial Poisson Point Processes and perform gradient updates on the satellite image encoder based on the patches where measurements were taken. We also augment the loss function with an uncertainty-driven weighting scheme to keep gradient updates stable, especially at the beginning of the search process.
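
As a concrete illustration, the sketch below shows how a score map might be computed from per-patch embeddings, and how a Poisson-process-style, uncertainty-weighted test-time update could be applied over the patches the agent has measured. The function names (compute_score_map, tta_loss), the softmax normalization, and the scalar uncertainty weight are illustrative assumptions rather than the exact formulation used in the paper.

import torch
import torch.nn.functional as F

def compute_score_map(patch_embeds, query_embed):
    # Cosine similarity between per-patch satellite embeddings and a query
    # embedding (text or ground image), normalized into a probability map.
    patch_embeds = F.normalize(patch_embeds, dim=-1)      # (H, W, D)
    query_embed = F.normalize(query_embed, dim=-1)        # (D,)
    sims = torch.einsum("hwd,d->hw", patch_embeds, query_embed)
    return torch.softmax(sims.flatten(), dim=0).view_as(sims)

def tta_loss(score_map, visited_mask, detection_mask, weight):
    # Spatial-Poisson-style negative log-likelihood over measured patches:
    # reward predicted mass where targets were detected, penalize mass on
    # visited patches that came up empty. 'weight' is an uncertainty-driven
    # scalar (assumed small early in the search, larger as evidence grows).
    rate = score_map.clamp_min(1e-8)
    log_lik = (torch.log(rate) * detection_mask).sum() - (rate * visited_mask).sum()
    return -weight * log_lik

# Minimal usage sketch (random tensors stand in for the aligned encoders).
H, W, D = 16, 16, 512
patch_embeds = torch.randn(H, W, D, requires_grad=True)  # placeholder for sat_encoder(sat_img)
query_embed = torch.randn(D)                             # placeholder for CLIP text embedding of the target
optimizer = torch.optim.Adam([patch_embeds], lr=1e-4)    # in practice, the satellite encoder's parameters

visited = torch.zeros(H, W, dtype=torch.bool)
detected = torch.zeros(H, W, dtype=torch.bool)
visited[3, 4] = True                                     # agent measured patch (3, 4) and found nothing

score_map = compute_score_map(patch_embeds, query_embed)
loss = tta_loss(score_map, visited, detected, weight=0.1)
loss.backward()
optimizer.step()                                         # refined score map guides the next planning step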


Dataset

To validate Search-TTA's performance, we curate a dataset of satellite images tagged with the coordinates of multiple unseen taxonomic targets, drawn from internet-scale ecological data. More specifically, we use Sentinel-2 Level-2A satellite images tagged with coordinates from the iNat-2021 dataset. One advantage of using ecological data is the hierarchical structure of its taxonomic labels (seven distinct tiers), which facilitates baseline evaluation across various levels of specificity. In total, our dataset offers 437k training images and 4k validation images.
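
To make the dataset structure concrete, the sketch below shows what a single sample might look like and how a taxonomic tier could be selected for evaluation. The field names, file path, coordinates, and helper function are hypothetical, not the released schema; the seven tiers shown follow the standard kingdom-to-species hierarchy.

# Hypothetical structure of one dataset sample; the field names, file path,
# and coordinates below are illustrative, not the released schema.
sample = {
    "satellite_image": "sentinel2_l2a/tile_000123.tif",  # Sentinel-2 Level-2A patch (placeholder path)
    "target_coords": [(37.74, -119.57)],                 # iNat-2021 observation coordinates
    "taxonomy": {                                        # seven-tier taxonomic label
        "kingdom": "Animalia",
        "phylum": "Chordata",
        "class": "Mammalia",
        "order": "Carnivora",
        "family": "Ursidae",
        "genus": "Ursus",
        "species": "Ursus americanus",
    },
}

def label_at_tier(sample, tier):
    # Select the label at a chosen taxonomic tier, e.g. to evaluate baselines
    # at coarser ("family") or finer ("species") levels of specificity.
    return sample["taxonomy"][tier]

print(label_at_tier(sample, "species"))  # -> Ursus americanus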


Examples

More coming soon.


BibTeX

@article{tan2025searchtta,
      author    = {Tan, Derek Ming Siang and {Shailesh} and Liu, Boyang and Raj, Alok and Ang, Qi Xuan and Dai, Weiheng and Duhan, Tanishq and Chiun, Jimmy and Cao, Yuhong and Shkurti, Florian and Sartoretti, Guillaume},
      title     = {Search-TTA: A Multimodal Test-Time Adaptation Framework for Visual Search in the Wild},
      journal   = {Under Review},
      year      = {2025},
}