CUCAI 2025 Archive

Enhancing Self-Driving Segmentation in Adverse Weather Conditions: A Dual Uncertainty-Aware Training Approach to SAM Optimization

Dharsan Ravindran, Kevin Wang, Zhuoyuan Cao, Saleh Abdelrahman, Jeffery Wu

CUCAI 2025 Proceedings, 2025

Published 2025/03/26

Abstract

Recent advancements in vision foundation models such as the Segment Anything Model (SAM) and its successor SAM2 have established new state-of-the-art benchmarks for image segmentation tasks. However, these models often fail in inclement weather, where visual ambiguity is prevalent, primarily because they lack uncertainty quantification capabilities. Drawing inspiration from recent successes in medical imaging, where uncertainty-aware training has shown considerable promise in handling ambiguous cases, we explore two approaches to enhancing segmentation performance in adverse driving conditions. First, we implement a multi-step finetuning process for SAM2 that incorporates uncertainty metrics directly into the loss function (1) to improve overall scene recognition. Second, we adapt the Uncertainty-Aware Adapter (UAT), originally developed for medical image segmentation (2), to autonomous driving contexts. We evaluate these approaches on three diverse datasets: CamVid (1, 2), BDD100K (1), and GTA driving (1). Our experimental results demonstrate that UAT-SAM outperforms standard SAM in extreme weather scenarios, while the finetuned SAM2 with an uncertainty-aware loss shows improved performance across overall driving scenes. These findings highlight the importance of explicit uncertainty modeling in safety-critical autonomous driving applications, particularly in challenging environmental conditions.
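To give a sense of what "incorporating uncertainty metrics directly into the loss function" can mean in practice, the sketch below shows one common formulation: a per-pixel binary cross-entropy term augmented with a predictive-entropy penalty, so that confident-but-ambiguous predictions are discouraged. This is a minimal illustration assuming an entropy-based uncertainty term and a weighting coefficient `lam`; the paper's actual loss is not specified in the abstract, and the function names here are hypothetical.

```python
import numpy as np

def predictive_entropy(p, eps=1e-8):
    """Per-pixel binary predictive entropy of foreground probabilities p.

    High entropy (p near 0.5) signals an ambiguous pixel, e.g. a lane
    marking obscured by rain or fog.
    """
    return -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))

def uncertainty_aware_bce(p, y, lam=0.5, eps=1e-8):
    """Binary cross-entropy plus an entropy penalty (illustrative only).

    p   : predicted foreground probabilities, array in (0, 1)
    y   : ground-truth binary mask, same shape as p
    lam : hypothetical weight on the uncertainty term
    """
    bce = -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    return float(np.mean(bce + lam * predictive_entropy(p, eps)))

# Confident, correct predictions incur a lower loss than hesitant ones:
y = np.array([1.0, 0.0])
loss_confident = uncertainty_aware_bce(np.array([0.99, 0.01]), y)
loss_uncertain = uncertainty_aware_bce(np.array([0.60, 0.40]), y)
```

In an actual finetuning pipeline this scalar would replace or augment the segmentation loss used to update SAM2's mask decoder; the key design choice is that the penalty pushes the model to resolve ambiguity rather than output borderline probabilities on safety-critical pixels.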