Modular Sensory Stream for Integrating Physical Feedback in Vision-Language-Action Models

Jimin Lee1, Huiwon Jang1,2, Myungkyu Koo1,2, Jungwoo Park3, Jinwoo Shin1,2
1KAIST, 2RLWRLD, 3Seoul National University

Abstract

Humans understand and interact with the real world by relying on diverse physical feedback beyond visual perception. Motivated by this, recent approaches attempt to incorporate physical sensory signals into Vision-Language-Action models (VLAs). However, they typically focus on a single type of physical signal, failing to capture the heterogeneous and complementary nature of real-world interactions. In this paper, we propose MoSS, a modular sensory stream framework that adapts VLAs to leverage multiple sensory signals for action prediction. Specifically, we introduce decoupled modality streams that integrate heterogeneous physical signals into the action stream via joint cross-modal self-attention. To enable stable incorporation of new modalities, we adopt a two-stage training scheme that freezes pretrained VLA parameters in the early stage. Furthermore, to better capture contact interaction dynamics, we incorporate an auxiliary task that predicts future physical signals. Through extensive real-world experiments, we demonstrate that MoSS successfully augments VLAs to leverage diverse physical signals (i.e., tactile and torque), and that integrating multiple signals yields synergistic performance gains.
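
To make the fusion mechanism concrete, below is a minimal PyTorch sketch of joint cross-modal self-attention over an action stream and decoupled sensory streams. It assumes all tokens are pre-embedded to a shared hidden size and simply concatenates the streams before a standard self-attention block, reading back the action positions afterward; the class and parameter names (e.g., JointCrossModalBlock, hidden_dim) are illustrative and not taken from the paper.

```python
# Hedged sketch: one joint cross-modal self-attention block, not the paper's
# exact architecture. Assumes action and sensory tokens share a hidden size.
import torch
import torch.nn as nn


class JointCrossModalBlock(nn.Module):
    """Fuse action tokens with heterogeneous sensory token streams."""

    def __init__(self, hidden_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, 4 * hidden_dim),
            nn.GELU(),
            nn.Linear(4 * hidden_dim, hidden_dim),
        )

    def forward(self, action_tokens, modality_streams):
        # Concatenate action tokens with every sensory stream along the sequence axis,
        # so attention can mix information across all modalities jointly.
        tokens = torch.cat([action_tokens, *modality_streams], dim=1)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)
        tokens = tokens + self.mlp(tokens)
        # Read back only the action-token positions for downstream action prediction.
        return tokens[:, : action_tokens.shape[1]]


# Toy usage: 8 action tokens fused with hypothetical tactile and torque streams.
block = JointCrossModalBlock()
action = torch.randn(1, 8, 256)
tactile = torch.randn(1, 16, 256)
torque = torch.randn(1, 4, 256)
fused_action = block(action, [tactile, torque])
print(fused_action.shape)  # torch.Size([1, 8, 256])
```

In the two-stage scheme described above, one would freeze the pretrained VLA parameters early on and train only newly added modules such as this fusion block, before unfreezing for joint fine-tuning; the auxiliary objective of predicting future physical signals can be attached as an extra head on the fused tokens.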

Results

TBD

BibTeX

@misc{lee2026moss,
      title={Modular Sensory Stream for Integrating Physical Feedback in Vision-Language-Action Models},
      author={Jimin Lee and Huiwon Jang and Myungkyu Koo and Jungwoo Park and Jinwoo Shin},
      year={2026},
      eprint={2604.23272},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2604.23272},
}