Synthetic aperture radar (SAR) ship classification is crucial for maritime surveillance. Most existing methods focus primarily on visual or polarimetric features and are often constrained by a limited feature set, as well as by challenges in data diversity and multimodal information integration. This study introduces a text-enhanced multimodal framework for SAR ship classification (TeMSC), an extensible and unified approach that integrates multimodal information related to SAR ships. It consists of a text-form geometry information embedding, a polarization and visual information embedding, and a multimodal prediction module. By incorporating ship geometry information in text format, TeMSC leverages text representations to enhance feature expressiveness, compensating for the limited discriminative power of traditional visual and polarization features, especially in low-resolution scenarios.
TeMSC effectively processes complementary multimodal information through a multimodal prediction module while avoiding the complexity of traditional decision-level feature fusion strategies. In addition, a classification token mechanism is introduced to streamline the classification process. Through a two-stage training strategy, TeMSC captures information across multiple SAR datasets, enhancing its generalization and adaptability. Extensive experiments on the FUSAR-Ship and OpenSARShip datasets demonstrate the superior performance of TeMSC and highlight the benefits of multimodal integration for SAR ship classification. TeMSC provides a foundation for future research on SAR-focused multimodal learning applications.
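The overall data flow described above — embedding text-form geometry, visual, and polarization information, prepending a classification token, and predicting from its pooled state — can be sketched structurally as follows. This is a minimal toy illustration, not the authors' implementation: the `embed` function, the mean-pooling stand-in for the multimodal prediction module, and the linear head weights are all hypothetical placeholders.

```python
import random

DIM = 8  # toy embedding dimension (hypothetical)

def embed(tokens):
    """Toy embedding: map each token to a deterministic pseudo-random vector.
    A stand-in for the learned embedding layers in TeMSC."""
    vecs = []
    for t in tokens:
        rng = random.Random(sum(ord(c) for c in t))  # stable per-token seed
        vecs.append([rng.uniform(-1.0, 1.0) for _ in range(DIM)])
    return vecs

def temsc_forward(geometry_text, visual_patches, polarization_feats, n_classes=3):
    """Structural sketch: a [CLS] token is prepended to the concatenated
    text, visual, and polarization token sequences, and the pooled [CLS]
    state is mapped to a class index (all weights here are toy values)."""
    cls_token = [0.0] * DIM                 # learnable in a real model
    tokens = [cls_token]
    tokens += embed(geometry_text.split())  # text-form geometry embedding
    tokens += visual_patches                # visual information embedding
    tokens += polarization_feats            # polarization information embedding
    # Stand-in for the multimodal prediction module: mean-pool all tokens.
    pooled = [sum(v[d] for v in tokens) / len(tokens) for d in range(DIM)]
    # Toy classification head on the pooled [CLS]-style representation.
    head = [[(i + 1) * 0.1] * DIM for i in range(n_classes)]
    scores = [sum(w * p for w, p in zip(row, pooled)) for row in head]
    return scores.index(max(scores))

# Example inputs (hypothetical token names)
patches = embed(["patch0", "patch1"])
pol = embed(["VV", "VH"])
pred = temsc_forward("length 120 m width 20 m", patches, pol)
```

The sketch only shows the interface: three modality streams enter one shared token sequence, so fusion happens inside a single module rather than through separate per-modality classifiers combined at decision level.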