Understanding signboard text in natural scenes is essential for real-world applications of Visual Question Answering (VQA), yet remains underexplored, particularly in low-resource languages. We introduce ViSignVQA, the first large-scale Vietnamese dataset designed for signboard-oriented VQA, comprising 10,762 images and 25,573 question-answer pairs. The dataset captures the diverse linguistic, cultural, and visual characteristics of Vietnamese signboards, including bilingual text, informal phrasing, and visual elements such as color and layout. To benchmark this task, we adapt state-of-the-art VQA models (e.g., BLIP-2, LaTr, PreSTU, and SaL) by integrating a Vietnamese OCR model (SwinTextSpotter) and a Vietnamese pretrained language model (ViT5). Experimental results highlight the significant role of OCR-enhanced context, with F1-score improvements of up to 209% when OCR text is appended to questions. Additionally, we propose a multi-agent VQA framework that combines perception and reasoning agents with GPT-4, achieving 75.98% accuracy via majority voting. Our study presents the first large-scale multimodal dataset for Vietnamese signboard understanding, underscoring the importance of domain-specific resources in enhancing text-based VQA for low-resource languages. ViSignVQA serves as a benchmark that captures real-world scene-text characteristics and supports the development and evaluation of OCR-integrated VQA models in Vietnamese.
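To make the two mechanisms named in the abstract concrete, the following is a minimal Python sketch of (i) appending OCR-spotted signboard text to a question before it reaches the VQA model and (ii) majority voting over multiple agents' answers. The prompt template, the `[OCR]` separator token, and the tie-breaking rule are illustrative assumptions, not the paper's exact implementation.

```python
from collections import Counter


def build_ocr_augmented_question(question: str, ocr_tokens: list[str]) -> str:
    """Append OCR-detected signboard text to the question.

    The template and separator here are assumptions; the paper may use a
    different format when feeding OCR context to the language model.
    """
    if not ocr_tokens:
        return question
    return f"{question} [OCR] {' '.join(ocr_tokens)}"


def majority_vote(agent_answers: list[str]) -> str:
    """Return the most frequent answer across agents.

    Ties are broken by first occurrence (the tie-breaking rule is an
    assumption, not taken from the paper).
    """
    counts = Counter(a.strip().lower() for a in agent_answers)
    return counts.most_common(1)[0][0]


if __name__ == "__main__":
    # Hypothetical example: a Vietnamese question about a signboard, with
    # OCR tokens recovered by a text spotter such as SwinTextSpotter.
    question = "Cửa hàng này bán gì?"
    ocr = ["PHỞ", "BÒ", "HÀ", "NỘI"]
    print(build_ocr_augmented_question(question, ocr))

    # Hypothetical answers from perception/reasoning agents and GPT-4.
    print(majority_vote(["phở bò", "phở bò", "bún chả"]))  # -> "phở bò"
```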