CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning

1Meta, 2Boston University
2025

Work done during an internship at Meta

Capabilities of various vision-language models. While encoder-based models, e.g., CLIP, excel in generating vision-text aligned embeddings and show promising results in image-text retrieval, they fall short in producing free-form text and reasoning about retrieved images (left). Conversely, Multimodal Large Language Models (MLLMs) have shown remarkable success in multimodal understanding and generation, but their direct embeddings yield suboptimal retrieval results (middle). CAFe effectively bridges this gap by integrating representation learning and language generation, enabling not only retrieval but also advanced generative capabilities (right).

Abstract

The rapid advancement of large vision-language models (LVLMs) has driven significant progress in multimodal tasks, enabling models to interpret, reason, and generate outputs across both visual and textual domains. While excelling in generative tasks, existing LVLMs often face limitations in tasks requiring high-fidelity representation learning, such as generating image or text embeddings for retrieval. Recent work has proposed finetuning LVLMs for representation learning, but the finetuned models often lose their generative capabilities because the representation-learning training paradigm overrides them. To address this trade-off, we introduce CAFe, a contrastive-autoregressive finetuning framework that enhances LVLMs for both representation and generative tasks. By integrating a contrastive objective with autoregressive language modeling, our approach unifies these traditionally separate tasks, achieving state-of-the-art results on both multimodal retrieval and multimodal generative benchmarks, including object hallucination (OH) mitigation. CAFe establishes a novel framework that synergizes embedding and generative functionalities in a single model, setting a foundation for future multimodal models that excel in both retrieval precision and coherent output generation.
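To make the combined objective concrete, the sketch below shows one common way to pair an autoregressive language-modeling loss with a symmetric InfoNCE contrastive loss over image and text embeddings. This is a minimal NumPy illustration, not the paper's implementation: the weighting factor `lam`, the temperature value, and the simple weighted-sum combination are all assumptions for illustration.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.
    Matched pairs sit on the diagonal of the similarity matrix."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature

    def xent(l):
        # Row-wise cross-entropy against the diagonal (the matched pair)
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

def autoregressive_nll(lm_logits, targets):
    """Token-level negative log-likelihood for next-token prediction."""
    l = lm_logits - lm_logits.max(axis=-1, keepdims=True)
    logp = l - np.log(np.exp(l).sum(axis=-1, keepdims=True))
    return -np.mean(logp[np.arange(len(targets)), targets])

def cafe_loss(img_emb, txt_emb, lm_logits, targets, lam=1.0):
    """Joint objective: language modeling plus a contrastive term.
    `lam` is a hypothetical balancing weight, not a value from the paper."""
    return autoregressive_nll(lm_logits, targets) + lam * info_nce(img_emb, txt_emb)
```

In this toy setup, correctly paired embeddings yield a lower contrastive term than mismatched ones, while the autoregressive term preserves the generation objective; training on the sum is what lets a single model serve both roles.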

BibTeX

@article{yu2025cafe,
  title={CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning},
  author={Yu, Hao and Zhao, Zhuokai and Yan, Shen and Korycki, Lukasz and Wang, Jianyu and He, Baosheng and Liu, Jiayi and Zhang, Lizhu and Fan, Xiangjun and Yu, Hanchao},
  journal={arXiv preprint arXiv:2503.19900},
  year={2025}
}