Recent advances in text-to-video diffusion models have enabled high-quality video synthesis, but controllable generation remains challenging, particularly under limited data and compute. Existing fine-tuning methods for conditional generation often rely on external encoders or architectural modifications, which demand large datasets and are typically restricted to spatially aligned conditioning, limiting flexibility and scalability. In this work, we introduce Temporal In-Context Fine-Tuning (TIC-FT), an efficient and versatile approach for adapting pretrained video diffusion models to diverse conditional generation tasks. Our key idea is to concatenate condition and target frames along the temporal axis and insert intermediate buffer frames with progressively increasing noise levels. These buffer frames enable smooth transitions, aligning the fine-tuning process with the pretrained model's temporal dynamics. TIC-FT requires no architectural changes and achieves strong performance with as few as 10-30 training samples. We validate our method across a range of tasks, including image-to-video and video-to-video generation, using large-scale base models such as CogVideoX-5B and Wan-14B. Extensive experiments show that TIC-FT outperforms existing baselines in both condition fidelity and visual quality, while remaining highly efficient in both training and inference. For additional results, visit https://kinam0252.github.io/TIC-FT/
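The abstract's key idea (temporal concatenation of condition and target frames with intermediate buffer frames at progressively increasing noise levels) can be illustrated with a small sketch. The following is a minimal, hypothetical Python/PyTorch example assuming a DDPM-style forward process, a linear ramp of per-frame noise over the buffer frames, and illustrative frame counts; none of these specifics come from the paper, and the actual TIC-FT implementation may differ.

```python
# Minimal sketch of the temporal in-context layout described in the abstract.
# Assumptions (not from the paper): frame counts, a DDPM-style forward process,
# and a linear ramp of per-frame noise levels across the buffer frames.
import torch

def add_noise(x0, eps, alpha_bar):
    """DDPM-style forward process: x_t = sqrt(a) * x0 + sqrt(1 - a) * eps."""
    return alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * eps

def build_tic_ft_input(cond, target, num_buffer=4, t_target=0.9):
    """
    cond:   (C, c, h, w) condition latent frames, kept nearly clean.
    target: (T, c, h, w) target latent frames, noised at the sampled level.
    Returns a (C + num_buffer + T, c, h, w) sequence concatenated along the
    temporal axis, with buffer frames whose noise level ramps from the
    condition level up to the target level.
    """
    C = cond.shape[0]
    # Buffer frames reuse the last condition frame as content (an illustrative choice).
    buffer = cond[-1:].repeat(num_buffer, 1, 1, 1)
    frames = torch.cat([cond, buffer, target], dim=0)
    eps = torch.randn_like(frames)

    # Per-frame noise levels: ~0 for condition frames, a linear ramp over the
    # buffer frames, and t_target for the target frames.
    ramp = torch.linspace(0.0, t_target, num_buffer + 2)[1:-1]
    t = torch.cat([torch.zeros(C), ramp, torch.full((target.shape[0],), t_target)])
    alpha_bar = (1.0 - t).clamp(min=1e-4).view(-1, 1, 1, 1)  # toy schedule: alpha_bar = 1 - t
    return add_noise(frames, eps, alpha_bar), t

# Example: 8 condition frames, 4 buffer frames, 16 target frames in a toy latent space.
cond = torch.randn(8, 16, 32, 32)
target = torch.randn(16, 16, 32, 32)
noisy_seq, noise_levels = build_tic_ft_input(cond, target)
print(noisy_seq.shape, noise_levels)
```

Because the layout only changes how the input sequence is assembled and noised, the pretrained model's architecture is left untouched, which is consistent with the abstract's claim that TIC-FT requires no architectural changes.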