arxiv:2603.24800

Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration

Published on Mar 25 · Submitted by Konstantin Sobolev on Mar 27

AI-generated summary

Diffusion Transformers can be enhanced through a parameter-efficient calibration approach that improves generative quality while reducing the number of inference steps.

Abstract

In this paper, we uncover the hidden potential of Diffusion Transformers (DiTs) to significantly enhance generative tasks. Through an in-depth analysis of the denoising process, we demonstrate that introducing a single learned scaling parameter can significantly improve the performance of DiT blocks. Building on this insight, we propose Calibri, a parameter-efficient approach that optimally calibrates DiT components to elevate generative quality. Calibri frames DiT calibration as a black-box reward optimization problem, which is solved efficiently with an evolutionary algorithm while modifying only ~100 parameters. Experimental results reveal that despite its lightweight design, Calibri consistently improves performance across various text-to-image models. Notably, Calibri also reduces the number of inference steps required for image generation, all while maintaining high-quality outputs.
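
In outline, the calibration loop described in the abstract is simple: each DiT block's output is rescaled by a learned scalar, and the resulting vector of roughly 10² scales is tuned against a black-box reward with an evolutionary search. The sketch below illustrates that idea with a plain (1+λ) evolution strategy; it is not the authors' implementation, and the block count, hyperparameters, and reward function are hypothetical placeholders (a real setup would generate images with the rescaled DiT and score them with a preference or aesthetic model).

# Minimal sketch: black-box calibration of per-block scaling factors
# with a (1+lambda) evolution strategy. NUM_BLOCKS, the hyperparameters,
# and reward() are placeholders, not the paper's actual API.
import numpy as np

NUM_BLOCKS = 28      # one scale per DiT block -> on the order of 10^2 parameters
POP_SIZE = 16        # candidates evaluated per generation
SIGMA = 0.05         # mutation strength
GENERATIONS = 50

def reward(scales: np.ndarray) -> float:
    # Placeholder reward. In practice this would (1) inject `scales` as
    # per-block output multipliers into a frozen DiT, (2) generate images
    # for a fixed prompt set, and (3) score them with an aesthetic or
    # preference model. Here we just reward proximity to a made-up optimum.
    target = np.linspace(0.9, 1.1, NUM_BLOCKS)
    return -float(np.mean((scales - target) ** 2))

rng = np.random.default_rng(0)
best = np.ones(NUM_BLOCKS)          # start from the identity calibration
best_r = reward(best)

for _ in range(GENERATIONS):
    # Mutate the current elite with Gaussian noise and keep the best candidate.
    candidates = best + SIGMA * rng.standard_normal((POP_SIZE, NUM_BLOCKS))
    rewards = np.array([reward(c) for c in candidates])
    i = int(rewards.argmax())
    if rewards[i] > best_r:
        best, best_r = candidates[i], rewards[i]

print("calibrated per-block scales:", np.round(best, 3))

Since every reward evaluation would mean generating and scoring images, the population size and number of generations are the main compute knobs in a setup like this.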

Community

Paper author · Paper submitter

Introducing Calibri – a parameter-efficient method for diffusion transformer alignment. By optimizing only ∼10² parameters, Calibri substantially improves generation quality while reducing inference time.

Get this paper in your agent:

hf papers read 2603.24800
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 2

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 0