TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities
Abstract
TencentPretrain is a modular toolkit for pre-training models across different modalities, enabling efficient reproduction and creation of new models that match original performance.
Recently, the success of pre-training in the text domain has been extended to vision, audio, and cross-modal scenarios. The pre-training models proposed for these different modalities show a growing homogeneity in model structure, which brings the opportunity to implement them within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided within each component, users can choose the desired modules from different components to build a complete pre-training model. This modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
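The five-component decomposition can be illustrated with a minimal sketch. This is not TencentPretrain's actual API; the class and component names below are hypothetical, and the components are stand-in functions rather than real neural modules. The point is the composition pattern: an encoder-only model (e.g. BERT-style) fills only the embedding, encoder, and target slots, while a seq2seq model would additionally plug in a target embedding and decoder.

```python
from typing import Callable, List, Optional

class PretrainModel:
    """Hypothetical sketch of composing a pre-training model from the five
    component slots described in the paper: embedding, encoder, target
    embedding, decoder, and target."""

    def __init__(self,
                 embedding: Callable,
                 encoder: Callable,
                 target: Callable,
                 target_embedding: Optional[Callable] = None,
                 decoder: Optional[Callable] = None):
        self.embedding = embedding            # maps token ids to vectors
        self.encoder = encoder                # contextualizes the vectors
        self.target_embedding = target_embedding  # only for seq2seq models
        self.decoder = decoder                    # only for seq2seq models
        self.target = target                  # pre-training objective head

    def forward(self, tokens: List[int]):
        hidden = self.encoder(self.embedding(tokens))
        if self.decoder is not None and self.target_embedding is not None:
            hidden = self.decoder(self.target_embedding(tokens), hidden)
        return self.target(hidden)

# Toy stand-in components: simple numeric transforms, not real layers.
def word_embedding(tokens): return [float(t) for t in tokens]
def transformer_encoder(vecs): return [v + 0.5 for v in vecs]
def mlm_target(hidden): return sum(hidden)

# An encoder-only model uses just three of the five slots.
bert_like = PretrainModel(word_embedding, transformer_encoder, mlm_target)
print(bert_like.forward([1, 2, 3]))  # 1.5 + 2.5 + 3.5 = 7.5
```

Swapping any one slot (say, a patch embedding and a speech target) yields a model for another modality without touching the remaining components, which is the reuse the modular design is meant to enable.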
arXiv: 2212.06385