🚀 MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining

Jacob Portes**, Alex Trott*, Sam Havens, Daniel King, Abhinav Venigalla, Moin Nadeem, Nikhil Sardana, Daya Khudia, Jonathan Frankle

MosaicML x Databricks
**jacob.portes@databricks.com, *equal contribution
Links: Blogpost · Paper · GitHub · Hugging Face · Colab

TL;DR: How to Speed Up Transformer Pretraining

MosaicBERT is a custom BERT architecture optimized for fast pretraining. This study motivated many of the architecture choices behind MosaicML's MPT-7B and MPT-30B models. These are the main architectural modifications MosaicBERT uses for rapid pretraining 👇
  • FlashAttention
  • Attention with Linear Biases (ALiBi) in place of learned positional embeddings
  • Gated Linear Units (GLU) in the feedforward layers
  • Dynamic removal of padded tokens (unpadding)
  • Low precision LayerNorm
And here are a few more efficiency tips used by MosaicBERT:
  • Change the Masked Language Modeling ratio to 30% (instead of the default 15%)
  • Remove dropout from the attention module (dropout often slows things down)
  • Use bfloat16!
  • Make your vocab size a multiple of 64 (Andrej Karpathy says so!)
Every ML practitioner pretraining or finetuning transformers should consider using these modifications in their own stack. Note that all of them (except for the MLM masking ratio) also apply to decoder architectures such as GPT and MPT.
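As a rough illustration of the training-recipe tips above, here is a minimal sketch using Hugging Face transformers and PyTorch. The classes and arguments (BertConfig, DataCollatorForLanguageModeling, torch.autocast) are standard library APIs, but the particular values and the pad_to_multiple helper are illustrative assumptions, not the exact MosaicBERT training code.

```python
# Minimal sketch of the efficiency tips above (assumes a CUDA GPU; not the MosaicBERT source).
import torch
from transformers import (
    BertConfig, BertForMaskedLM, BertTokenizerFast, DataCollatorForLanguageModeling
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def pad_to_multiple(n: int, multiple: int = 64) -> int:
    """Round the vocab size up to a multiple of 64 for better GPU throughput."""
    return ((n + multiple - 1) // multiple) * multiple

config = BertConfig(
    vocab_size=pad_to_multiple(tokenizer.vocab_size),  # vocab size as a multiple of 64
    attention_probs_dropout_prob=0.0,                  # remove dropout from the attention module
)
model = BertForMaskedLM(config).cuda()

# 30% masking ratio for the MLM objective (instead of the default 15%)
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.30
)

# bfloat16 mixed precision for the forward/backward pass
batch = collator([tokenizer("MosaicBERT pretrains fast.")])
batch = {k: v.cuda() for k, v in batch.items()}
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(**batch).loss
loss.backward()
```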

Abstract

Although BERT-style encoder models are heavily used in NLP research, many researchers do not pretrain their own BERTs from scratch due to the high cost of training. In the past half-decade since BERT first rose to prominence, many advances have been made with other transformer architectures and training configurations that have yet to be systematically incorporated into BERT. Here, we introduce MosaicBERT, a BERT-style encoder architecture and training recipe that is empirically optimized for fast pretraining. This efficient architecture incorporates FlashAttention, Attention with Linear Biases (ALiBi), Gated Linear Units (GLU), a module to dynamically remove padded tokens, and low precision LayerNorm into the classic transformer encoder block. The training recipe includes a 30% masking ratio for the Masked Language Modeling (MLM) objective, bfloat16 precision, and a vocabulary size optimized for GPU throughput, in addition to best practices from RoBERTa and other encoder models. When pretrained from scratch on the C4 dataset, this base model achieves a downstream average GLUE score of 79.6 in 1.13 hours on 8 A100 80 GB GPUs at a cost of roughly $20. We plot extensive accuracy vs. pretraining speed Pareto curves and show that MosaicBERT base and large are consistently Pareto optimal when compared to a competitive BERT base and large. This empirical speed-up in pretraining enables researchers and engineers to pretrain custom BERT-style models at low cost instead of finetuning existing generic models. We open-source our model weights and code.
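To make the ALiBi component concrete, below is a minimal sketch of how a symmetric (bidirectional) ALiBi bias can be added to encoder self-attention scores. It assumes the number of heads is a power of two; the function names and the plain-PyTorch attention are illustrative choices, not the exact MosaicBERT implementation (which fuses ALiBi with FlashAttention).

```python
# Sketch of symmetric ALiBi biases for bidirectional (encoder) self-attention.
# Assumes num_heads is a power of two; illustrative only, not the MosaicBERT source.
import math
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    """Geometric sequence of per-head slopes from the ALiBi paper (heads a power of 2)."""
    start = 2 ** (-8.0 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Bias[h, i, j] = -slope_h * |i - j|, symmetric so it works for bidirectional attention."""
    positions = torch.arange(seq_len)
    distances = (positions[None, :] - positions[:, None]).abs()   # (seq, seq)
    return -alibi_slopes(num_heads)[:, None, None] * distances    # (heads, seq, seq)

# Example: add the bias to raw attention scores for a tiny batch.
batch, heads, seq, head_dim = 2, 8, 16, 32
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
v = torch.randn(batch, heads, seq, head_dim)

scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)  # (batch, heads, seq, seq)
scores = scores + alibi_bias(heads, seq)                 # broadcast over the batch dim
out = torch.softmax(scores, dim=-1) @ v                  # no learned positional embeddings needed
```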

Main Result

Figure 1 (main result). (A) Schematic of the MosaicBERT architecture. (B) Pareto curves of average GLUE (dev) scores for MosaicBERT-Base and the standard BERT-Base. Error bars indicate the 95% confidence interval over n=5 pretraining seeds. All training was on 8x A100 80 GB GPUs. FlashAttention schematic adapted from Dao et al. [2022]; unpadding schematic adapted from Zeng et al. [2022].

Citation

@article{portes2023MosaicBERT,
  title={MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining},
  author={Portes, Jacob and Trott, Alexander R and Havens, Sam and King, Daniel and Venigalla, Abhinav
  and Nadeem, Moin and Sardana, Nikhil and Khudia, Daya and Frankle, Jonathan},
  journal={Advances in Neural Information Processing Systems},
  url={https://openreview.net/pdf?id=5zipcfLC2Z},
  year={2023},
}