How to Fine-Tune BERT for Text Classification?

Published in The Eighteenth China National Conference on Computational Linguistics, 2019

Recommended citation: Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang: How to Fine-Tune BERT for Text Classification? CCL 2019: 194-206 http://xuanjing-huang.github.io/files/bert-ft.pdf

Language model pre-training has proven to be useful for learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved impressive results on many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification tasks and provide a general solution for BERT fine-tuning. The proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
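
For readers who want a concrete starting point, the sketch below fine-tunes a pre-trained BERT model for binary text classification with the Hugging Face transformers library. It is a minimal illustration only: the `bert-base-uncased` checkpoint, the toy in-memory dataset, and the hyperparameters are illustrative assumptions, not the configurations studied in the paper.

```python
# Minimal sketch of BERT fine-tuning for text classification.
# Assumptions (not from the paper): bert-base-uncased, a toy dataset, lr=2e-5, 3 epochs.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

texts = ["a gripping, well-acted thriller", "dull and far too long"]  # toy examples
labels = [1, 0]                                                       # 1 = positive, 0 = negative

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class TextDataset(Dataset):
    """Tokenizes raw texts once and serves (input_ids, attention_mask, labels) items."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

loader = DataLoader(TextDataset(texts, labels), batch_size=2, shuffle=True)
optimizer = AdamW(model.parameters(), lr=2e-5)  # small learning rate, typical for BERT fine-tuning

model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        outputs = model(**batch)  # the model computes cross-entropy loss when labels are passed
        outputs.loss.backward()
        optimizer.step()
```

In practice, the same loop applies to real datasets by replacing the toy lists with loaded text/label pairs and adding an evaluation step on a held-out split.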

Download paper here: http://xuanjing-huang.github.io/files/bert-ft.pdf