K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters

Published in Findings of the Association for Computational Linguistics: ACL-IJCNLP, 2021

Recommended citation: Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. Findings of ACL-IJCNLP 2021: 1405-1418. http://xuanjing-huang.github.io/files/K-Adapter.pdf

We study the problem of injecting knowledge into large pre-trained models such as BERT and RoBERTa. Existing methods typically update the original parameters of the pre-trained model when injecting knowledge. However, when multiple kinds of knowledge are injected, previously injected knowledge can be flushed away. To address this, we propose K-Adapter, a framework that keeps the original parameters of the pre-trained model fixed and supports the development of versatile knowledge-infused models.
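The core idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the `Adapter` and `KnowledgeInfusedModel` class names, the bottleneck sizes, and the simple concatenation of features are assumptions made here for brevity; the point it shows is that the backbone's parameters stay frozen while each kind of knowledge gets its own trainable adapter.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel


class Adapter(nn.Module):
    """A small bottleneck module trained on one kind of knowledge.
    Layer sizes are illustrative, not the paper's configuration."""
    def __init__(self, hidden_size=768, bottleneck=128):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # Residual connection keeps the frozen model's representation intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


class KnowledgeInfusedModel(nn.Module):
    """Frozen pre-trained backbone plus independently trained adapters."""
    def __init__(self, num_adapters=2):
        super().__init__()
        self.backbone = RobertaModel.from_pretrained("roberta-base")
        # The original pre-trained parameters are kept fixed;
        # only the adapter parameters receive gradient updates.
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.adapters = nn.ModuleList(Adapter() for _ in range(num_adapters))

    def forward(self, input_ids, attention_mask=None):
        hidden = self.backbone(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Each adapter produces knowledge-specific features; here they are
        # simply concatenated with the backbone output for downstream use.
        features = [adapter(hidden) for adapter in self.adapters]
        return torch.cat([hidden] + features, dim=-1)
```

Because the backbone never changes, adapters for different kinds of knowledge can be trained separately and combined later without one overwriting what another has learned.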
