Publications

This page features a selection of my publications. For a comprehensive list, please refer to my Google Scholar profile, Semantic Scholar profile, the Computer Science Bibliography (dblp), or the ACL Anthology.

Secrets of RLHF in Large Language Models Part II: Reward Modeling

Published in CoRR abs/2401.06080, 2024

From a data perspective, we propose a method to measure the strength of preferences within the data, based on a voting mechanism of multiple reward models. From an algorithmic standpoint, we introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses, thereby improving model generalization.
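
A minimal sketch of the voting idea, assuming hypothetical reward-model callables rather than the authors' code: the mean reward gap across an ensemble serves as preference strength, and the spread across models as disagreement.

```python
# Score each (chosen, rejected) pair with an ensemble of reward models; the
# mean gap estimates preference strength, the std estimates disagreement.
import numpy as np

def preference_strength(reward_models, chosen, rejected):
    """reward_models: callables mapping a response string to a scalar reward."""
    gaps = np.array([rm(chosen) - rm(rejected) for rm in reward_models])
    return gaps.mean(), gaps.std()

# Toy usage with stand-in "reward models" (length-based scorers).
models = [lambda s, w=w: w * len(s) for w in (0.8, 1.0, 1.2)]
strength, disagreement = preference_strength(models, "a helpful answer", "meh")
print(f"strength={strength:.2f}, disagreement={disagreement:.2f}")
```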

Recommended citation: Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, Songyang Gao, Nuo Xu, Yuhao Zhou, Xiaoran Fan, Zhiheng Xi, Jun Zhao, Xiao Wang, Tao Ji, Hang Yan, Lixing Shen, Zhan Chen, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang: Secrets of RLHF in Large Language Models Part II: Reward Modeling. CoRR abs/2401.06080 (2024) http://xuanjing-huang.github.io/files/reward.pdf

MouSi: Poly-Visual-Expert Vision-Language Models

Published in CoRR abs/2401.17221, 2024

This paper proposes an ensemble-of-experts technique that synergizes the capabilities of individual visual encoders, including those skilled in image-text matching, OCR, and image segmentation.
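
A rough sketch of how such a fusion step might look; the module name, feature dimensions, and project-then-concatenate design below are illustrative assumptions, not the MouSi implementation.

```python
# Fuse features from several visual experts by projecting each into a shared
# width and concatenating along the token axis for the language model.
import torch
import torch.nn as nn

class PolyVisualFusion(nn.Module):
    def __init__(self, expert_dims, llm_dim):
        super().__init__()
        # One projection per expert so heterogeneous encoders share a space.
        self.projs = nn.ModuleList([nn.Linear(d, llm_dim) for d in expert_dims])

    def forward(self, expert_feats):
        # expert_feats: list of (batch, tokens_i, dim_i) tensors, one per expert.
        fused = [proj(feat) for proj, feat in zip(self.projs, expert_feats)]
        return torch.cat(fused, dim=1)  # (batch, sum(tokens_i), llm_dim)

feats = [torch.randn(2, 16, 512), torch.randn(2, 8, 768)]  # two toy experts
tokens = PolyVisualFusion([512, 768], llm_dim=1024)(feats)
print(tokens.shape)  # torch.Size([2, 24, 1024])
```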

Recommended citation: Xiaoran Fan, Tao Ji, Changhao Jiang, Shuo Li, Senjie Jin, Sirui Song, Junke Wang, Boyang Hong, Lu Chen, Guodong Zheng, Ming Zhang, Caishuang Huang, Rui Zheng, Zhiheng Xi, Yuhao Zhou, Shihan Dou, Junjie Ye, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang: MouSi: Poly-Visual-Expert Vision-Language Models. CoRR abs/2401.17221 (2024) http://xuanjing-huang.github.io/files/mousi.pdf

大规模语言模型:从理论到实践 (Large Language Models: From Theory to Practice)

Published by 电子工业出版社 (Publishing House of Electronics Industry), 2023

This book introduces the foundational theory of large language models, including language modeling, distributed model training, and reinforcement learning, and uses the Deepspeed-Chat framework as an example to walk through the practice of implementing large language models and ChatGPT-like systems.

Recommended citation: 张奇、桂韬、郑锐、黄萱菁:大规模语言模型:从理论到实践,电子工业出版社,2023 https://intro-llm.github.io/

The Rise and Potential of Large Language Model Based Agents: A Survey

Published in CoRR abs/2309.07864, 2023

In this paper, we perform a comprehensive survey on LLM-based agents.

Recommended citation: Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui: The Rise and Potential of Large Language Model Based Agents: A Survey. CoRR abs/2309.07864 (2023) http://xuanjing-huang.github.io/files/agent.pdf

自然语言处理导论 (Introduction to Natural Language Processing)

Published by 电子工业出版社 (Publishing House of Electronics Industry), 2023

With the widespread application of natural language processing and the rapid progress of machine learning algorithms, represented by deep learning, NLP algorithms and research tasks have also been developing rapidly in recent years. Since 2003, the authors have taught natural language processing courses for undergraduate, master's, and doctoral students at the School of Computer Science and Technology, Fudan University. This book summarizes and organizes those years of teaching and research, in the hope of giving readers a more systematic and comprehensive understanding of natural language processing.

Recommended citation: 张奇、桂韬、黄萱菁:自然语言处理导论,电子工业出版社,2023 https://intro-nlp.github.io/

Secrets of RLHF in Large Language Models Part I: PPO

Published in CoRR abs/2307.04964, 2023

We dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the components of the PPO algorithm impact policy agent training.
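
For reference, the clipped surrogate objective at the heart of PPO fits in a few lines; the sketch below is a textbook formulation with illustrative shapes and clip range, not the paper's training code.

```python
# PPO's clipped surrogate loss: limit how far the new policy's probability
# ratio can move from the old policy on each sample.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    ratio = torch.exp(logp_new - logp_old)            # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()      # maximize the surrogate

loss = ppo_clip_loss(torch.randn(4), torch.randn(4), torch.randn(4))
print(loss.item())
```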

Recommended citation: Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang: Secrets of RLHF in Large Language Models Part I: PPO. CoRR abs/2307.04964 (2023) http://xuanjing-huang.github.io/files/rlhf.pdf

A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck

Published in Proceedings of the 29th International Conference on Computational Linguistics, 2022

In this paper, we propose a multi-format transfer learning model with variational information bottleneck for event argument extraction (EAE) on new datasets.
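
The variational information bottleneck rests on a standard KL regularizer; the sketch below shows only that term under the usual Gaussian assumptions, not the paper's full model.

```python
# KL( N(mu, diag(exp(logvar))) || N(0, I) ), the compression term a VIB adds
# (scaled by a coefficient beta) to the task loss.
import torch

def vib_kl(mu, logvar):
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()

mu, logvar = torch.randn(8, 64), torch.zeros(8, 64)
print(vib_kl(mu, logvar).item())
```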

Recommended citation: Jie Zhou, Qi Zhang, Qin Chen, Liang He, Xuanjing Huang: A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck. COLING 2022: 1990-2000 http://xuanjing-huang.github.io/files/mft.pdf

K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters

Published in Findings of the Association for Computational Linguistics: ACL-IJCNLP, 2021

The paper proposes a framework that keeps the original parameters of the pre-trained model fixed and supports the development of versatile knowledge-infused models.
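
A minimal sketch of the general adapter pattern, with a stand-in frozen layer and hypothetical sizes; K-Adapter's actual modules differ in detail.

```python
# Residual bottleneck adapter: the backbone stays frozen, only the small
# down/up projections are trained.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

backbone = nn.Linear(768, 768)      # stand-in for a frozen pre-trained layer
for p in backbone.parameters():
    p.requires_grad = False         # original parameters stay fixed
adapter = Adapter(768)
print(adapter(backbone(torch.randn(2, 768))).shape)  # torch.Size([2, 768])
```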

Recommended citation: Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou: K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. ACL/IJCNLP (Findings) 2021: 1405-1418 http://xuanjing-huang.github.io/files/K-Adapter.pdf

Extractive Summarization as Text Matching

Published in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

This paper reframes neural extractive summarization as a semantic text matching problem: rather than scoring and extracting sentences individually, the system matches the source document against candidate summaries in a shared semantic space.
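
A toy sketch of the matching view, using cosine similarity over random stand-in embeddings rather than the paper's trained matching model.

```python
# Pick the candidate summary whose embedding lies closest to the document's.
import numpy as np

def best_candidate(doc_vec, cand_vecs):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    scores = [cos(doc_vec, c) for c in cand_vecs]
    return int(np.argmax(scores)), scores

doc = np.random.rand(8)
cands = [np.random.rand(8) for _ in range(3)]
idx, scores = best_candidate(doc, cands)
print(idx, [f"{s:.2f}" for s in scores])
```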

Recommended citation: Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang: Extractive Summarization as Text Matching. ACL 2020: 6197-6208 http://xuanjing-huang.github.io/files/ext.pdf

Simplify the Usage of Lexicon in Chinese NER

Published in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

In this work, we propose a simple but effective method for incorporating the word lexicon into the character representations.
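
A rough sketch of the idea with toy embeddings; the grouping of matched words by the character's role in each word follows the common B/M/E/S scheme, and the exact pooling here is an illustrative assumption rather than the paper's precise method.

```python
# Augment a character vector with pooled embeddings of the lexicon words that
# match at its position, grouped by the character's role (Begin/Middle/End/Single).
import numpy as np

def augment_char(char_vec, role_words, word_emb):
    pooled = []
    for role in ("B", "M", "E", "S"):
        words = role_words.get(role, [])
        if words:
            pooled.append(np.mean([word_emb[w] for w in words], axis=0))
        else:
            pooled.append(np.zeros_like(char_vec))
    return np.concatenate([char_vec] + pooled)  # char + 4 role features

word_emb = {"南京": np.ones(4), "南京市": np.full(4, 2.0)}
print(augment_char(np.zeros(4), {"B": ["南京", "南京市"]}, word_emb).shape)  # (20,)
```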

Recommended citation: Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei, Xuanjing Huang: Simplify the Usage of Lexicon in Chinese NER. ACL 2020: 5951-5960 http://xuanjing-huang.github.io/files/Simplify.pdf

FLAT: Chinese NER Using Flat-Lattice Transformer

Published in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

In this paper, we propose FLAT: Flat-LAttice Transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans.
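
A toy illustration of the flattening step, not the model itself: characters become length-1 spans, and lexicon-matched words keep their original head/tail positions, so the lattice becomes a flat token sequence.

```python
# Flatten a character lattice: every token (character or matched word) is a
# span with head/tail indices into the original sentence.
sentence = "南京市长江大桥"
lexicon_matches = [("南京", 0, 1), ("南京市", 0, 2), ("长江", 3, 4), ("大桥", 5, 6)]

spans = [(ch, i, i) for i, ch in enumerate(sentence)]   # characters
spans += lexicon_matches                                # matched words
for token, head, tail in spans:
    print(f"{token}\thead={head}\ttail={tail}")
```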

Recommended citation: Xiaonan Li, Hang Yan, Xipeng Qiu, Xuanjing Huang: FLAT: Chinese NER Using Flat-Lattice Transformer. ACL 2020: 6836-6842 http://xuanjing-huang.github.io/files/FLAT.pdf

A Lexicon-Based Graph Neural Network for Chinese NER

Published in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019

In this work, we introduce a lexicon-based graph neural network with global semantics for Chinese NER.

Recommended citation: Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, Xuanjing Huang: A Lexicon-Based Graph Neural Network for Chinese NER. EMNLP/IJCNLP (1) 2019: 1040-1050 http://xuanjing-huang.github.io/files/ALB.pdf

Adversarial Multi-Criteria Learning for Chinese Word Segmentation

Published in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017

In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria.

Recommended citation: Xinchi Chen, Zhan Shi, Xipeng Qiu, Xuanjing Huang: Adversarial Multi-Criteria Learning for Chinese Word Segmentation. ACL (1) 2017: 1193-1203 http://xuanjing-huang.github.io/files/cws.pdf

Adversarial Multi-task Learning for Text Classification

Published in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017

The paper proposes an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other.
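
One common way to realize the adversarial part is gradient reversal on a task discriminator; the sketch below assumes that formulation, which may differ in detail from the paper's training scheme.

```python
# A discriminator tries to identify the task from shared features; reversed
# gradients push the shared encoder toward task-invariant representations.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip the sign flowing back into the shared encoder

shared = torch.nn.Linear(16, 16)
discriminator = torch.nn.Linear(16, 3)   # predicts which of 3 tasks
feats = shared(torch.randn(4, 16))
task_logits = discriminator(GradReverse.apply(feats))
print(task_logits.shape)  # torch.Size([4, 3])
```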

Recommended citation: Pengfei Liu, Xipeng Qiu, Xuanjing Huang: Adversarial Multi-task Learning for Text Classification. ACL (1) 2017: 1-10 http://xuanjing-huang.github.io/files/AMT.pdf