Xuanjing Huang
Chinese Introduction (中文简介)
This is a page not in the main menu.
Published in Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016
We propose three different mechanisms of sharing information to model text with task-specific and shared layers.
Recommended citation: Pengfei Liu, Xipeng Qiu, Xuanjing Huang: Recurrent Neural Network for Text Classification with Multi-Task Learning. IJCAI 2016: 2873-2879 http://xuanjing-huang.github.io/files/RNN.pdf
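A minimal PyTorch sketch of the shared-private idea behind these mechanisms: one shared recurrent layer reused by every task, plus a per-task private layer and classifier head. This is an illustration under assumed shapes, not a faithful reproduction of the paper's three sharing schemes.

```python
import torch
import torch.nn as nn

class SharedPrivateRNN(nn.Module):
    """Sketch of shared-private multi-task text classification: every
    task reuses one shared LSTM and adds its own private LSTM."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_classes_per_task):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.shared_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.private_rnns = nn.ModuleList(
            nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            for _ in num_classes_per_task)
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden_dim, n) for n in num_classes_per_task)

    def forward(self, token_ids, task_id):
        x = self.embed(token_ids)                 # (batch, seq, emb)
        _, (h_shared, _) = self.shared_rnn(x)     # features shared by all tasks
        _, (h_private, _) = self.private_rnns[task_id](x)
        h = torch.cat([h_shared[-1], h_private[-1]], dim=-1)
        return self.heads[task_id](h)             # task-specific logits
```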
Published in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017
The paper proposes an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other.
Recommended citation: Pengfei Liu, Xipeng Qiu, Xuanjing Huang: Adversarial Multi-task Learning for Text Classification. ACL (1) 2017: 1-10 http://xuanjing-huang.github.io/files/AMT.pdf
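The adversarial part of such frameworks is often implemented with a gradient reversal layer feeding a task discriminator, so the shared encoder is pushed toward task-invariant features. The sketch below shows that generic mechanism, not the authors' exact training scheme:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the
    backward pass so the shared encoder learns features the task
    discriminator cannot exploit."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def adversarial_loss(shared_feats, task_labels, discriminator, lambd=1.0):
    """The discriminator tries to identify the source task from shared
    features; the reversed gradient pushes the encoder the other way."""
    reversed_feats = GradReverse.apply(shared_feats, lambd)
    logits = discriminator(reversed_feats)
    return nn.functional.cross_entropy(logits, task_labels)
```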
Published in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017
In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria.
Recommended citation: Xinchi Chen, Zhan Shi, Xipeng Qiu, Xuanjing Huang: Adversarial Multi-Criteria Learning for Chinese Word Segmentation. ACL (1) 2017: 1193-1203 http://xuanjing-huang.github.io/files/cws.pdf
Published in Proceedings of the 27th International Conference on Computational Linguistics, 2018
We propose a novel lexicon-based supervised attention model (LBSA) for generating sentiment-informative representations.
Recommended citation: Yicheng Zou, Tao Gui, Qi Zhang, Xuanjing Huang: A Lexicon-Based Supervised Attention Model for Neural Sentiment Analysis. COLING 2018: 868-877 http://xuanjing-huang.github.io/files/nsa.pdf
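As a rough illustration only (not the exact LBSA architecture), lexicon-supervised attention can be implemented as an auxiliary loss pulling the attention distribution toward weights derived from a sentiment lexicon; the shapes and the KL-based supervision term below are assumptions:

```python
import torch
import torch.nn.functional as F

def supervised_attention(token_reprs, query, lexicon_scores, mask):
    """token_reprs: (batch, seq, dim); query: (dim,); lexicon_scores:
    non-negative sentiment weights per token; mask: bool (batch, seq)."""
    att_logits = torch.einsum('bsd,d->bs', token_reprs, query)
    att_logits = att_logits.masked_fill(~mask, float('-inf'))
    attention = F.softmax(att_logits, dim=-1)
    # Normalize lexicon scores into a target distribution.
    target = lexicon_scores.masked_fill(~mask, 0.0)
    target = target / target.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    # KL-style supervision pulls attention toward lexicon-salient words.
    supervision = F.kl_div(attention.clamp(min=1e-8).log(), target,
                           reduction='batchmean')
    context = torch.einsum('bs,bsd->bd', attention, token_reprs)
    return context, supervision
```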
Published in The Eighteenth China National Conference on Computational Linguistics, 2019
In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on the text classification task.
Recommended citation: Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang: How to Fine-Tune BERT for Text Classification? CCL 2019: 194-206 http://xuanjing-huang.github.io/files/bert-ft.pdf
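For orientation, a minimal fine-tuning loop using the Hugging Face transformers library (not the authors' code); the small learning rate reflects one of the paper's concerns, avoiding catastrophic forgetting of the pre-trained weights:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained(
    'bert-base-uncased', num_labels=2)

# A small learning rate keeps updates close to the pre-trained weights.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(["a sample document", "another document"],
                  padding=True, truncation=True, max_length=512,
                  return_tensors='pt')
labels = torch.tensor([0, 1])

model.train()
outputs = model(**batch, labels=labels)   # loss is computed internally
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```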
Published in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
In this work, we introduce a lexicon-based graph neural network with global semantics for Chinese NER.
Recommended citation: Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, Xuanjing Huang: A Lexicon-Based Graph Neural Network for Chinese NER. EMNLP/IJCNLP (1) 2019: 1040-1050 http://xuanjing-huang.github.io/files/ALB.pdf
Published in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
In this paper, we propose FLAT: Flat-LAttice Transformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans.
Recommended citation: Xiaonan Li, Hang Yan, Xipeng Qiu, Xuanjing Huang: FLAT: Chinese NER Using Flat-Lattice Transformer. ACL 2020: 6836-6842 http://xuanjing-huang.github.io/files/FLAT.pdf
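The flattening step itself fits in a few lines: each character becomes a unit-length span and each lexicon match becomes a word span with (head, tail) positions, which the Transformer then encodes with relative position information. A brute-force sketch, with the example sentence taken from the paper:

```python
def flatten_lattice(chars, lexicon):
    """Flatten a character lattice into a list of (token, head, tail)
    spans: one unit-length span per character, plus one span per
    lexicon word matched in the sentence."""
    spans = [(ch, i, i) for i, ch in enumerate(chars)]
    for start in range(len(chars)):
        for end in range(start + 1, len(chars)):
            word = ''.join(chars[start:end + 1])
            if word in lexicon:                  # matched word span
                spans.append((word, start, end))
    return spans

# flatten_lattice(list("重庆人和药店"), {"重庆", "人和药店", "药店"})
# -> six character spans plus ("重庆", 0, 1), ("人和药店", 2, 5), ("药店", 4, 5)
```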
Published in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
In this work, we propose a simple but effective method for incorporating the word lexicon into the character representations.
Recommended citation: Ruotian Ma, Minlong Peng, Qi Zhang, Zhongyu Wei, Xuanjing Huang: Simplify the Usage of Lexicon in Chinese NER. ACL 2020: 5951-5960 http://xuanjing-huang.github.io/files/Simplify.pdf
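A simplified sketch of the method's core: for each character, pool the embeddings of lexicon words containing it, grouped by the character's role in the word (Begin/Middle/End/Single), and concatenate the result onto the character embedding. The paper weights words by frequency; the mean pooling and dictionary-based lookups here are simplifications:

```python
import torch

def soft_lexicon_features(chars, lexicon, char_embs, word_embs, dim):
    """chars: list of characters; lexicon: set of words; char_embs:
    (seq, char_dim) tensor; word_embs: dict word -> (dim,) tensor."""
    out = []
    for i in range(len(chars)):
        pools = {'B': [], 'M': [], 'E': [], 'S': []}
        for start in range(len(chars)):
            for end in range(start, len(chars)):
                word = ''.join(chars[start:end + 1])
                if word not in lexicon or not (start <= i <= end):
                    continue
                if start == end:
                    role = 'S'            # single-character word
                elif i == start:
                    role = 'B'            # character begins the word
                elif i == end:
                    role = 'E'            # character ends the word
                else:
                    role = 'M'            # character is inside the word
                pools[role].append(word_embs[word])
        pooled = [torch.stack(v).mean(0) if v else torch.zeros(dim)
                  for v in pools.values()]
        out.append(torch.cat([char_embs[i]] + pooled))
    return torch.stack(out)   # (seq_len, char_dim + 4 * dim)
```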
Published in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
This paper reframes the way we build neural extractive summarization systems, formulating extractive summarization as a semantic text matching problem in which a document and its candidate summaries are matched in a semantic space.
Recommended citation: Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang: Extractive Summarization as Text Matching. ACL 2020: 6197-6208 http://xuanjing-huang.github.io/files/ext.pdf
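Schematically, the document and candidate summaries are embedded in one semantic space and the best-matching candidate is selected, with a margin ranking loss at training time; this sketch omits the paper's BERT-based siamese encoder:

```python
import torch
import torch.nn.functional as F

def best_candidate(doc_emb, candidate_embs):
    """doc_emb: (dim,); candidate_embs: (n_candidates, dim). Returns
    the index of the candidate closest to the document embedding."""
    scores = F.cosine_similarity(candidate_embs, doc_emb.unsqueeze(0), dim=-1)
    return scores.argmax().item(), scores

def margin_ranking_loss(score_good, score_bad, margin=0.01):
    """Training pushes better candidates at least `margin` above
    worse ones in the matching score."""
    return F.relu(margin - (score_good - score_bad)).mean()
```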
Published in SCIENCE CHINA Technological Sciences (SCTS), 2020
In this survey, we provide a comprehensive review of PTMs for NLP.
Recommended citation: Xipeng Qiu, TianXiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang, Pre-trained Models for Natural Language Processing: A Survey, SCIENCE CHINA Technological Sciences (SCTS) , 2020, Vol. 63(10), pp. 1872–1897 http://xuanjing-huang.github.io/files/PTM.pdf
Published in Findings of the Association for Computational Linguistics: ACL-IJCNLP, 2021
The paper proposes a framework that keeps the original parameters of the pre-trained model fixed and supports the development of versatile knowledge-infused models.
Recommended citation: Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou: K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. ACL/IJCNLP (Findings) 2021: 1405-1418 http://xuanjing-huang.github.io/files/K-Adapter.pdf
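The core building block is a small bottleneck module trained while the pre-trained backbone stays frozen; below is a generic adapter of this kind, not K-Adapter's exact placement or dimensions:

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: a small trainable module attached to a frozen
    pre-trained model, so each kind of knowledge gets its own adapter
    and the original weights are never updated."""
    def __init__(self, hidden_dim, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # Residual connection keeps the frozen model's features intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Freeze the backbone and train only the adapters:
# for p in pretrained_model.parameters():
#     p.requires_grad = False
```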
Published in Proceedings of the 29th International Conference on Computational Linguistics, 2022
In this paper, we propose a multi-format transfer learning model with variational information bottleneck for EAE in new datasets.
Recommended citation: Jie Zhou, Qi Zhang, Qin Chen, Liang He, Xuanjing Huang: A Multi-Format Transfer Learning Model for Event Argument Extraction via Variational Information Bottleneck. COLING 2022: 1990-2000 http://xuanjing-huang.github.io/files/mft.pdf
Published in CoRR abs/2307.04964, 2023
We dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the components of the PPO algorithm impact policy agent training.
Recommended citation: Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu, Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang: Secrets of RLHF in Large Language Models Part I: PPO. CoRR abs/2307.04964 (2023) http://xuanjing-huang.github.io/files/rlhf.pdf
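At the center of that analysis sits PPO's clipped surrogate objective, which in code is only a few lines (a generic sketch, not the paper's full training loop):

```python
import torch

def ppo_clip_loss(logprobs, old_logprobs, advantages, clip_eps=0.2):
    """PPO's clipped surrogate objective: the probability ratio is
    clipped so a single update cannot move the policy too far from
    the one that collected the data."""
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```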
Published in Publishing House of Electronics Industry (电子工业出版社), 2023
With the broad application of natural language processing and the rapid progress of machine learning algorithms, deep learning in particular, NLP algorithms and research tasks have been developing rapidly in recent years. Since 2003, the authors have taught natural language processing courses to undergraduate, master's, and doctoral students at the School of Computer Science, Fudan University. This book organizes and distills those years of teaching and research, aiming to give readers a more systematic and comprehensive understanding of natural language processing.
Recommended citation: Qi Zhang, Tao Gui, Xuanjing Huang: Introduction to Natural Language Processing (自然语言处理导论), Publishing House of Electronics Industry, 2023 https://intro-nlp.github.io/
Published in CoRR abs/2309.07864, 2023
In this paper, we perform a comprehensive survey on LLM-based agents.
Recommended citation: Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui: The Rise and Potential of Large Language Model Based Agents: A Survey. CoRR abs/2309.07864 (2023) http://xuanjing-huang.github.io/files/agent.pdf
Published in Publishing House of Electronics Industry (电子工业出版社), 2023
This book introduces the theoretical foundations of large language models, including language modeling, distributed model training, and reinforcement learning, and uses the DeepSpeed-Chat framework as a running example to show how to build large language models and ChatGPT-like systems in practice.
Recommended citation: Qi Zhang, Tao Gui, Rui Zheng, Xuanjing Huang: Large Language Models: From Theory to Practice (大规模语言模型:从理论到实践), Publishing House of Electronics Industry, 2023 https://intro-llm.github.io/
Published in CoRR abs/2401.17221, 2024
This paper proposes an ensemble-of-experts technique that synergizes the capabilities of individual visual encoders, including those skilled in image-text matching, OCR, and image segmentation.
Recommended citation: Xiaoran Fan, Tao Ji, Changhao Jiang, Shuo Li, Senjie Jin, Sirui Song, Junke Wang, Boyang Hong, Lu Chen, Guodong Zheng, Ming Zhang, Caishuang Huang, Rui Zheng, Zhiheng Xi, Yuhao Zhou, Shihan Dou, Junjie Ye, Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang: MouSi: Poly-Visual-Expert Vision-Language Models. CoRR abs/2401.17221 (2024) http://xuanjing-huang.github.io/files/mousi.pdf
Published in CoRR abs/2401.06080, 2024
From a data perspective, we propose a method to measure the strength of preferences within the data, based on a voting mechanism of multiple reward models. From an algorithmic standpoint, we introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses, thereby improving model generalization.
Recommended citation: Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, Songyang Gao, Nuo Xu, Yuhao Zhou, Xiaoran Fan, Zhiheng Xi, Jun Zhao, Xiao Wang, Tao Ji, Hang Yan, Lixing Shen, Zhan Chen, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang: Secrets of RLHF in Large Language Models Part II: Reward Modeling. CoRR abs/2401.06080 (2024) http://xuanjing-huang.github.io/files/reward.pdf
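The underlying objective is the standard pairwise (Bradley-Terry) reward-modeling loss; the optional margin below is a hypothetical hook for the kind of preference-strength weighting the paper studies:

```python
import torch.nn.functional as F

def reward_model_loss(r_chosen, r_rejected, margin=None):
    """Pairwise reward-modeling loss: the reward of the chosen response
    should exceed that of the rejected one. `margin` is a hypothetical
    per-pair offset (e.g. from a preference-strength estimate) that
    makes confident preferences count more."""
    diff = r_chosen - r_rejected
    if margin is not None:
        diff = diff - margin
    return -F.logsigmoid(diff).mean()
```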
Published:
Social media is the umbrella term for the various media through which people communicate in online society; it carries significant commercial and social value and is an important channel for disseminating information and maintaining social relationships. Over the past few years, the natural language processing group at Fudan University has carried out a range of intelligent mining research on social media, building a pipeline of understanding, discovery, and prediction: understanding the non-standard text content on social media, discovering valuable information in it, and predicting user behavior on it. This talk focuses on methods for predicting user behavior on social media, including microblog hashtag recommendation, @-user (company) recommendation, retweet prediction, prediction of user topic participation, expert recommendation, and the incorporation of multimodal information into social media mining.
Published:
Information extraction comprises two main tasks, named entity recognition and relation extraction, and aims to automatically extract key information from massive amounts of unstructured text, thereby effectively supporting downstream applications such as knowledge graph construction and question answering. In the deep learning era, since neural networks, and pre-trained models in particular, can extract high-level semantic features automatically, attention has shifted to designing pre-training tasks that embed more complete semantic knowledge and to using such models efficiently. However, the features that deep learning models extract automatically are prone to shortcut learning, which causes robustness deficiencies in real-world applications and poses hidden risks for downstream uses of information extraction, especially in low-resource settings. This talk analyzes the robustness of information extraction in depth, explores the underlying causes of poor model robustness, and presents our work on improving the robustness of information extraction models in weakly labeled, few-shot, unlabeled, and cross-domain scenarios.
Published:
Recent years have witnessed the great success of large-scale pre-trained language models. However, running the entire model for every sample can be computationally uneconomical. Dynamic networks, which adapt their structures or parameters to the input during inference, are therefore attracting a lot of attention in the NLP community. In contrast to static language models, dynamic ones enjoy favorable properties such as efficiency, adaptiveness, and accuracy. In this talk, I will review recent advances in dynamic networks in NLP and discuss the prospects and challenges of applying dynamic structures to pre-trained language models.
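As a concrete example of such a dynamic mechanism, here is a minimal sketch of early exiting, where an internal classifier after each layer decides whether the sample can stop early (batch size 1 and all names are assumptions):

```python
import torch.nn.functional as F

def early_exit_forward(layers, classifiers, x, threshold=0.9):
    """Input-adaptive inference: after each layer an internal classifier
    predicts; if its confidence clears the threshold, the sample exits
    and the remaining layers are skipped."""
    prediction = None
    for layer, clf in zip(layers, classifiers):
        x = layer(x)
        probs = F.softmax(clf(x), dim=-1)
        confidence, prediction = probs.max(dim=-1)
        if confidence.item() >= threshold:   # batch size 1 assumed
            break
    return prediction, x
```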
Published:
Interpretability in machine learning and deep learning refers to the degree to which a model's predictions can be explained to its audience in an understandable, straightforward way. In recent years, deep learning has been applied successfully in natural language processing and has substantially improved performance across many tasks, but because of its inherent complexity, its understandability and interpretability remain unsatisfactory, which hinders the further adoption of deep learning methods. This talk first introduces what interpretability analysis is, which interpretability analysis tasks exist in natural language processing, and what such analysis aims to achieve; it then surveys the state of interpretability analysis in NLP from three angles: understanding the functional roles of model components, explaining the model's prediction behavior, and model diagnosis; it closes with a discussion of future research trends.
Published:
Natural language usually means human language: it is the carrier of logical thought, a means of communication, and a vehicle for passing on civilization. Processing natural language is an important research topic in artificial intelligence and has been called the jewel in the crown of AI. An indispensable foundation of natural language processing is language representation learning, whose goal is to construct a formal or mathematical description of natural language so that it can be represented in a computer and processed automatically by programs. Early methods mainly used symbolic, discrete representations. In recent years, deep neural networks have been widely applied in natural language processing; they not only surpass traditional statistical methods on many tasks such as text classification, sequence labeling, machine translation, and question answering, but can also be trained end to end, avoiding laborious feature engineering. The first part of this talk covers the basic tasks, application areas, research history, and technical trends of natural language processing; the second part presents neural language representation learning at the granularity of words, phrases, sentences, and sentence pairs, explains how the latent syntactic and semantic features of language can be stored distributively in a set of neurons as dense, low-dimensional, continuous vectors, and discusses recent research trends in neural language representation learning at the levels of models and learning.
Undergraduate course, Fudan University, School of Computer Science, 2022
Undergraduate course, Fudan University, School of Computer Science, 2022