
Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks

  • 3 min read
  • 30 Jul 2019

Today, Baidu released ERNIE 2.0, a continual pre-training framework for natural language processing. ERNIE stands for Enhanced Representation through kNowledge IntEgration. Baidu claims in its research paper that ERNIE 2.0 outperforms BERT and the recent XLNet on 16 NLP tasks in Chinese and English. Additionally, Baidu has open sourced the ERNIE 2.0 model.

In March, Baidu announced the release of ERNIE 1.0, its pre-trained model built on PaddlePaddle, Baidu’s open-source deep learning platform. According to Baidu, ERNIE 1.0 outperformed BERT on all Chinese language understanding tasks.

The pre-training procedures of models such as BERT, XLNet, and ERNIE 1.0 are mainly based on a few simple tasks that model the co-occurrence of words or sentences, the paper highlights. For example, BERT constructed a bidirectional language model task and a next-sentence prediction task to capture the co-occurrence information of words and sentences, while XLNet constructed a permutation language model task to capture the co-occurrence information of words.
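
To illustrate what such a co-occurrence task looks like in practice, here is a minimal sketch of how next-sentence prediction training pairs can be constructed from a corpus. The toy corpus and the sampling logic are simplified assumptions for illustration, not BERT's actual pre-processing code.

```python
import random

# Toy corpus standing in for real pre-training text (assumption for illustration).
corpus = [
    "ERNIE 2.0 is a continual pre-training framework.",
    "It learns new tasks through multi-task learning.",
    "Results are reported on GLUE and on Chinese datasets.",
]

def nsp_example(corpus, i):
    """Build one next-sentence-prediction pair: label 1 if sentence_b truly follows sentence_a."""
    sentence_a = corpus[i]
    if random.random() < 0.5 and i + 1 < len(corpus):
        return sentence_a, corpus[i + 1], 1          # positive pair: actual next sentence
    return sentence_a, random.choice(corpus), 0      # negative pair: randomly sampled sentence

print(nsp_example(corpus, 0))
```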

But besides co-occurrence information, training corpora contain much richer lexical, syntactic, and semantic information. For example, named entities such as person names, place names, and organization names carry concept information; sentence order and sentence proximity can enable models to learn structure-aware representations; and semantic similarity at the document level, or discourse relations among sentences, can enable models to learn semantic-aware representations. So is it possible to further improve performance if the model were trained on more kinds of tasks continually?


Source: ERNIE 2.0 research paper


Based on this idea, Baidu has proposed a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through multi-task learning. According to Baidu, different customized tasks can be introduced into this framework incrementally at any time, and these tasks are trained together through multi-task learning, which enables the encoding of lexical, syntactic, and semantic information across tasks. Whenever a new task arrives, the framework incrementally trains the distributed representations without forgetting the previously trained parameters.
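
This pattern can be pictured as a shared encoder with lightweight task-specific heads that are added over time, where each training step samples among all tasks introduced so far, so earlier parameters keep being updated rather than discarded. The following is a minimal sketch of that idea in PyTorch; the model sizes, task names, and toy data are assumptions for illustration only, not Baidu's ERNIE 2.0 implementation.

```python
import random
import torch
import torch.nn as nn

class ContinualMultiTaskModel(nn.Module):
    def __init__(self, hidden_size=256, vocab_size=30000):
        super().__init__()
        # Shared text encoder (stand-in for ERNIE's transformer encoder).
        self.embed = nn.Embedding(vocab_size, hidden_size)
        encoder_layer = nn.TransformerEncoderLayer(hidden_size, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # One lightweight head per pre-training task, added incrementally.
        self.heads = nn.ModuleDict()

    def add_task(self, name, num_labels):
        """Register a new pre-training task (e.g. sentence-order prediction)."""
        self.heads[name] = nn.Linear(self.embed.embedding_dim, num_labels)

    def forward(self, token_ids, task):
        hidden = self.encoder(self.embed(token_ids))   # [batch, seq, hidden]
        pooled = hidden.mean(dim=1)                    # simple pooling over the sequence
        return self.heads[task](pooled)                # task-specific logits

model = ContinualMultiTaskModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def toy_batch(num_labels, batch=8, seq=16, vocab=30000):
    # Random stand-in data; a real setup would build batches from a corpus.
    return torch.randint(0, vocab, (batch, seq)), torch.randint(0, num_labels, (batch,))

# Tasks arrive in stages; at each stage we keep training on all tasks seen so far.
stages = [("word_cooccurrence", 2), ("sentence_order", 2), ("semantic_similarity", 3)]
seen = []
for name, num_labels in stages:
    model.add_task(name, num_labels)
    optimizer.add_param_group({"params": model.heads[name].parameters()})
    seen.append((name, num_labels))
    for _ in range(50):                        # a few multi-task steps per stage
        task, n = random.choice(seen)          # sample among all tasks introduced so far
        x, y = toy_batch(n)
        loss = loss_fn(model(x, task), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```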


The Structure of Released ERNIE 2.0 Model

Source: ERNIE 2.0 research paper



ERNIE 2.0 is a continual pre-training framework that provides a feasible scheme for developers to build their own NLP models. The fine-tuning source code of ERNIE 2.0 and the pre-trained English-version models can be downloaded from the project's GitHub page.
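
As a rough illustration of what building on such a pre-trained model could look like, the sketch below reuses the toy ContinualMultiTaskModel from the earlier snippet and attaches a new downstream head. The checkpoint path and the sentiment task are hypothetical, and this is not the repository's actual fine-tuning script.

```python
from pathlib import Path

import torch
import torch.nn as nn

model = ContinualMultiTaskModel()            # toy model class from the earlier sketch
ckpt = Path("ernie2_pretrained.pt")          # hypothetical checkpoint path
if ckpt.exists():
    # Load pre-trained encoder weights if a checkpoint is available (illustrative only).
    model.load_state_dict(torch.load(ckpt), strict=False)

model.add_task("sentiment", num_labels=2)    # new downstream classification head

optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch of token ids and labels for one fine-tuning step.
token_ids = torch.randint(0, 30000, (8, 16))
labels = torch.randint(0, 2, (8,))

loss = loss_fn(model(token_ids, "sentiment"), labels)
loss.backward()
optimizer.step()
```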

The team at Baidu compared the performance of the ERNIE 2.0 model with existing pre-training models on the English GLUE benchmark and on 9 popular Chinese datasets separately. The results show that the ERNIE 2.0 model outperforms BERT and XLNet on 7 GLUE language understanding tasks and outperforms BERT on all 9 Chinese NLP tasks, such as DuReader machine reading comprehension, sentiment analysis, and question answering.

Specifically, according to the experimental results on the GLUE datasets, the ERNIE 2.0 model almost comprehensively outperforms BERT and XLNet on the English tasks, for both the base and large model sizes. Furthermore, the research paper shows that the ERNIE 2.0 large model achieves the best performance and sets new state-of-the-art results on the Chinese NLP tasks.


Source: ERNIE 2.0 research paper


To learn more about ERNIE 2.0, read the research paper and check out the official blog post on Baidu's website.
