
Deep Learning Indaba presents the state of Natural Language Processing in 2018

  • 5 min read
  • 12 Dec 2018

The ‘Strengthening African Machine Learning’ conference, organized by the Deep Learning Indaba at Stellenbosch, South Africa, is currently underway. This six-day conference celebrates and strengthens machine learning in Africa through state-of-the-art teaching, networking, policy debate, and support programmes. Yesterday, three conference organizers, Sebastian Ruder, Herman Kamper, and Stephan Gouws, asked tech experts their views on the state of Natural Language Processing, posing these four questions:
  1. What do you think are the three biggest open problems in Natural Language Processing at the moment?
  2. What would you say is the most influential work in Natural Language Processing in the last decade, if you had to pick just one?
  3. What, if anything, has led the field in the wrong direction?
  4. What advice would you give a postgraduate student in Natural Language Processing starting their project now?


The tech experts interviewed included the likes of Yoshua Bengio, Hal Daumé III, Barbara Plank, Miguel Ballesteros, Anders Søgaard, Lea Frermann, Michael Roth, Annie Louis, Chris Dyer, Felix Hill, Kevin Knight, and more.

https://twitter.com/seb_ruder/status/1072431709243744256

Biggest open problems in Natural Language Processing at the moment


Although each expert discussed a variety of open issues in Natural Language Processing, the following common themes recurred.


No ‘real’ natural language understanding

Many experts argued that natural language understanding is central to the field and also important for natural language generation. They agreed that most current Natural Language Processing models have no “real” understanding of language. Open questions include how to build models that incorporate common sense, and what biases and structure should be built into these models explicitly. Dialogue systems and chatbots were mentioned in several responses.

Maletšabisa Molapo, a Research Scientist at IBM Research and one of the experts, answered, “Perhaps this may be achieved by general NLP models, as per the recent announcement from Salesforce Research that there is a need for NLP architectures that can perform well across different NLP tasks (machine translation, summarization, question answering, text classification, etc.).”
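The Salesforce Research work referenced here is presumably decaNLP, which casts many NLP tasks as question answering over a context so that a single model can handle them all. Below is a minimal, illustrative sketch of that framing only; the task names and question templates are hypothetical stand-ins, not decaNLP’s actual format.

```python
# Minimal sketch of the decaNLP-style framing: cast heterogeneous NLP tasks
# as question answering over a context, so a single model can serve them all.
# Task names and question templates here are illustrative, not decaNLP's format.

def to_qa_example(task: str, text: str) -> dict:
    """Map a task-specific input to a unified (question, context) pair."""
    templates = {
        "translation": "What is the translation from English to German?",
        "summarization": "What is the summary?",
        "sentiment": "Is this review positive or negative?",
    }
    if task not in templates:
        raise ValueError(f"unknown task: {task}")
    return {"question": templates[task], "context": text}

examples = [
    to_qa_example("sentiment", "The film was a delight from start to finish."),
    to_qa_example("summarization", "Long article text goes here ..."),
]
# A single seq2seq model trained on such (question, context) -> answer pairs
# handles every task with one architecture.
print(examples[0])
```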


NLP for low-resource scenarios

Another open problem is using NLP in low-resource scenarios. This includes generalization beyond the training data, learning from small amounts of data, and techniques such as domain transfer, transfer learning, and multi-task learning. It also spans the full range of supervision regimes: semi-supervised, weakly-supervised, “Wiki-ly” supervised, distantly-supervised, lightly-supervised, minimally-supervised, and unsupervised learning. A minimal sketch of the transfer-learning recipe appears after the quote below.

Per Karen Livescu, Associate Professor at the Toyota Technological Institute at Chicago, “Dealing with low-data settings (low-resource languages, dialects (including social media text "dialects"), domains, etc.). This is not a completely "open" problem in that there are already a lot of promising ideas out there; but we still don't have a universal solution to this universal problem.”
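As a concrete illustration of the transfer-learning recipe mentioned above, here is a minimal PyTorch sketch: reuse an encoder pretrained on a high-resource task and fine-tune only a small new head on the low-resource target task. All dimensions, file names, and layer choices are placeholder assumptions, not a prescribed setup.

```python
import torch
import torch.nn as nn

# Sketch: reuse a pretrained encoder; train only a small task-specific head.
class Encoder(nn.Module):
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, tokens):              # tokens: (batch, seq_len)
        _, (h, _) = self.rnn(self.embed(tokens))
        return h[-1]                        # (batch, dim) sentence vector

encoder = Encoder()
# encoder.load_state_dict(torch.load("pretrained.pt"))  # hypothetical weights

head = nn.Linear(128, 3)                    # new classifier for the target task

# Freeze the pretrained encoder: with few target examples, training only the
# head is far less likely to overfit than updating every parameter.
for p in encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
tokens = torch.randint(0, 10000, (4, 20))   # toy low-resource batch
labels = torch.randint(0, 3, (4,))

loss = nn.CrossEntropyLoss()(head(encoder(tokens)), labels)
loss.backward()
optimizer.step()
```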


Reasoning about large or multiple contexts

Experts believed that NLP struggles to reason over large contexts, whether long text documents or spoken documents, in part because current models do not incorporate common sense. According to Isabelle Augenstein, tenure-track assistant professor at the University of Copenhagen, “Our current models are mostly based on recurrent neural networks, which cannot represent longer contexts well. One recent encouraging work in this direction I like is the NarrativeQA dataset for answering questions about books. The stream of work on graph-inspired RNNs is potentially promising, though has only seen modest improvements and has not been widely adopted due to them being much less straightforward to train than a vanilla RNN.”
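One common workaround for the RNN context limit Augenstein describes is hierarchical encoding: a word-level LSTM summarizes each chunk of a long document, and a chunk-level LSTM reads those summaries, so no single recurrence has to span thousands of steps. The sketch below is only illustrative, with arbitrary sizes, and is not drawn from any of the works she cites.

```python
import torch
import torch.nn as nn

# Hierarchical encoding sketch: word-level LSTM per chunk, then a
# chunk-level LSTM over the chunk summaries.
dim = 64
embed = nn.Embedding(5000, dim)
word_lstm = nn.LSTM(dim, dim, batch_first=True)
chunk_lstm = nn.LSTM(dim, dim, batch_first=True)

document = torch.randint(0, 5000, (1, 2000))   # one 2000-token document
chunks = document.view(1, 20, 100)             # 20 chunks of 100 tokens each

chunk_vectors = []
for i in range(chunks.size(1)):
    _, (h, _) = word_lstm(embed(chunks[:, i, :]))
    chunk_vectors.append(h[-1])                # (1, dim) summary per chunk

_, (h, _) = chunk_lstm(torch.stack(chunk_vectors, dim=1))
doc_vector = h[-1]                             # (1, dim) whole-document vector
```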

Defining problems, building diverse datasets and evaluation procedures

“Perhaps the biggest problem is to properly define the problems themselves. And by properly defining a problem, I mean building datasets and evaluation procedures that are appropriate to measure our progress towards concrete goals. Things would be easier if we could reduce everything to Kaggle-style competitions!” — Mikel Artetxe

Experts believe that current NLP datasets themselves need to be re-examined. A new generation of evaluation datasets and tasks is required, one that shows whether NLP techniques generalize across the true variability of human language. More diverse datasets are also needed. “Datasets and models for deep learning innovation for African languages are needed for many NLP tasks beyond just translation to and from English,” said Molapo.
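One simple evaluation protocol that probes this kind of generalization is to hold out an entire domain rather than a random slice of the data. The sketch below, using toy stand-in texts, shows the idea with scikit-learn; it illustrates the protocol only, not any benchmark from the article.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hold out a whole domain so the score reflects generalization across
# language variability rather than memorization of the training domain.
train_texts = ["great movie", "dull film", "loved the acting"]      # reviews
train_labels = [1, 0, 1]
test_texts = ["this phone is excellent", "battery life is awful"]   # products
test_labels = [1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)
print("out-of-domain accuracy:", model.score(test_texts, test_labels))
```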

Advice to a postgraduate student in NLP starting their project


Do not limit yourself to reading NLP papers. Read a lot of machine learning, deep learning, reinforcement learning papers. A PhD is a great time in one’s life to go for a big goal, and even small steps towards that will be valued. — Yoshua Bengio

Learn how to tune your models, learn how to make strong baselines, and learn how to build baselines that test particular hypotheses. Don’t take any single paper too seriously, wait for its conclusions to show up more than once. — George Dahl
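In the spirit of Dahl’s advice, here is one hedged sketch of what a strong, tuned baseline can look like for text classification: TF-IDF features with logistic regression, with the regularization strength tuned rather than left at its default. The data is a toy stand-in; in practice you would use the real task’s training set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

# A tuned TF-IDF + logistic regression pipeline: a classic strong baseline.
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())

# Tune rather than accept defaults: an untuned baseline makes any proposed
# model look better than it really is.
search = GridSearchCV(pipeline,
                      {"logisticregression__C": [0.1, 1.0, 10.0]},
                      cv=2)
search.fit(texts, labels)
print(search.best_params_, search.best_score_)
```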

I believe scientific pursuit is meant to be full of failures. If every idea works out, it’s either because you’re not ambitious enough, you’re subconsciously cheating yourself, or you’re a genius, the last of which I heard happens only once every century or so. So, don’t despair! — Kyunghyun Cho

Understand psychology and the core problems of semantic cognition. Understand machine learning. Go to NeurIPS. Don’t worry about ACL. Submit something terrible (or even good, if possible) to a workshop as soon as you can. You can’t learn how to do these things without going through the process. — Felix Hill

For more insights, make sure to go through the complete list of expert responses.
