Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?

  • 4 min read
  • 06 Dec 2018


Facebook’s artificial intelligence research group, FAIR, just turned five. In a blog post published yesterday, Facebook executives discussed the accomplishments FAIR has made over the last five years and where it might be heading in the future.

The team was formed with the aim of advancing the state of the art in AI through open research. FAIR has grown since its inception and now has labs in the USA and Europe. The team has worked extensively with the open-source community, and several of its papers have received awards.

A significant part of FAIR's research centers on the keys to intelligence: reasoning, prediction, planning, and unsupervised learning. These areas of investigation, in turn, require a better theoretical understanding of the many fields related to artificial intelligence. FAIR believes that long-term research explorations are necessary to unlock the full potential of artificial intelligence.

Important milestones achieved by the FAIR team

Memory networks


FAIR developed memory networks, a new class of machine learning models that overcome a long-standing limitation of neural networks: the lack of long-term memory. These models can draw on previous interactions to answer general knowledge questions while keeping earlier statements of a conversation in context.
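
To make the idea concrete, here is a minimal, single-hop sketch in PyTorch. The class name, embedding sizes, and tensor shapes are illustrative assumptions for a toy setup, not FAIR's actual architecture: the model embeds past statements into a memory, attends over it with the question, and combines the retrieved memory with the question to produce an answer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMemoryNetwork(nn.Module):
    """Illustrative single-hop memory network: attend over stored
    sentences with the question, then classify an answer word."""
    def __init__(self, vocab_size, embed_dim=32):
        super().__init__()
        self.embed_in = nn.Embedding(vocab_size, embed_dim)   # memory keys
        self.embed_out = nn.Embedding(vocab_size, embed_dim)  # memory values
        self.embed_q = nn.Embedding(vocab_size, embed_dim)    # question
        self.answer = nn.Linear(embed_dim, vocab_size)

    def forward(self, story, question):
        # story: (num_sentences, sentence_len); question: (question_len,)
        m = self.embed_in(story).sum(dim=1)    # (num_sentences, embed_dim)
        c = self.embed_out(story).sum(dim=1)   # (num_sentences, embed_dim)
        q = self.embed_q(question).sum(dim=0)  # (embed_dim,)
        attn = F.softmax(m @ q, dim=0)         # match question against memory
        o = attn @ c                           # weighted sum of memory values
        return self.answer(o + q)              # logits over answer vocabulary
```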

Self-supervised learning and generative models


In 2014, FAIR took a keen interest in Generative Adversarial Networks (GANs), a new unsupervised learning method proposed by researchers at MILA, Université de Montréal. From 2015 onwards, FAIR published a series of papers showcasing the practicality of GANs. FAIR researchers and Facebook engineers have since used adversarial training methods for a variety of applications, including long-term video prediction and generating graphic designs and fashion items.
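
The core of adversarial training is a two-player loop: a generator tries to produce convincing samples while a discriminator tries to tell them from real data. A minimal sketch on 1-D toy data, with arbitrary placeholder layer sizes rather than any specific FAIR model:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0        # stand-in "real" samples
    fake = G(torch.randn(64, 8))           # generated samples

    # Discriminator step: label real data 1, generated data 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```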

Scalable text classification


In 2016, FAIR built fastText, a framework for rapid text classification and learning word representations. In a 2017 paper, FAIR proposed a model that assigns vectors to “subword units” (sequences of three or four characters) rather than to whole words, allowing the system to create representations for words that were not present in the training data. As a result, the model can scale classification to vocabularies of billions of words and still handle words it has never seen. fastText is now available in 157 languages.
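
A pure-Python sketch of the subword idea, assuming a toy hashing scheme and a bucket count far smaller than fastText's real one: a word's vector is the sum of its hashed character n-gram vectors, so even an unseen or misspelled word gets a representation.

```python
import numpy as np

EMBED_DIM, BUCKETS = 16, 100_000
rng = np.random.default_rng(0)
subword_table = rng.normal(size=(BUCKETS, EMBED_DIM))  # hashed n-gram vectors

def subword_ngrams(word, n_min=3, n_max=4):
    """Character n-grams with boundary markers, as in the fastText paper."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word):
    """Sum the hashed subword vectors; works for out-of-vocabulary words."""
    return sum(subword_table[hash(g) % BUCKETS] for g in subword_ngrams(word))

print(word_vector("misspellled").shape)  # (16,), even for an unseen word
```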


Translation research


FAIR developed a CNN-based neural machine translation architecture and published a paper on it in 2017. These ‘multi-hop’ CNNs are easier to train on limited data sets and better at understanding misspelled or abbreviated words. They are designed to mimic the way humans translate sentences: by taking multiple glimpses at the sentence they are trying to translate. The result was a ninefold speed increase over RNN-based systems while maintaining accuracy.
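
A rough sketch of the “multiple glimpses” idea, using toy PyTorch tensors rather than the actual convolutional model: each hop attends over the encoded source sentence again and folds the retrieved context back into the decoder state, so the translation is built from several passes over the input.

```python
import torch
import torch.nn.functional as F

src_len, tgt_len, dim, hops = 12, 9, 64, 3
source = torch.randn(src_len, dim)  # encoded source sentence (toy values)
state = torch.randn(tgt_len, dim)   # decoder states being refined

for hop in range(hops):
    scores = state @ source.T                # (tgt_len, src_len)
    attn = F.softmax(scores, dim=-1)         # one glimpse at the source
    context = attn @ source                  # retrieved source context
    state = state + context                  # refine the state, then look again
```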

AI tools


In 2015, FAIR open-sourced Torch deep learning modules to speed up the training of larger neural nets. Torchnet followed in 2016, to help build effective and reusable learning systems. FAIR then released Caffe2, a modular deep learning framework for mobile computing, and later collaborated with Microsoft and Amazon to launch ONNX, a common representation for neural networks that makes it simple to move models between frameworks.
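
As an example of moving between frameworks, a model built in PyTorch can be exported to the ONNX format and then loaded by any ONNX-compatible runtime. A minimal sketch; the model, tensor names, and file name are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
dummy_input = torch.randn(1, 4)  # example input that defines the graph shape

# Trace the model and write a framework-neutral ONNX graph to disk;
# it can then be executed by ONNX Runtime, Caffe2, or other backends.
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["features"], output_names=["logits"])
```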

A new benchmark for computer vision


In 2017, FAIR researchers won the Best Paper award at the International Conference on Computer Vision for Mask R-CNN, which combines object detection with semantic segmentation. The paper stated: “Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners.”
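
Mask R-CNN is now straightforward to try through torchvision's pretrained implementation. A minimal sketch, assuming torchvision 0.13 or newer; the image path is a placeholder:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Mask R-CNN with COCO weights from torchvision.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("street.jpg").convert("RGB"))  # placeholder path
with torch.no_grad():
    outputs = model([image])[0]

# Each detection carries a box, a class label, a score, and a per-pixel mask:
# object detection and segmentation from a single model.
print(outputs["boxes"].shape, outputs["labels"].shape, outputs["masks"].shape)
```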

Faster training and bigger data sets for image recognition


Facebook’s Applied Machine Learning (AML) team discussed how they trained image recognition networks on large sets of public images with hashtags; the biggest dataset included 3.5 billion images and 17,000 hashtags. The breakthrough was made possible by FAIR’s research on training speed, which enabled training an image classification model on ImageNet an order of magnitude faster than the previous best.
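
The key trick is treating an image's hashtags as noisy labels, so no manual annotation is needed. A sketch of that weak-supervision setup with a tiny stand-in backbone and made-up shapes, making no claim to match Facebook's actual pipeline:

```python
import torch
import torch.nn as nn

num_hashtags = 17_000
# Tiny stand-in for a real image backbone (32x32 inputs keep it small).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_hashtags))

images = torch.randn(8, 3, 32, 32)        # a toy batch of images
targets = torch.zeros(8, num_hashtags)    # hashtags as multi-label targets
targets[0, [12, 4077]] = 1.0              # e.g. this image carries two hashtags

# Weak supervision: optimize a per-hashtag binary loss over noisy labels.
loss = nn.BCEWithLogitsLoss()(backbone(images), targets)
loss.backward()
```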

According to FAIR, “Our ultimate goal was to understand intelligence, to discover its fundamental principles, and to make machines significantly more intelligent.” The team continues to expand its research into areas such as developing machines that acquire models of the real world through self-supervised learning, and training machines to reason, plan, and conceive complex sequences of actions. This is why FAIR is also working on robotics, visual reasoning, and dialogue systems.
