
Facebook open-sources Hyperparameter autotuning for fastText to automatically find the best hyperparameters for your dataset

  • 3 min read
  • 27 Aug 2019

Two years ago, the team at the Facebook AI Research (FAIR) lab open-sourced fastText, a library for building scalable solutions for text representation and classification. To make models work efficiently on datasets with a large number of categories, finding the best hyperparameters is crucial. However, searching for the best hyperparameters manually is difficult because the effect of each parameter varies from one dataset to another. To address this, Facebook has developed an autotune feature in fastText that automatically finds the best hyperparameters for your dataset. Yesterday, the team announced that they are open-sourcing the Hyperparameter autotuning feature for the fastText library.

What are hyperparameters?

Hyperparameters are parameters whose values are fixed before the training process begins. They are critical components of an application and can be tuned to control how a machine learning algorithm behaves. It is therefore important to search for the best hyperparameters, as an algorithm's performance can depend heavily on which values are selected.
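In fastText, for example, the learning rate, number of training epochs, word n-gram length, and embedding dimension are all hyperparameters fixed before training. Here is a minimal sketch of setting them manually with the fastText Python bindings (the file path and values are illustrative placeholders):

```python
import fasttext

# Train a text classifier with manually chosen hyperparameters.
# "train.txt" is a placeholder for a file of labeled examples in
# fastText's format (lines prefixed with __label__).
model = fasttext.train_supervised(
    input="train.txt",
    lr=0.5,          # learning rate
    epoch=25,        # number of passes over the training data
    wordNgrams=2,    # include bigram features
    dim=100,         # dimension of the word vectors
)
```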

The need for Hyperparameter autotuning

It is difficult and time-consuming to search for the best hyperparameters manually, even for expert users. The new feature makes this task easier by automatically determining the best hyperparameters for building an efficient text classifier. To use autotuning, a researcher provides the training data, a validation set, and a time constraint.
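A minimal sketch of what that looks like with the fastText Python bindings (file names are placeholders):

```python
import fasttext

# Autotuning: provide training data, a validation file, and a time
# budget; fastText searches for the best hyperparameters itself.
model = fasttext.train_supervised(
    input="train.txt",                   # placeholder training file
    autotuneValidationFile="valid.txt",  # placeholder validation file
    autotuneDuration=600,                # time constraint, in seconds
)
```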

The researcher can also constrain the size of the final model using the compression techniques available in fastText. Building a size-constrained text classifier is useful when deploying models on devices or in the cloud, since it keeps the memory footprint small.
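With the Python bindings, the size constraint is expressed through the autotuneModelSize argument; the sketch below uses placeholder paths and an illustrative 2 MB target:

```python
import fasttext

# Ask autotuning to keep the final model under 2 MB; fastText then
# also explores quantization settings to meet the size target.
model = fasttext.train_supervised(
    input="train.txt",
    autotuneValidationFile="valid.txt",
    autotuneModelSize="2M",
)
model.save_model("model.ftz")  # quantized models conventionally use .ftz
```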

With Hyperparameter autotuning, researchers can now easily build a memory-efficient classifier that can be used for various tasks, including language identification, sentiment analysis, tag prediction, spam detection, and topic classification.

The team's strategy for exploring various hyperparameters is inspired by existing tools such as Nevergrad, but has been tailored to fastText to exploit the specific structure of its models. The autotune feature explores hyperparameters by initially sampling in a large domain that shrinks around the best combinations over time.
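To illustrate the general idea (a toy sketch, not Facebook's actual implementation), a search of this kind repeatedly samples from a range and narrows that range around the best candidate found so far:

```python
import random

def shrinking_search(score_fn, lo=0.01, hi=1.0, rounds=5, samples=8):
    """Toy example: sample a hyperparameter from [lo, hi], then
    repeatedly shrink the interval around the best value seen."""
    best_x, best_score = None, float("-inf")
    for _ in range(rounds):
        for _ in range(samples):
            x = random.uniform(lo, hi)
            s = score_fn(x)
            if s > best_score:
                best_x, best_score = x, s
        # Halve the search interval, centered on the current best.
        width = (hi - lo) / 2
        lo = max(lo, best_x - width / 2)
        hi = min(hi, best_x + width / 2)
    return best_x, best_score

# Example: the "score" peaks at 0.3, so the search should converge there.
best_lr, _ = shrinking_search(lambda lr: -(lr - 0.3) ** 2)
```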

This new feature could be a competitor to Amazon SageMaker Automatic Model Tuning. With Amazon's offering, however, the user must select the hyperparameters to be tuned, a range to explore for each parameter, and the total number of training jobs, whereas Facebook's Hyperparameter autotuning selects the hyperparameters automatically.

To learn more, check out Facebook's official blog post.
