
Amazon unveils SageMaker: An end-to-end machine learning service

  • 3 min read
  • 01 Dec 2017

Machine learning was one of the most talked about topics at Amazon re:Invent this year. In order to make machine learning models accessible to everyday users, regardless of their expertise level, Amazon Web Services launched SageMaker, an end-to-end machine learning service.

Amazon SageMaker allows data scientists, developers, and machine learning experts to quickly build, train, and deploy machine learning models at scale. The image below shows the process SageMaker uses to help developers build ML models.

[Image: The Amazon SageMaker workflow for building, training, and deploying ML models. Source: aws.amazon.com]

Model Building

Amazon SageMaker makes it easy to build ML models by simplifying training and the selection of the best algorithms and frameworks for a particular model. Amazon SageMaker provides zero-setup hosted Jupyter notebooks that make it easy to explore, connect to, and visualize training data stored on Amazon S3. These notebook instances can run on either general-purpose instance types or GPU-powered instances.
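As an illustration, here is a minimal sketch of what exploring S3-hosted training data from a hosted notebook might look like; the bucket name, object key, and dataset are placeholders, not anything described in the article.

```python
# A minimal sketch of exploring training data from a SageMaker notebook.
# The bucket and key below are placeholders.
from io import BytesIO

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-training-data-bucket", Key="churn/train.csv")
df = pd.read_csv(BytesIO(obj["Body"].read()))

print(df.describe())      # quick statistical summary of the training set
df.hist(figsize=(10, 8))  # visualize feature distributions inline in the notebook
```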

Model Training

ML models can be trained with a single click in the Amazon SageMaker console. To prepare training data, SageMaker also provides a way to move data from Amazon RDS, Amazon DynamoDB, and Amazon Redshift into S3. Amazon SageMaker comes preconfigured to run TensorFlow and Apache MXNet, but developers can also bring their own frameworks and package their own training code in Docker containers.
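Training can also be launched programmatically. The sketch below uses the SageMaker Python SDK to start a training job from a bring-your-own Docker image; the image URI, IAM role, and S3 paths are placeholders, and exact parameter names vary between SDK versions.

```python
# A hedged sketch of kicking off a SageMaker training job with a custom container.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m4.xlarge",
    output_path="s3://my-bucket/model-artifacts/",
    sagemaker_session=session,
)

# SageMaker copies each channel's data from S3 into the training containers.
estimator.fit({"train": "s3://my-bucket/churn/train.csv"})
```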

Model Tuning and Hosting

Amazon SageMaker provides a model hosting service with HTTPS endpoints. These endpoints serve real-time inferences, scale to handle traffic, and allow several model variants to be A/B tested simultaneously. Amazon SageMaker can also automatically tune models to achieve high accuracy, which makes the training process faster and easier. SageMaker manages the underlying infrastructure, allowing developers to easily scale training to petabyte-scale datasets.
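Automatic model tuning is exposed through the same SDK. The following sketch continues the estimator from the training example and is purely illustrative: the objective metric, hyperparameter ranges, and regex are assumptions, not SageMaker defaults.

```python
# A sketch of SageMaker automatic model tuning over the estimator defined above.
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:accuracy",
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(0.001, 0.1),
        "num_layers": IntegerParameter(2, 10),
    },
    metric_definitions=[
        # Regex is a placeholder; it must match what the training script logs.
        {"Name": "validation:accuracy", "Regex": "val_acc=([0-9\\.]+)"}
    ],
    max_jobs=20,          # total training jobs the tuner may launch
    max_parallel_jobs=4,  # jobs run concurrently
)
tuner.fit({"train": "s3://my-bucket/churn/train.csv"})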

Model Deployment

After training and tuning comes the deployment phase. SageMaker deploys models on an auto-scaling cluster of Amazon EC2 instances that runs predictions on new data, with these high-performance instances spread across multiple Availability Zones.
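Deployment and real-time inference can likewise be driven from code. This hedged sketch deploys the estimator from the earlier examples to an HTTPS endpoint and then invokes it through the SageMaker runtime API; the instance type and sample payload are placeholders.

```python
# Deploy the trained model to a real-time HTTPS endpoint.
predictor = estimator.deploy(
    initial_instance_count=2,   # instances are spread across Availability Zones
    instance_type="ml.m4.xlarge",
)

# The endpoint can also be invoked directly through the SageMaker runtime API.
import boto3

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,
    ContentType="text/csv",
    Body="42,0,186.2,1.0",      # one unlabeled record in CSV form (placeholder)
)
print(response["Body"].read())
```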


According to the official product page, Amazon SageMaker has multiple use cases. One of them is ad targeting, where SageMaker can be used with other AWS services to build, train, and deploy ML models that target online ads, optimize return on ad spend, and segment customers. Another interesting use case is recommender systems, which SageMaker can train in its serverless, distributed environment and host on low-latency, auto-scaling endpoints.

SageMaker can also be used to build highly efficient industrial IoT ML models that predict machine failures or schedule maintenance.

As of now, Amazon SageMaker is free for developers for the first two months. Each month, developers are provided with 250 hours of t2.medium notebook usage, 50 hours of m4.xlarge usage for training, and 125 hours of m4.xlarge usage for hosting.

After the free period, pricing varies by region, and customers are billed per second of instance usage, per GB of storage, and per GB of data transferred into and out of the service.

AWS SageMaker provides an end-to-end solution for developing machine learning applications. The ease and flexibility it offers can be harnessed by developers to solve a range of business problems.