Preparing and testing the serve script in R

In this recipe, we will create a serve script using R that runs an inference API using the plumber package. This API loads the model during initialization and uses the model to perform predictions during endpoint invocation.

The following diagram shows the expected behavior of the R serve script that we will prepare in this recipe. The R serve script loads the model file from the /opt/ml/model directory and runs a plumber web server on port 8080:

Figure 2.84 – The R serve script loads and deserializes the model and runs a plumber API server that acts as the inference endpoint

The web server is expected to have the /ping and /invocations endpoints. This standalone R backend API server will run inside a custom container later.

Getting ready

Make sure you have completed the Preparing and testing the train script in R recipe.

How to do it...

We will start by preparing the api.r file:

  1. Double-click the api.r file inside the ml-r directory in the file tree:
    Figure 2.85 – An empty api.r file inside the ml-r directory

    Here, we can see four files under the ml-r directory. Remember that we created an empty api.r file in the Setting up the Python and R experimentation environments recipe:

    Figure 2.86 – Empty api.r file

    In the next couple of steps, we will add a few lines of code inside this api.r file. Later, we will learn how to use the plumber package to generate an API from this api.r file.

  2. Define the prepare_paths() function, which we will use to initialize the PATHS variable. This function returns a named list (R's dictionary-like data structure) that maps short keys to the relative paths of the primary files and directories used in the script, which we will later combine into absolute paths:
    prepare_paths <- function() {
        # Short keys for the files and directories under /opt/ml
        keys <- c('hyperparameters', 
                  'input', 
                  'data',
                  'model')
        # Relative paths matching the keys above, in the same order
        values <- c('input/config/hyperparameters.json', 
                    'input/config/inputdataconfig.json', 
                    'input/data/',
                    'model/')
        # Store as a named list so paths can be looked up by key
        paths <- as.list(values)
        names(paths) <- keys
        return(paths)
    }
        
    PATHS <- prepare_paths()
  3. Next, define the get_path() function, which makes use of the PATHS variable from the previous step:
    get_path <- function(key) {
        # Prepend the /opt/ml/ base directory to build the absolute path
        output <- paste(
            '/opt/ml/', PATHS[[key]], sep="")
        return(output)
    }
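
    To make the path handling concrete, here is a quick sanity check we can run in an interactive R session after defining the two functions above; the expected outputs (shown as comments) assume the PATHS variable from the previous step:

    PATHS[['model']]
    # [1] "model/"
    get_path('model')
    # [1] "/opt/ml/model/"
    get_path('hyperparameters')
    # [1] "/opt/ml/input/config/hyperparameters.json"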
  4. Create the following function (including the comments), which responds with "OK" when triggered from the /ping endpoint:
    #* @get /ping
    function(res) {
      res$body <- "OK"
      return(res)
    }

    The line containing #* @get /ping tells plumber that we will use this function to handle GET requests on the /ping route.

  5. Define the load_model() function:
    load_model <- function() {
      model <- NULL
      filename <- paste0(get_path('model'), 'model')
      print(filename)
      model <- readRDS(filename)
      return(model)
    }
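
    Note that readRDS() can only restore objects that were serialized with saveRDS(). As a reminder of where the model file comes from, here is a minimal sketch of the counterpart call, assuming the train script from the previous recipe saved the fitted model this way:

    # In the train script: serialize the fitted model
    # to /opt/ml/model/model so that load_model() can restore it
    saveRDS(model, file = paste0(get_path('model'), 'model'))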
  6. Define the following /invocations function, which loads the model and uses it to perform a prediction on the input value from the request body:
    #* @post /invocations
    function(req, res) {
      print(req$postBody)
      model <- load_model()
      payload_value <- as.double(req$postBody)
      X_test <- data.frame(payload_value)
      colnames(X_test) <- "X"
      
      print(summary(model))
      y_test <- predict(model, X_test)
      output <- y_test[[1]]
      print(output)
      
      res$body <- toString(output)
      return(res)
    }

    Here, we loaded the model using the load_model() function, transformed the input payload from the request body into a single-column data frame, used the predict() function to perform the actual prediction for the given X value, and returned the predicted value in the response body.
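
    To see this transformation flow in isolation, here is a self-contained sketch that mimics the handler body, with a toy linear model standing in for the real deserialized model (the training data here is made up for illustration):

    # Toy stand-in for the model returned by load_model()
    model <- lm(y ~ X, data = data.frame(X = 1:10, y = (1:10) * 2))

    postBody <- "1"                       # what req$postBody would contain
    payload_value <- as.double(postBody)  # convert the raw body to a number
    X_test <- data.frame(payload_value)
    colnames(X_test) <- "X"               # the column name must match training

    y_test <- predict(model, X_test)
    toString(y_test[[1]])                 # the string written to res$body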

    Tip

    You can access a working copy of the api.r file in the Machine Learning with Amazon SageMaker Cookbook GitHub repository: https://github.com/PacktPublishing/Machine-Learning-with-Amazon-SageMaker-Cookbook/blob/master/Chapter02/ml-r/api.r.

    Now that the api.r file is ready, let's prepare the serve script:

  7. Double-click the serve file inside the ml-r directory in the file tree:
    Figure 2.87 – The serve file inside the ml-r directory

    It should open an empty serve file, similar to what is shown in the following screenshot:

    Figure 2.88 – The serve file inside the ml-r directory

    We will add the necessary code to this empty serve file in the next set of steps.

  8. Start the serve script with the following lines of code. Here, we are loading the plumber and here packages:
    #!/usr/bin/Rscript
    suppressWarnings(library(plumber))
    library('here')

    The here package provides utility functions to help us easily build paths to files (for example, api.r).

  9. Add the following lines of code to start the plumber API server:
    path <- paste0(here(), "/api.r")
    pr <- plumb(path)
    pr$run(host="0.0.0.0", port=8080)

    Here, we used the plumb() function to build the router from the api.r file and its run() method to launch the web server. It is important to note that the web server needs to run on port 8080, as this is the port SageMaker expects the inference container to listen on.

    Tip

    You can access a working copy of the serve script in the Machine Learning with Amazon SageMaker Cookbook GitHub repository: https://github.com/PacktPublishing/Machine-Learning-with-Amazon-SageMaker-Cookbook/blob/master/Chapter02/ml-r/serve.
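
    Before wiring everything together, it can also help to confirm that plumber parsed both annotated routes from api.r. A quick check from an interactive R session could look as follows (assuming the plumber package is already installed, which we will do in a later step, and that the working directory is ml-r):

    library(plumber)
    pr <- plumb("api.r")
    # Printing the router should list the /ping and /invocations endpoints
    pr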

  10. Open a new Terminal tab:
    Figure 2.89 – Locating the Terminal

    Here, we see that a Terminal tab is already open. If you need to create a new one, simply click the plus (+) sign and then click New Terminal.

  11. Install libcurl4-openssl-dev and libsodium-dev using apt-get install. These are some of the prerequisites for installing the plumber package:
    sudo apt-get install -y --no-install-recommends libcurl4-openssl-dev
    sudo apt-get install -y --no-install-recommends libsodium-dev
  12. Install the here package:
    sudo R -e "install.packages('here',repos='https://cloud.r-project.org')"

    The here package helps us get the string path values we need to locate specific files (for example, api.r). Feel free to check out https://cran.r-project.org/web/packages/here/index.html for more information.
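
    As a quick illustration of what here() provides in this setup, the following should hold, assuming the working directory is the ml-r directory and the here package detects no project root markers above it:

    library('here')
    here()                    # e.g. "/home/ubuntu/environment/opt/ml-r"
    paste0(here(), "/api.r")  # the absolute path the serve script passes to plumb()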

  13. Install the plumber package:
    sudo R -e "install.packages('plumber',repos='https://cloud.r-project.org')"

    The plumber package allows us to generate an HTTP API in R. For more information, feel free to check out https://cran.r-project.org/web/packages/plumber/index.html.

  14. Navigate to the ml-r directory:
    cd /home/ubuntu/environment/opt/ml-r
  15. Make the serve script executable using chmod:
    chmod +x serve
  16. Run the serve script:
    ./serve

    This should yield log messages similar to the following ones:

    Figure 2.90 – The serve script running

    Here, we can see that our serve script has successfully run a plumber API web server on port 8080.

    Finally, we must trigger this running web server.

  17. Open a new Terminal tab:
    Figure 2.91 – New Terminal

    Here, we are creating a new Terminal tab as the first tab is already running the serve script.

  18. Set the value of the SERVE_IP variable to localhost:
    SERVE_IP=localhost
  19. Check if the ping endpoint is available with curl:
    curl http://$SERVE_IP:8080/ping

    Running the previous line of code should yield an OK from the /ping endpoint.

  20. Test the invocations endpoint with curl:
    curl -d "1" -X POST http://$SERVE_IP:8080/invocations

    We should get a value close to 881.342840085751.
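
    As an alternative to curl, we can run the same two checks from an R session using the httr package (assuming it is installed; run install.packages('httr') if it is not):

    library(httr)

    # GET /ping should return "OK"
    ping <- GET("http://localhost:8080/ping")
    content(ping, as = "text")

    # POST /invocations with "1" should return the predicted value as text
    pred <- POST("http://localhost:8080/invocations", body = "1")
    content(pred, as = "text")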

Now, let's see how this works!

How it works…

In this recipe, we prepared the serve script in R. The serve script makes use of the plumber package to serve an API that accepts GET requests on the /ping route and POST requests on the /invocations route. Inside the custom container, the script is expected to load the model file(s) from the model directory and run as the backend API server that responds to these two routes.

Compared to its Python recipe counterpart, we are dealing with two files instead of one because of how the plumber package structures an API:

  • The api.r file defines what the API looks like and how it behaves.
  • The serve script loads the api.r file with the plumb() function from the plumber package and launches the web server. Note that with Flask, there is no need to create a separate file to define the API routes.

When working with the plumber package, we start with an R file describing how the API will behave (for example, api.r). This R file follows this general format:

#* @get /ping
function(res) {
  res$body <- "OK"
  return(<RETURN VALUE>)
}
     
#* @post /invocations
function(req, res) {
  return(<RETURN VALUE>)
}

Once this R file is ready, we simply create an R script that makes use of the plumb() function from the plumber package. This will launch a web server using the configuration and behavior coded in the api.r file:

pr <- plumb(<PATH TO API.R>)
pr$run(host="0.0.0.0", port=8080)

With this, whenever the /ping URL is accessed, the mapped function defined in the api.r file is executed. Similarly, whenever the /invocations URL is accessed with a POST request, the corresponding mapped function is executed. For more information on the plumber package, feel free to check out https://www.rplumber.io/.
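
It is also worth noting that newer releases of the plumber package (version 1.0 and above) allow routes to be defined programmatically, without a separate annotated file. A minimal sketch of the same two-route API in that style, shown here only for comparison, could look like this:

library(plumber)

# Build the router programmatically instead of annotating api.r
root <- pr()
root <- pr_get(root, "/ping", function() "OK")
root <- pr_post(root, "/invocations", function(req) {
  # A real handler would load the model and call predict() here;
  # this sketch simply echoes the request body
  req$postBody
})
pr_run(root, host = "0.0.0.0", port = 8080)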
