Implementing a batch scoring pipeline
Operating a batch scoring service is very similar to the online-scoring approach discussed previously: you provide an environment, a compute target, and a scoring file. However, instead of receiving the data itself, your scoring file receives a path to a blob storage location containing a new batch of data. Your scoring function then processes the data asynchronously and outputs the predictions to a different storage location, writes them back to blob storage, or pushes them asynchronously to the calling service.
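The shape of such a scoring file can be sketched as follows. This is a minimal, hypothetical example: the `score_record` "model" is a stand-in threshold rule, and the function names and CSV format are illustrative assumptions, not part of any Azure Machine Learning API. In a real scoring file you would load your registered model and read from the mounted blob storage path instead.

```python
# Hypothetical sketch of a batch scoring script: read a batch of records
# from an input location, score each one, and write the predictions to an
# output location. The "model" below is a stand-in threshold rule for
# whatever model your own scoring file would load.
import csv
from pathlib import Path


def score_record(record):
    # Stand-in model: predict 1 when the feature exceeds a threshold.
    return 1 if float(record["feature"]) > 0.5 else 0


def run_batch(input_path, output_path):
    input_path, output_path = Path(input_path), Path(output_path)
    with input_path.open(newline="") as src, \
            output_path.open("w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=reader.fieldnames + ["prediction"])
        writer.writeheader()
        for record in reader:
            record["prediction"] = score_record(record)
            writer.writerow(record)
```

Because the script only sees paths, the same code works whether the input is a local file during development or a mounted blob storage location in the pipeline.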
How you implement your scoring file is up to you, as it is simply a Python script that you control. The only difference in the deployment process is that the batch-scoring script is deployed as a pipeline on an Azure Machine Learning cluster and triggered through a REST service. Therefore, it is important that your scoring script can be configured through command-line parameters. Remember that the difference with batch...
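Making the script configurable through command-line parameters can be as simple as an `argparse` entry point. The parameter names below (`--input-path`, `--output-path`, `--model-name`) are illustrative assumptions, not names mandated by Azure Machine Learning; the pipeline step that wraps the script passes whatever arguments you define here.

```python
# Hypothetical command-line interface for a batch-scoring script, so the
# pipeline (or the REST trigger behind it) can configure input and output
# locations at run time. Parameter names are illustrative, not mandated
# by Azure Machine Learning.
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Batch scoring step")
    parser.add_argument(
        "--input-path", required=True,
        help="Mounted blob storage path containing the new batch of data")
    parser.add_argument(
        "--output-path", required=True,
        help="Location where the predictions should be written")
    parser.add_argument(
        "--model-name", default="model",
        help="Name of the registered model to load for scoring")
    return parser.parse_args(argv)
```

With this in place, each pipeline run can point the same script at a different batch of data simply by changing the argument values.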