Deployment – what happens after the workstation?
In Chapter 1, we discussed deployment strategies for managing AI/ML products in production. In this section, we want you to understand the avenues available from a DevOps perspective once you deploy models outside the training workstation or training environment itself. Perhaps you use something such as GitLab to manage the branches of your code repository for the various AI/ML applications in your product, and you run experiments there. Once you are ready to update your models after retraining, however, you will be pushing new models into production regularly. This means you need a pipeline that supports this ongoing cycle of experimentation, retraining, and deployment. This section focuses primarily on the considerations that arise after we place a finished ML model into production (a live environment) where it will be accessed...
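To make the shape of such a pipeline concrete, here is a minimal sketch of a GitLab CI/CD configuration with retrain, evaluate, and deploy stages. This is illustrative only: the job names and the `train.py`, `evaluate.py`, and `deploy.py` scripts are hypothetical placeholders, not part of any specific project.

```yaml
# Hypothetical .gitlab-ci.yml sketch — script names are placeholders.
stages:
  - train
  - evaluate
  - deploy

retrain_model:
  stage: train
  script:
    - python train.py --output model.pkl   # retrain and save the model artifact
  artifacts:
    paths:
      - model.pkl                          # pass the model to later stages

evaluate_model:
  stage: evaluate
  script:
    - python evaluate.py model.pkl         # gate deployment on evaluation metrics
  needs: [retrain_model]

deploy_model:
  stage: deploy
  script:
    - python deploy.py model.pkl           # push the model to the live environment
  needs: [evaluate_model]
  rules:
    - if: $CI_COMMIT_BRANCH == "main"      # deploy only from the main branch
```

The key design point is the evaluation gate between retraining and deployment: a retrained model only reaches production if it passes whatever checks the evaluate stage enforces.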