What is MLOps?
We’ve covered such a huge amount of content in this book that it’s almost hard to take it all in. From the absolute foundations of pretraining, we’ve worked through use cases, datasets, models, GPU optimizations, distribution basics, hyperparameters, working with SageMaker, fine-tuning, bias detection and mitigation, hosting your model, and prompt engineering. Now, we come to the art and science of tying it all together.
MLOps stands for machine learning operations. Broadly speaking, it includes a whole set of technologies, people, and processes that your organization can adopt to streamline your machine learning workflows. In the last few chapters, you learned about building RESTful APIs to host your model, along with tips to improve your prompt engineering. Here, we’ll focus on building a deployment workflow to integrate this model into your application.
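To make the idea of a deployment workflow concrete, here is a minimal sketch using SageMaker Pipelines to register a trained model so it can be promoted to an endpoint. This is not the only way to build the workflow, just one illustration under stated assumptions: the role ARN, container image URI, S3 model artifact path, and model package group name below are all placeholders you would replace with your own values.

```python
# A minimal deployment-workflow sketch with SageMaker Pipelines.
# Assumptions (placeholders, not from the text): ROLE, IMAGE_URI, MODEL_DATA,
# and the model package group name must be replaced with your own resources.
from sagemaker.model import Model
from sagemaker.workflow.model_step import ModelStep
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_context import PipelineSession

ROLE = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder
IMAGE_URI = "<your-serving-container-image-uri>"                # placeholder
MODEL_DATA = "s3://<your-bucket>/model/model.tar.gz"            # placeholder

pipeline_session = PipelineSession()

# Wrap the trained artifact in a Model object bound to the pipeline session,
# so the register() call below is captured as a pipeline step rather than
# executed immediately.
model = Model(
    image_uri=IMAGE_URI,
    model_data=MODEL_DATA,
    role=ROLE,
    sagemaker_session=pipeline_session,
)

# Register the model into a model package group; the approval status can act
# as a gate before anything is deployed to production.
register_step = ModelStep(
    name="RegisterModel",
    step_args=model.register(
        content_types=["application/json"],
        response_types=["application/json"],
        inference_instances=["ml.m5.xlarge"],
        transform_instances=["ml.m5.xlarge"],
        model_package_group_name="my-llm-package-group",  # placeholder
        approval_status="PendingManualApproval",
    ),
)

# Assemble and run the pipeline; downstream automation (or a human approver)
# promotes the registered model version to a real-time endpoint.
pipeline = Pipeline(
    name="model-deployment-pipeline",
    steps=[register_step],
    sagemaker_session=pipeline_session,
)
pipeline.upsert(role_arn=ROLE)
execution = pipeline.start()
```

The key design choice in a sketch like this is separating registration from deployment: the pipeline produces an approved, versioned model package, and a separate step or event-driven process handles standing up the endpoint, which keeps the workflow auditable and repeatable.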
Personally, I find the pipeline aspect of MLOps the most compelling. A pipeline...