Summary
In this chapter, we covered model tampering as an alternative way to compromise model integrity without the need to poison data. We looked at the different attack vectors, such as pickle serialization, lambda and custom layers, and neural payload injection. We discussed mitigations and examined edge AI, covering the additional risks that mobile and IoT applications entail and the defenses they require.
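To recap why pickle serialization is such a potent tampering vector, the sketch below shows how Python's `__reduce__` hook lets arbitrary code ride inside what looks like a serialized model file. The `MaliciousPayload` class and the `echo` command are hypothetical illustrations, not taken from the chapter's examples:

```python
import pickle


class MaliciousPayload:
    """Stand-in for an object embedded in a tampered model file."""

    def __reduce__(self):
        # On unpickling, pickle calls os.system with this argument
        # instead of restoring a benign object.
        import os
        return (os.system, ("echo compromised",))


# An attacker serializes the payload into the model artifact...
tainted = pickle.dumps(MaliciousPayload())

# ...and the victim's load step executes the command as a side effect.
pickle.loads(tainted)
```

This is why loading untrusted pickle files is never safe, and why safer serialization formats or scanning tools are preferred for models sourced from outside our control.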
Finally, we looked at model hijacking to repurpose the function of a model, either via code injection or via a newer technique called model reprogramming.
The defenses are similar in all these cases, but they rely heavily on the assumption that we fully control model development.
In the next chapter, we will look at supply chain attacks, the risks from third-party components, and how we can defend against poisoning and model tampering when using models sourced from outside our organization.