Mitigating threats to the algorithm
The ultimate goal of everything you read in this chapter is to develop a strategy for dealing with security threats. For example, as part of your ML application specification, you may be tasked with protecting user identity while still being able to identify particular users as part of a research project. The way to do this is to replace each user's identifying information with a token, as described in the Thwarting privacy attacks section of Chapter 2, Mitigating Risk at Training by Validating and Maintaining Datasets. If your application and dataset aren't configured to provide this protection, however, a user's identity could easily become public knowledge.

Don't assume that every hacker is looking for a positive response, either. Consider a terrorist organization breaking into a facial recognition application. In this case, the organization may be looking for members of its group who don't appear in the database, because those members can then operate without being recognized.
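To make the tokenization idea concrete, here is a minimal sketch in Python. The names (tokenize_user, resolve_token, PEPPER) and the choice of HMAC-SHA-256 keyed with a secret value are illustrative assumptions for this sketch, not the chapter's prescribed implementation; any keyed, one-way construction with a separately secured re-identification map would serve the same purpose:

```python
import hmac
import hashlib
import secrets

# Secret key held outside the dataset (for example, in a vault).
# This is an assumption of the sketch, not a fixed requirement.
PEPPER = secrets.token_bytes(32)

# Forward map kept separately under access control, so that authorized
# researchers can re-identify specific users when a study requires it.
_token_to_user: dict[str, str] = {}


def tokenize_user(user_id: str) -> str:
    """Replace a user's identifying information with an opaque token."""
    token = hmac.new(PEPPER, user_id.encode("utf-8"), hashlib.sha256).hexdigest()
    _token_to_user[token] = user_id
    return token


def resolve_token(token: str) -> str | None:
    """Controlled re-identification, for authorized research use only."""
    return _token_to_user.get(token)


# Usage: the dataset stores only the token, never the raw identity.
record = {"user": tokenize_user("alice@example.com"), "label": 1}
print(record["user"])                 # opaque 64-character hex token
print(resolve_token(record["user"]))  # alice@example.com (authorized path)
```

Because the token is deterministic for a given secret key, the same user maps to the same token across records, which preserves the linkability a research project needs while keeping the raw identity out of the dataset itself.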