Preserving data privacy and model privacy
When dealing with ML and ML engineering requirements, we need to make sure that we protect both the training data and the parameters of the generated model from attackers. Given the chance, these malicious actors will perform a variety of attacks to extract the parameters of the trained model or even recover the data used to train it, which means that personally identifiable information (PII) may be revealed and stolen. If the model parameters are compromised, the attacker may be able to recreate the model that your company took months or years to develop and perform inference on their end. Scary, right? Here are a few examples of the attacks these malicious actors can perform:
- Model inversion attack: The attacker attempts to recover the dataset used to train the model.
- Model extraction attack: The attacker tries to steal the trained model using the prediction output values.
- Membership inference attack: The attacker attempts to infer whether a specific record was part of the dataset used to train the model (see the sketch after this list).
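
To make the membership inference idea more concrete, here is a minimal sketch of a confidence-thresholding attack against a deliberately overfit model. The synthetic dataset, the `RandomForestClassifier` target, and the `0.9` threshold are illustrative assumptions rather than a real target: overfit models tend to assign higher confidence to records they were trained on, and that gap is the signal the attacker exploits.

```python
# Minimal sketch of a confidence-thresholding membership inference attack.
# The dataset, model, and threshold below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a deliberately overfit "target" model on synthetic data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target_model = RandomForestClassifier(n_estimators=50, random_state=0)
target_model.fit(X_member, y_member)

def attacker_guess(model, records, threshold=0.9):
    """Guess 'member' when the model is highly confident about a record.

    Overfit models are usually more confident on training (member) records,
    which is the signal this simple attack exploits.
    """
    confidences = model.predict_proba(records).max(axis=1)
    return confidences >= threshold

# Compare how often records are flagged as members for true members vs. non-members.
member_hits = attacker_guess(target_model, X_member).mean()
nonmember_hits = attacker_guess(target_model, X_nonmember).mean()
print(f"Flagged as members: {member_hits:.2%} of members, "
      f"{nonmember_hits:.2%} of non-members")
```

If the flagged rate is noticeably higher for true members than for non-members, the attacker learns something about which records were in the training set, which is exactly the kind of leakage the privacy-preserving techniques discussed in this section aim to prevent.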