Mitigating privacy risks using federated learning and differential privacy
Some data is simply too hot to handle. You don’t want to move it from its safe location unless absolutely necessary, and if you do move it, you’ll want to ensure that the data remains safe. Even so, you still need to train a model in order to create the required application.
From a security perspective, creating some types of models requires either federated learning or differential privacy because the data concerning the individuals of interest is locked down and inaccessible. To build a successful model, you may need to create the model and then train it using one of these alternative techniques. The following sections discuss these two common methods of mitigating privacy risks; each keeps the data safe while still providing a means to train a model.
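To make the differential privacy idea concrete, here is a minimal sketch of the Laplace mechanism applied to a simple count query. This is an illustrative example, not code from any particular privacy library; the function name and parameters are hypothetical. The key idea is that a counting query has sensitivity 1 (adding or removing one individual changes the answer by at most 1), so adding Laplace noise with scale 1/epsilon makes the released answer epsilon-differentially private.

```python
import numpy as np

def private_count(records, epsilon=1.0):
    """Return an epsilon-differentially private count of records.

    A count query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices for epsilon-differential privacy.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = len(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: release an approximate count over sensitive records
# without exposing any individual record.
ages = [34, 29, 51, 47, 62, 38]
noisy = private_count(ages, epsilon=0.5)
print(f"True count: {len(ages)}, released count: {noisy:.2f}")
```

The trade-off is visible in the `epsilon` parameter: a small value such as 0.1 adds enough noise to hide any single individual's presence but degrades accuracy, while a large value such as 10 gives nearly exact answers with little privacy protection.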
Distributing data and privacy risks using federated learning
Some situations call for obtaining data from multiple...