But that assumes you can get hold of the training data, says Kautz. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data, and more, that doesn’t require access to the training data at all.
Instead, they developed an algorithm that can recreate the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what’s in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges to shapes to more recognizable features.
Kautz’s team found that they could interrupt a model in the middle of these steps, reverse its direction, and recreate the input image from the model’s internal data. They tested the technique on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately reconstruct images from ImageNet, one of the best-known image-recognition datasets.
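To make the idea concrete, here is a minimal sketch of a generic feature-inversion attack of this flavor: optimize a candidate image until the model’s intermediate activations match the ones captured for a private input. This is an illustration only, not the Nvidia team’s actual algorithm; the choice of PyTorch, a torchvision ResNet, and the layer at which the model is "interrupted" are all assumptions made for the example.

```python
# Illustrative sketch only: generic feature inversion, not the Nvidia
# team's actual method. Assumes PyTorch and a torchvision ResNet-18.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# "Interrupt" the model partway through: keep only the early layers.
partial = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2,
)

# Intermediate activations captured for some private input image.
private_image = torch.rand(1, 3, 224, 224)   # stand-in for real data
with torch.no_grad():
    target_activations = partial(private_image)

# Recreate the input by optimizing a random image so that its
# activations at the interruption point match the captured ones.
guess = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([guess], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(partial(guess), target_activations)
    loss.backward()
    optimizer.step()
    guess.data.clamp_(0, 1)                   # keep pixel values in range
```

In practice, attacks of this kind often add image priors or a generative model to sharpen the reconstruction; the plain optimization above is just the simplest version of running the network "in reverse."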
As in Webster’s work, the recreated images closely resemble the real ones. “We were surprised by the final quality,” says Kautz.
The researchers argue that this kind of attack is not merely hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, models are sometimes only half-run on the device itself, with the partially processed data sent to the cloud for the final number crunching, an approach known as split computing. Most researchers assume that split computing won’t reveal any private data from a person’s phone because only the model’s internal data is shared, not the raw input, says Kautz. But his attack shows that this isn’t the case.
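The sketch below shows what a split-computing setup could look like and why it is assumed to be private: the device runs the early layers and ships only the intermediate tensor, never the raw photo, to the cloud, which finishes the computation. The split point and the use of a torchvision ResNet are assumptions for illustration; it is exactly this intermediate tensor that the inversion attack above targets.

```python
# Illustrative sketch of split computing with the same assumed ResNet-18:
# the phone runs the early layers, the cloud runs the rest.
import io
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

device_half = torch.nn.Sequential(
    model.conv1, model.bn1, model.relu, model.maxpool,
    model.layer1, model.layer2,
)
cloud_half = torch.nn.Sequential(
    model.layer3, model.layer4, model.avgpool,
    torch.nn.Flatten(1), model.fc,
)

# On the phone: compute intermediate activations from the user's photo.
photo = torch.rand(1, 3, 224, 224)            # stand-in for a real photo
with torch.no_grad():
    intermediate = device_half(photo)

# Serialize and "send" the intermediate tensor instead of the raw image.
buffer = io.BytesIO()
torch.save(intermediate, buffer)

# In the cloud: finish the computation and return the prediction.
buffer.seek(0)
received = torch.load(buffer)
with torch.no_grad():
    logits = cloud_half(received)
print(logits.argmax(dim=1))
```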
Kautz and colleagues are now trying to find ways to prevent models from leaking private data. “We wanted to understand the risks so we could minimize vulnerabilities,” he says.
Although they use very different techniques, he feels that his work and Webster’s complement each other well. Webster’s team showed that private data can be found in a model’s output; Kautz’s team showed that private data can be revealed by working in reverse, reconstructing the input. “It’s important to explore both aspects to better understand how to prevent attacks,” says Kautz.