AI models are surprisingly hackable—provided you can somehow sniff out the model’s electromagnetic signature. Researchers at North Carolina State University have publicly described such a technique in a new paper, although they repeatedly emphasize that they do not actually want to help people attack neural networks. All they needed was a single electromagnetic probe, a few pre-trained, open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method involves analyzing the electromagnetic radiation emitted while the TPU chip is actively running.
“Building and training a neural network is extremely expensive,” said the study’s lead author and NC State Ph.D. student Ashley Kurian on a call with Gizmodo. “It’s intellectual property that the company owns, and it takes a lot of time and computing resources. For example, ChatGPT — it has billions of parameters, which are kind of a secret. When someone steals it, ChatGPT is theirs. You know, they don’t have to pay for it, and they could also sell it.”
Theft is already a major concern in the world of artificial intelligence, although it usually runs in the opposite direction: AI developers train their models on copyrighted works without obtaining permission from the human creators. That practice has sparked lawsuits and even tools that help artists fight back by “poisoning” art generators.
“Electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI’s processing behavior,” Kurian explained in a statement, calling it “the easy part.” But to decipher the model’s hyperparameters—its architecture and defining details—they had to compare the electromagnetic field data with data from other AI models running on the same kind of chip.
By doing so, they were able to “determine the architecture and specific characteristics—known as layer details—we would need to make a copy of the AI model,” Kurian explained, adding that they could do so with “99.91% accuracy.” To pull this off, the researchers had physical access to the chip, both to probe it and to run other models on it. They also worked directly with Google to help the company determine how attackable its chips are.
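The paper’s actual signal-processing pipeline is not detailed in this article, but the general idea—recording an electromagnetic trace and matching it against reference traces captured from known models on the same type of chip—can be illustrated with a toy sketch. Everything below (function names, trace data, labels) is hypothetical and stands in for far more sophisticated analysis.

```python
# Toy illustration of matching a captured electromagnetic (EM) trace against
# reference traces recorded from known layer configurations on the same chip.
# This is a minimal sketch of the general matching idea, not the researchers'
# actual method; all names and data here are made up for illustration.
import numpy as np


def normalize(trace: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling so traces of different gain are comparable."""
    return (trace - trace.mean()) / (trace.std() + 1e-12)


def similarity(captured: np.ndarray, reference: np.ndarray) -> float:
    """Peak of the normalized cross-correlation between two EM traces."""
    a, b = normalize(captured), normalize(reference)
    return float(np.max(np.correlate(a, b, mode="full")) / len(a))


def guess_layer_config(captured: np.ndarray,
                       references: dict[str, np.ndarray]) -> str:
    """Return the label of the known configuration whose trace best matches."""
    return max(references, key=lambda label: similarity(captured, references[label]))


# Hypothetical usage: in practice the reference traces would be recorded by
# running open-source models on the same kind of chip and probing its emissions.
rng = np.random.default_rng(0)
references = {
    "conv3x3_64ch": rng.normal(size=10_000),
    "dense_1024": rng.normal(size=10_000),
}
captured = references["dense_1024"] + 0.1 * rng.normal(size=10_000)  # noisy capture
print(guess_layer_config(captured, references))  # expected: "dense_1024"
```

The sketch reduces the attack to a nearest-neighbor lookup over correlation scores; recovering full layer details with the reported 99.91% accuracy would require far richer features and many reference measurements.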
Kurian speculated that it could also be possible to steal models running on smartphones, for example, but their ultra-compact design would make monitoring the electromagnetic signals even trickier.
“Side-channel attacks on edge devices are nothing new,” Mehmet Sencan, an AI standards security researcher at the nonprofit Atlas Computing, told Gizmodo. But this particular technique of extracting the hyperparameters of an entire model architecture is significant. Because AI hardware “performs inference in plaintext,” Sencan explained, “anyone deploying their models on the edge or in any server that is not physically secured would have to assume their architectures can be extracted through extensive probing.”