Millions of pounds have been stolen from companies through a “deepfaked” audio hacking campaign, reports reveal.
Software security firm Symantec says it has encountered three “deepfake” attacks – a method whereby hackers use AI technology to synthesise other people’s voices and trick listeners into following instructions.
In these cases, the voices of company chiefs were mocked up to lure senior financial officers into handing over cash. Ambient noise was added to the background of the recordings to smooth over the less convincing aspects of the synthesised speech.
Symantec said the AI technology used by the hackers could learn from huge swathes of audio footage, which can easily be gathered from the average boss’s publicly available recorded dialogue.
Symantec’s chief technology officer, Dr Hugh Thompson, said:
“Corporate videos, earnings calls and media appearances, as well as conference keynotes and presentations, would all be useful for fakers looking to build a model of someone’s voice. The model can probably be almost perfect.”
“Who would not fall for something like that?” Dr Thompson added.
Creating audio fakes to a sufficiently high standard is no easy task, however, and would require substantial time, money and expertise. As reported by the BBC, Dr Alexander Adam, a data scientist at Faculty, maintains that “training the models costs thousands of pounds.”
“This is because you need a lot of computing power and the human ear is very sensitive to a wide range of frequencies, so getting the model to sound truly realistic takes a lot of time.”
According to Dr Adam, hours of good-quality audio would be needed to pin down and accurately reproduce a victim’s vocal idiosyncrasies.