If criminals can create their own reality, how can they be stopped?
Deepfakes, realistic counterfeit videos created by artificial intelligence, are being posted online in ever greater numbers, says a new report. In the last year, there has been a 100% increase in deepfakes on the web, and while the vast majority (96%) of them are pornographic in nature, this technology also poses a major security risk for businesses – because as deepfakes improve, so does their effectiveness as a tool for fraud.
Just imagine you get a video call from your boss. He says you need to transfer a large sum of company funds urgently to a new supplier. Now, if you received the same request via email, you might be suspicious, but this is a video call. You know what your boss looks and sounds like. So you transfer the money, as requested.
It’s only when your boss calls you to ask why you’ve made an unauthorised transfer into an unknown account that the truth finally comes out: you’ve been scammed.
In this situation, a criminal would have used deepfake technology to create a realistic model of your boss, which they could manipulate to say whatever they want. Your boss’s voice could be provided by an impersonator, or it could be generated by AI, just like the video.
It might seem like science fiction, but it’s not as far-fetched as it sounds. Existing deepfakes give you an idea of what’s possible with today’s technology, and as it improves, these fakes are going to get harder and harder to spot.
The good news is that creating convincing fakes isn’t easy right now, and it requires access to large amounts of footage of the subject being faked, so if your boss doesn’t make a lot of filmed public appearances, that will work in your favour. However, as deepfakes improve, the amount of source material needed will decrease, and it will become easier for novices to produce convincing videos. Have no doubt: this is a security problem that is in its infancy, and there’s every chance it’s going to get worse.
As things stand, deepfake videos usually look slightly unnatural, especially in real time, so they’re unlikely to fool anyone into transferring money to criminals. But audio deepfakes are another problem entirely, and they’ve already proven effective in at least one real-life crime.
Just this week, it was reported that the boss of a UK energy firm was tricked into handing £200,000 to criminals, who had used deepfake audio. Thinking he was speaking to the head of the parent company in Germany, he responded to what appeared to be an urgent request to pay a Hungarian supplier, but it was, of course, fake. Where the crooks got the voice samples to create their fake is unknown, and the victim company has chosen to remain anonymous. However, theoretically, they could have used YouTube videos, interviews, podcasts, even recordings of live speeches to build up a database of samples.
Again, it’s not particularly easy to create these fakes, so these attacks aren’t going to become widespread overnight. Also, it’s possible that the UK executive in this case wasn’t overly familiar with his German counterpart’s voice. Had he been, he might have spotted the ruse for what it was.
Assuming both audio and video deepfakes continue to improve, how can anyone protect themselves from scams like this?
First of all, the best way of confirming requests like large bank transfers is to ask in person. But, like in the case above, that’s not always practical. Right now, video and phone calls are still good ways to confirm information, but it’s harder to say what the solution will be if deepfakes get too accurate for humans to detect.
Technological solutions, like ID and device management, will play a part by limiting who can communicate using your company channels and which devices they can use to do it. Multi-factor authentication, a security measure that already makes a huge difference in the fight against cyber crime, will also grow in importance.
Companies might also try social solutions, like passwords that have to be spoken to confirm identity. Or they might simply ask something that only a real employee would know, like what colour the walls are in the stationery cupboard.
Unfortunately, no method is going to be 100% effective, so there will be more deepfake fraud victims in future. For many businesses, their best strategy will be to align themselves with a cyber-security-aware, proactive IT provider like TMB Group, who will stay abreast of any new or emerging threats, and who will know the best approach for dealing with them.
To find out more about our cyber security products and services, use our contact page, or call 0333 900 9050.