Last updated at Thu, 09 May 2024 14:50:06 GMT
Part of what we do at Rapid7 is keep an eye on emerging trends, so we can help organizations prepare for and protect against external threats coming over the horizon. Over the past few years, we've seen sharp increases in the number of ransomware attacks, dark web activity, and even supply chain attacks, but there's one trend in particular we think is worth watching out for: deepfakes.
What is a deepfake?
A deepfake is an impersonation of someone — whether it's a fake photo or video of a person's face, an audio file or filter imitating their voice, or anything else that recreates some likeness of a person — developed with very sophisticated technologies, including artificial intelligence (AI) and machine learning (ML).
To be clear, visual and audio manipulation have been around for decades. For example, movies have used visual editing techniques to smooth over mistakes on film, while people have been using photo editing software to create memes since the early days of the internet. What separates these examples from deepfakes?
For one, deepfakes can be used with the intent to deceive. And two, they leverage more recently developed technologies like autoencoders and generative adversarial networks. These deep learning techniques enable threat actors to model specific aspects of a person's likeness, such as their facial structure or body posture. The result is very convincing imitations, and hackers are taking advantage of these clever guises.
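To make the autoencoder idea concrete: an autoencoder learns to compress input data into a small latent representation and then reconstruct it, and face-swap tools exploit this by training large versions on face images and swapping decoders between two people. Below is a minimal, purely illustrative sketch in NumPy, not a real deepfake pipeline; the toy "face feature" data, dimensions, and training settings are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face feature" dataset (hypothetical): 200 samples of 16-dimensional
# vectors that actually live on a 4-dimensional subspace, so compression
# without much loss is possible.
latent_true = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 16))
X = latent_true @ mixing

# A tiny linear autoencoder: encoder (16 -> 4) and decoder (4 -> 16).
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def reconstruction_error(X, W_enc, W_dec):
    Z = X @ W_enc       # encode: compress to 4 latent dimensions
    X_hat = Z @ W_dec   # decode: reconstruct the original 16 dimensions
    return float(np.mean((X - X_hat) ** 2))

initial_error = reconstruction_error(X, W_enc, W_dec)

# Plain gradient descent on the mean squared reconstruction error.
lr = 0.01
for _ in range(500):
    Z = X @ W_enc
    err = (Z @ W_dec) - X
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_error = reconstruction_error(X, W_enc, W_dec)
print(f"reconstruction MSE: {initial_error:.3f} -> {final_error:.3f}")
```

Real deepfake systems replace the linear layers with deep convolutional networks and train on thousands of images, but the core compress-then-reconstruct mechanic is the same.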
Deepfakes as a threat
Any global trend, whether a new technology or a pandemic, is ultimately adopted by the dark web underground community for manipulation and monetization. Deepfakes are no different.
As part of our threat intelligence research, we've been tracking "hacker chatter" around deepfakes on the dark web. We've seen more discussions about deepfakes over the last few years:
- In 2019, we identified 40 posts on dark web hacking forums discussing deepfakes.
- In 2020, that number rose to 94 posts.
- In 2021, we've seen a total of 92 posts so far — this number will likely outpace the prior year's 94 by the end of the year.
While deepfakes are not yet a widespread, established threat, this rise in activity indicates that more cybercriminals are becoming interested in them. The more we see deepfakes being talked about, the higher the chances are that we'll see more deepfake attacks in 2022.
In fact, we've already witnessed a high-profile attack leveraging a deepfake. In October of this year, Forbes reported on a fraud incident that affected a bank in Hong Kong. The bank manager received a call from a voice he recognized: a director at a company with whom he'd spoken before. The director said his company was going to make an acquisition and needed the bank manager to authorize transfers accordingly.
Alongside this call, he received what appeared to be a legitimate email from the director and a lawyer he worked with. Everything looked and sounded real, so the bank manager carried out $35 million worth of transfers. However — you guessed it — the bank manager was the victim of a deepfake.
The fraudsters used deep voice technology to clone the director's speech to dupe the bank manager. Investigators were able to trace around $400,000 worth of stolen funds and identified that around 17 individuals were involved in the scheme.
While this was only the second known case of fraudsters using voice-shaping tools to conduct a heist, it's an early example of the type of deepfake attacks the world may face in the future.
A potential evolution to deepfake as a service
Like many other emerging technologies, producing deepfakes still requires specialized skills and tools; however, we're already seeing deepfake capabilities become accessible to the masses via deepfake apps and websites.
Since deepfakes utilize such advanced AI and ML technologies, there won't be many threat actors with the skillset to make them on their own. Enter deepfake as a service.
Those who know how to leverage sophisticated AI can perform the service for others, enabling threat actors to fake a person's face and/or voice without understanding the intricacies behind how it works. All it will take from their standpoint is money. Then, they can conduct advanced social engineering attacks on unsuspecting victims, with the aim of making a sizable profit.
Clearly, deepfakes can be dangerous, but are they the next big thing? At the moment, deepfakes are more of a trending threat rather than an emerging threat, but the threat vector is undoubtedly gaining momentum. As such, it's worth including them in your threat intelligence efforts and keeping an eye on them over the long run.
The more you know...
By following deepfake trends, you can develop a plan early on to protect against them accordingly. That way, you can stay one step ahead of cybercriminals and defend your organization against these advanced external threats and social engineering attacks in the future.