
How hackers are using Deepfakes to trick people

  • 7 min read
  • 02 Oct 2019


Cybersecurity analysts have warned that AI-powered spoofing is now a real possibility and that people should be alert to voice- and image-based deepfakes designed to fool them.

What is a deepfake?


Deepfakes rely on a branch of AI called generative adversarial networks (GANs). A GAN pairs two machine learning networks that teach each other through an ongoing feedback loop. The first network, the generator, takes real content and alters it. The second, known as the discriminator, then tests the authenticity of the changes.

As the networks keep passing material back and forth and receiving feedback on it, each gets better at its job. GANs are still in their early stages, but numerous commercial applications are expected.
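To make that feedback loop concrete, below is a minimal sketch of a generic GAN training loop in Python with PyTorch. The network sizes, the random stand-in data, and the hyperparameters are illustrative placeholders of my choosing, not details from any real deepfake system.

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64

    # Generator: turns random noise into candidate "content".
    G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                      nn.Linear(128, data_dim))

    # Discriminator: scores how authentic a sample looks (1 = real, 0 = fake).
    D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                      nn.Linear(128, 1), nn.Sigmoid())

    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    for step in range(1000):
        real = torch.randn(32, data_dim)       # stand-in for real training content
        fake = G(torch.randn(32, latent_dim))  # generator's fabricated content

        # Step 1: the discriminator learns to tell real from fake.
        opt_D.zero_grad()
        d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
                  loss_fn(D(fake.detach()), torch.zeros(32, 1)))
        d_loss.backward()
        opt_D.step()

        # Step 2: the generator learns to fool the discriminator;
        # this back-and-forth is the feedback loop described above.
        opt_G.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        g_loss.backward()
        opt_G.step()

In a real deepfake system, the random stand-in data would be images, video frames, or audio, and both networks would be far larger, but the alternating generator/discriminator updates follow this same core loop.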

For example, some GANs can convert a single image of a person into different poses. Others can suggest outfits similar to what a celebrity wears in a photo, or turn a low-resolution picture into a high-resolution one.

But outside of those helpful uses, deepfakes can serve sinister purposes. Consider the blowback if a criminal created a deepfake video designed to hurt someone's reputation, such as a clip of a politician "admitting" to illegal activities like accepting a bribe.

Other manipulations already possible include misleading spoken dialogue, in which the lip movements of someone saying something offensive are mapped onto another person's face.

In one of the best-known examples of deepfake manipulation, BuzzFeed published a clip now widely known as "ObamaPeele," which combined video of President Obama with film director Jordan Peele's lip movements. The result made it seem as though Obama cursed and said things he never would in public.

Deepfakes are real enough to cause action


The advanced deepfake efforts that cybersecurity analysts warn about rely on AI to create something so real that it causes people to act.

For example, in March 2019, the CEO of a British energy firm received a call from what sounded like his boss. The message was urgent: the executive needed to transfer a large sum to a Hungarian supplier within the hour.

Only after the money was sent did it become clear that the boss was never on the line. Instead, cybercriminals had used AI to generate an audio clip mimicking his voice and played it over the phone, convincing the CEO to transfer the funds. The unnamed victim was scammed out of €220,000, equal to about $243,000.

Reports indicate it is the first successful attack of its kind, though an unusual way for hackers to fool victims. Some analysts note that similar attacks may have happened and gone unreported, or that the people involved never realized hackers had used this technology.

According to Rüdiger Kirsch, a fraud expert at the insurance company that covered the full amount of the claim, this was the first time the insurer had dealt with such a case. The AI mimicry was apparently so authentic that it captured the parent company leader's German accent and the melody of his voice.

Deepfakes capitalize on urgency


One of the telltale signs of deepfakes and other kinds of spoofing, most of which currently happen online, is a false sense of urgency. For example, lottery scammers emphasize that victims must send personal details immediately or miss out on their prizes. The deepfake hackers used the same kind of time constraint to fool this CEO.

The AI-generated voice on the other end of the phone told the CEO he needed to send the money to a Hungarian supplier within the hour, and he complied. Even more alarmingly, the deceptive technology was convincing enough for the hackers to use it across several phone calls to the same victim.

One of the best ways to avoid scams is to get further verification from outside sources, rather than immediately responding to the person engaging with you.

For example, if you're at work and get a call or email from someone in accounting who asks for your Social Security number or bank account details to update their records, the safest thing to do is to contact the accounting department yourself and verify the legitimacy.

Many online spoofing attempts have spelling or grammatical errors, too. The challenging thing about voice trickery, though, is that those characteristics don't apply. You can only go by what your ears tell you.

Since these kinds of attacks are not yet widespread, the safest way to avoid disastrous consequences is to resist the urgency and take the time to verify requests through other sources.

Hackers can target deepfake victims indefinitely


One of the most striking things about this AI deepfake case is that it involved more than one phone conversation.

After receiving the funds, the criminals called again to say that the parent company had sent reimbursement funds to the United Kingdom firm. They didn't stop there, though: the CEO received a third call, again impersonating the parent company representative and requesting another payment.

This time, the CEO grew suspicious and refused, since the promised reimbursement had not yet come through. Moreover, the latest call originated from an Austrian phone number. Eventually, the CEO called his real boss and uncovered the fakery while handling calls from both the genuine executive and the imposter simultaneously.

Evidence suggests the hackers used commercially available voice generation software to pull off their attack. However, it is not clear if the hackers used bots to respond when the victim asked questions of the caller posing as the parent company representative.

Why do deepfakes work so well?


This deepfake is undoubtedly more involved than the emails hackers send out in bulk, hoping to fool some unsuspecting victims. Even those that use company logos, fonts and familiar phrases are arguably not as realistic as something that mimics a person's voice so well that the victim can't distinguish the fake from the real thing.

The novelty of these incidents also makes individuals less aware that they could happen. Although many people receive training that helps them spot some online scams, the curriculum does not yet extend to these advanced deepfake cases.

Making the caller someone in a position of power increases the likelihood of compliance, too. Generally, if people hear a voice on the phone that they recognize as their superior's, they won't question it. They may also worry that any delay in fulfilling the caller's request will be perceived as a lack of trust in their boss or an unwillingness to follow orders.

You've probably heard people say, "I'll believe it when I see it." But, thanks to this emerging deepfake technology, you can't necessarily confirm the authenticity of something by hearing or seeing it.

That's an unfortunate development, and it highlights how important it is to investigate further before acting. That may mean checking facts or sources, or getting in touch with superiors directly to verify what they want you to do.

Those extra steps take more time, but they could save you from getting fooled.

Author Bio


Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com.

