AI is already transforming workplaces in ways we could never have imagined. But while it’s revolutionizing aspects of work for the better, it’s also posing serious threats to businesses that can’t be ignored.

Chief among these emerging threats are deepfakes, a term that generally describes media, like videos, that have been doctored with AI technology. This type of content is extremely convincing, and it’s very difficult to tell whether it’s real. One of the most well-known examples of deepfakes is the set of viral Tom Cruise videos that circulated on TikTok a few years ago. If you watched the videos, the person in the footage looks and sounds like Tom Cruise, but it’s not him. It’s a generated replica of the actor (produced by a content creator) saying and doing things that Cruise never did.

When deepfakes are used in that way, they can seem pretty harmless. But there’s a dark side of deepfakes that must be acknowledged. Imagine, for example, that a video of a prominent figure making horrible statements circulates on social media. Or that a fake video of a business owner insulting her customers emerges.

As deepfake technology continues to evolve and edge its way into the working world, HR will need to be aware of the ramifications for both employees and employers.

3 Examples of How Deepfake Scams Hurt Organizations

  1. Data Theft
    Over the years, it’s been reported that cybercriminals are already using deepfakes to apply for remote jobs. Fraudsters usually start the process by posting fake job listings to collect real candidate information, then use deepfake video technology during the remote interviewing process. The FBI stated that over 16,000 people reported being part of this scam in 2020,1 and just last year, the FBI released another warning about the increase in complaints detailing how scammers use deepfakes to target remote positions.2

    Basically, with so many remote jobs, video interviews are quite common, and candidates don’t ever have to meet their employers face to face. In fact, more than 50% of employees hired since the early days of the pandemic have never met any of their coworkers in person, according to a survey by Green Building Elements.3

    In short, scammers are using deepfakes to land work-at-home positions so they can steal employee and company data and/or unleash ransomware on corporate networks.
  2. Impersonation Scams
    There is a well-known case in Hong Kong where scammers pretended to be an employee’s boss and called the employee using a deepfake voice. The employee was convinced he was speaking to his real boss and ended up giving the cybercriminals $35 million.4

    That may sound like an extreme example, but recent data suggests this type of fraud has led to losses of $3.4 billion for businesses in the U.S. alone.5 (For now, the data is limited in Canada.)

    As deepfake technology becomes more sophisticated and available to cybercriminals, businesses will be more at risk of fraudulent financial transactions.
  3. Attacks on Brand Reputation
    Deepfakes could be misused to seriously damage brand and business reputation. Just like the fake Tom Cruise videos, cyberhackers could potentially mimic CEOs, staff or leaders, disseminating false information or making false statements that could have dire consequences for your business.

    It could be tough to convince others that the person they saw with their own eyes on video was actually an impersonation.

How HR Can Fight the Fakes

Before diving into some of the ways HR can protect against deepfakes, it’s important to note that deepfake technology is still in its infancy. That means deepfakes are unlikely to be the number-one scam of choice for most fraudsters right now.

But deepfakes are quickly maturing and evolving, becoming increasingly difficult to detect. Until recently, creating deepfake content required considerable technical skill. Now, with the spread of generative AI tools like ChatGPT, more cybercriminals can generate deepfakes without advanced computing capabilities – which means businesses and HR need to know how to reduce the risk of being scammed.

One of the first steps in protecting against deepfake scams should be education. When there’s a knowledge gap, employees and employers are more vulnerable to attacks. So, HR should work with their employers to give employees at least a rudimentary understanding of deepfakes and a few signs for spotting a bogus video. And since deepfake job interviews are becoming more prevalent, HR should also consider meeting remote candidates in person at some point during the hiring process.

HR should also work with IT and employers to tighten basic cybersecurity procedures and policies. Multi-factor authentication is one security measure that should be implemented to safeguard sensitive data.

Another consideration (although not yet widely available) would be deepfake detectors, which can authenticate and verify whether the subject in a video is who they claim to be. Companies like Twitter and Google are already using them.

But even these detectors aren’t perfect. For example, engineers at a Canadian company called Dessa recently tested a deepfake detector. When they ran it on deepfake videos pulled from across the internet, it failed more than 40% of the time.6

They eventually fixed the problem by creating a technology that could spot the fakes, but it’s a reminder that constant AI reinvention is needed. HR needs to keep pace with emerging technological changes to protect against deepfake threats – and other potential AI issues.