
AI video calling scams are on the rise and they work like this


Generative artificial intelligence is here, for better or for worse. Unfortunately, some of its biggest winners are fraudsters and misinformation peddlers. AI video generation is far from perfect, but it has become good enough to convincingly impersonate another person.




How do AI video call scams work?


The premise is simple: a scammer uses deepfake technology to impersonate someone else, then uses that fake identity to get something from you, usually sensitive financial information.

These scams are often a subset of romance scams, designed to target and exploit people looking for love on dating sites. However, the same technique can be used in other ways, such as impersonating a well-known celebrity or political figure, or occasionally someone you actually know (such as your boss at work). In the latter case, scammers often spoof a familiar phone number to better sell the illusion.


What is a deepfake?

“Deepfake” is the catch-all term for any image, video, or audio generated by artificial intelligence to mimic a real person, usually for the purpose of deception. Deepfake technology isn’t only used for nefarious purposes – most of it goes into memes and entertainment – but it has become the medium of choice for these video call scams.

What is the scammer’s goal?

In a nutshell? Information. This can take several forms, but at the end of the day, most AI scams want the same thing, even if the consequences are different.

At the lower end of the spectrum, a scammer might try to trick you into handing over sensitive information about, say, the company you work for. You may have details of a client or contract that could be sold to a competing business. This is probably the best-case scenario for you personally, but it could still do serious damage to your company.

In the worst cases, they could lure you into giving up your bank information or Social Security number, often in the guise of a lover who needs financial help, or an interviewer for a promising new job you never actually applied for.


In any case, once the fraudster has gathered the information, the damage is essentially done; what matters from that point on is how quickly you detect the fraud and mitigate the fallout.

How to spot an AI video call scam

Currently, the deepfakes used in most AI video call scams are inconsistent. They often have visual glitches and quirks that make the video or voice seem fake.

Look for things like facial expressions that don’t quite match, a background that shifts oddly, a voice that sounds a bit flat, and so on. Current technology also has difficulty tracking a subject who stands up or raises their arms above their head, since most models are trained on headshots and “shoulders-up” video.

However, AI technology is improving rapidly. While it’s good to recognize the shortcomings of current AI, relying on spotting them is dangerous in the long run. These deepfakes are already many times better than they were last year, and we are fast approaching the point where they become almost indistinguishable from real video or audio. And while AI-powered deepfake detection tools exist, you can’t rely on them either, because the technology is moving so fast.


How to protect yourself from deepfake scams

Rather than trying to detect AI-generated material, try taking a more proactive approach to security. Most information security methods that worked in the past will still work now.

Verify the identity of the caller


Instead of trying to identify someone by their face or voice, rely on signals that are harder to fake.

Make sure the call is coming from the correct phone number or account name. For apps like Teams or Zoom, check the email address that sent the meeting invitation or room code to see if it matches who the caller claims to be.

If you’re still concerned, ask them to verify their identity in other ways. For example, try texting them, “Are you on this Zoom call with me right now?” or something similar.


You can even try to engage them in conversation. Scammers are often thrown when forced to go “off script,” and if they’re pretending to be someone you actually know, they’ll probably struggle to answer questions like, “How’s Jimmy? I haven’t seen him since that fishing trip a while ago,” especially if you mix in fake details to trip them up.

Finally, if it’s someone you talk to a lot, consider borrowing the childhood “stranger danger” trick and agreeing on a code word; if you both know to say “marzipan” or something similar at the start of a conversation, that’s hard to fake.

Don’t give them sensitive information

Of course, the best protection is to keep all important information close to the vest. No one should ask you for your bank details or Social Security number out loud during a call (and you should never share them online). And if they’re rushing or pressuring you to do so, that’s all the more reason to end the call.


This kind of information should only be provided through official documentation or channels, and in a way that gives you time to verify the legitimacy of the source.

If someone tries to redirect you to a Google Doc or PDF by dropping a link in a Zoom or Teams text chat, ask them to send it to you from their work email instead. Then take some time to verify that it’s a legitimate email address before clicking the link, let alone entering any information.

The best protection against scammers is always to arrange things so that you are not rushed and can take the time to really think about what is going on before you react.
