Focus on Fraud: How AI Is Changing the Look of Financial Fraud
Today’s scammers are reaping the benefits of automation and technology, making fraud far more difficult to spot. Artificial intelligence streamlines criminals’ efforts, enabling them to carry out large-scale, targeted attacks quickly, efficiently and effectively.
Below are three common tactics scammers use, and how AI has elevated their game.
1. Suspicious emails and chats: Phishing and Smishing
Nearly gone are the days when a scammer’s email stood out with poor grammar and awkward phrasing. The addition of AI makes these messages harder to distinguish from legitimate communications.
- AI algorithms can learn patterns from existing documents and apply them to create new ones.
- Natural language processing (NLP) models can generate realistic text, including signatures.
- AI enables fraudsters to easily identify targets, gather information and generate sophisticated emails or chats using proper grammar, correct spelling and a more human touch.
Do’s and Don’ts to protect yourself:
- Do check the sender. Hover your mouse over links without clicking to see the actual URL. Be wary of shortened links, misspelled domains or even a slight variation, like an underscore, added “s” or use of an “r” and “n” combined to look like the letter “m.”
- Don’t rely on the old ways of identifying scam messages. Today they can look and sound natural, as if they are coming from a trusted source.
Visit our Security Center today to learn five SMART steps to take, helpful tips and more.
2. Voice calls and voice phishing: Vishing
Criminals may call their victims, using AI to replicate the voice of a friend or family member making an urgent plea for money. They may also use it to contact their victim’s bank in an effort to obtain account information or move funds.
- AI has made it faster and easier for scammers to manipulate caller IDs, a tactic called spoofing, so a call appears to come from a known source.
- Voice cloning technology now needs only a few seconds of audio to capture and replicate a voice that can be used to scam you out of money.
Do’s and Don’ts to protect yourself:
- Do verify requests. If someone urgently asks for money or sensitive information over the phone, verify their identity independently.
- Don’t verify using redial or a call-back feature. Instead, call back using a known number.
3. Deep fakes and synthetic identities
AI can create synthetic videos and virtual identities that appear real, such as an executive at your workplace or a banking partner, to convince you to reveal sensitive information or transfer money. Victims may not even realize they are interacting with a machine.
- Natural language generation tools produce coherent, convincing text and are also used to create fake social media profiles.
- AI-powered tools can alter photos, including facial features, backgrounds and lighting conditions. One person’s face may be superimposed onto another person’s body, making it appear as if the target person is saying or doing something they never did.
- Manipulated audio and video recordings may be combined to deceive victims.
Do’s and Don’ts to protect yourself:
- Do be cautious when receiving videos or audio messages. Deep fakes can convincingly mimic real people.
- Don’t hesitate to confirm critical transactions face-to-face. Meet in person or initiate a video call to verify identities.