AI chatbots can now craft perfectly written phishing emails in any language
Attackers are building long-term relationships before exploiting targets
Deepfake videos have become nearly indistinguishable from real footage
The breakneck evolution of AI tools able to generate convincing text, images and even live video is enabling ever smarter and more targeted scams, cybersecurity experts told AFP, urging internet users to raise their guard.
In recent weeks, a high-profile "romance scam" in France, in which a woman forked over 830,000 euros ($840,000), and fake donation drives for Los Angeles fire victims have shown that "absolutely everyone, private individuals or businesses, is a target for cyberattacks," said Arnaud Lemaire of cybersecurity firm F5.
One of the best-known forms of cyberattack is phishing, the sending of emails, texts or other messages under false pretenses.
Most try to get users to take an action such as clicking a link, installing a harmful program or divulging sensitive information.
Phishing and its social-engineering cousin "pretexting" together accounted for more than 20 percent of the almost 10,000 data breaches worldwide reported last year to US telecoms operator Verizon for the 2024 edition of its industry-staple Data Breach Investigations Report.
AI makes scams more sophisticated
AI chatbots powered by large language models (LLMs) save attackers time and allow for more elaborate fake messages, Lemaire said.
They also mean that "if someone is writing a phishing email... he can make the clues completely vanish" that might give away a non-native speaker of the target language.
But the text generators are just the tip of the AI iceberg.
For instance, AI can "take advantage of all the data that has been breached over the last few years to automate the creation of highly personalized scams," Steve Grobman, Chief Technical Officer at security software maker McAfee, told reporters on Thursday.
This is "something that just a few years ago would not be possible without an army of humans".
Rather than going for a quick score, attackers often aim to gain the trust of select individuals at target firms over months or years.
If an employee is successfully tricked, attackers "might wait until this person becomes very influential or there's a good chance for them to extort money" before exploiting the connection, said Martin Kraemer of cybersecurity training firm KnowBe4.
The stakes were on display in February 2024, when scammers swindled $26 million out of a multinational firm in Hong Kong.
Deepfakes: An unprecedented threat
Police said a finance worker believed he was videoconferencing with the company's CEO and other staff -- all in fact AI-generated deepfakes.
"The latest generation of deepfake video has got to a point where almost no consumers are able to tell the difference between an AI-generated image and a real image," Grobman said.
Internet users need to start applying the same skepticism to video as many now do to still images -- where "photoshop" has become a verb -- he added.
Faced with a purported news video online, applying that skepticism could be as simple as checking against a trusted source.
In personal communications, "I almost want to say it's like BDSM, bondage, where you have a safe word," F5's Lemaire joked.
"You say to yourself, here's the CEO asking me to make a $25 million bank transfer, I'll bring something personal in to make sure it's him."
Other tricks include asking a video caller to pan their camera around -- something AI for now has difficulty recreating, Lemaire said.
Scammers building complex crime networks
The online scam industry is so lucrative that "just like other businesses... there's supply chains and an ecosystem of tools to support it", Grobman said.
Malicious programs for hire include ransomware such as LockBit, which can encrypt data on targets' computers and threaten to release or delete it unless payment is made.
One of its suspected developers was arrested in Israel in December pending extradition to America.
AI tools include one that allowed a McAfee researcher to replace his own face with Hollywood star Tom Cruise in a video for as little as $5, Grobman said.
Even with all the new tools, "I'm not too worried at the defense side that we will be overwhelmed by AI," KnowBe4's Kraemer said.
It's "a tool that we can use for attack as well as for defense", he added.
Nevertheless, the final line of defense remains human for now.
"When we moved from walking and riding horses to driving automobiles, we needed to change the way we thought about transportation safety... that's what consumers are going to need today, the same sort of pivot," Grobman said.