
Hackers lure people by promising ChatGPT, steal information instead


Hackers are using the lure of generative artificial intelligence (AI) to trick people into installing harmful code on their devices, Meta warned Wednesday.

Chief information security officer Guy Rosen said in a news briefing that over the past month, security analysts at the social-media giant have found malicious software posing as ChatGPT or similar AI tools.

Rosen said: “The latest wave of malware campaigns have taken notice of generative AI technology that’s been capturing people’s imagination and everyone’s excitement.”

Rosen noted that Meta, the parent company of WhatsApp, Facebook, and Instagram, often shares what it learns with industry peers and others in the cyber defence community.

Rosen said Meta has seen threat actors hawk internet browser extensions that promise generative AI capabilities but contain malicious software designed to infect devices.

Hackers trap their prey with click-bait lures, encouraging people to click on malicious links or install programs that can steal victims’ data and other sensitive information.

Rosen added: “We’ve seen this across other topics that are popular, such as crypto scams fuelled by the immense interest in digital currency,” while noting that “from a bad actor’s perspective, ChatGPT is the new crypto.”

Meta’s security team said it has found and blocked more than a thousand web addresses touted as ChatGPT-like tools that are actually traps set by hackers.

Rosen said Meta has yet to see generative AI used as more than bait by hackers, but is bracing for the inevitability that it will be used as a weapon.

“Generative AI holds great promise and bad actors know it, so we should all be very vigilant to stay safe,” Rosen said.

Meanwhile, the tech giant is exploring how to use generative AI tools such as ChatGPT defensively, for instance to shield against hacking attacks and misleading online campaigns.

Meta head of security policy Nathaniel Gleicher said in the briefing: “We have teams that are already thinking through how [generative AI] could be abused, and the defences we need to put in place to counter that.”
