AI tools are all the rage right now. Everyone is obsessed with them…even hackers.
According to a new report from Facebook parent company Meta, the company’s security team has been tracking new malware threats, including ones that weaponize the current AI trend.
“Over the past several months, we’ve investigated and taken action against malware strains taking advantage of people’s interest in OpenAI’s ChatGPT to trick them into installing malware pretending to provide AI functionality,” Meta writes in a new security report released by the company.
Meta claims that it has discovered “around ten new malware families” that pose as AI chatbot tools like OpenAI’s popular ChatGPT in order to hack into users’ accounts.
One of the more pressing schemes, according to Meta, is the proliferation of malicious web browser extensions that appear to offer ChatGPT functionality. Users download these extensions for browsers like Chrome or Firefox expecting an AI chatbot. Some of the extensions even work and provide the advertised chatbot features. However, they also contain malware that can access a user’s device.
According to Meta, it has discovered more than 1,000 unique URLs that offer malware disguised as ChatGPT or other AI-related tools and has blocked them from being shared on Facebook, Instagram, and WhatsApp.
Once a user downloads the malware, Meta says, bad actors can launch their attack immediately, and they are constantly updating their methods to get around security protocols. In one example, attackers were able to quickly automate the process of taking over business accounts and granting themselves advertising permissions.
Meta says it has reported the malicious links to the various domain registrars and hosting providers that are used by these bad actors.
In its report, Meta’s security researchers also dive into the more technical aspects of recent malware strains, such as Ducktail and NodeStealer. The report can be read in full.