Malicious AI Video Generation Tools Target Facebook and LinkedIn Users for Malware Distribution


Cybercriminals are exploiting the increasing public interest in Artificial Intelligence (AI) by delivering malware through fraudulent text-to-video tools.

Recent findings by security researchers indicate that these criminals are creating websites that purport to provide “AI video generator” services. These deceptive platforms are utilized to distribute various types of malware, including information stealers, Trojans, and backdoors.

The researchers identified these malicious websites through advertisements and through links posted in comments across social media platforms. Thousands of fraudulent advertisements targeting users have been detected on Facebook and LinkedIn since November 2024. The fake AI video generator tools impersonate legitimate services, using names such as “Luma AI,” “Canva Dream Lab,” and “Kling AI.”

In an effort to evade detection, the cybercriminal group frequently changes the domains utilized in their ads and generates new advertisements daily. The operation employs a network of over 30 websites that closely mimic well-known legitimate AI tools.

The primary payload in this campaign is the Starkveil dropper, classified as Trojan.Crypt. Written in Rust, the dropper must be executed twice to fully compromise a system: after the first run, it displays a fake error window to trick victims into launching the malware again.

Upon successful execution, the dropper installs the XWorm and Frostrift backdoors, along with the GRIMPULL downloader, all of which are also classified as Trojan.Crypt.

Once a system has been fully compromised, this suite of malware can harvest various types of data from the infected device and exfiltrate it to the cybercriminals over multiple communication channels.

Awareness of these malicious campaigns is crucial to mitigating risks associated with fake AI tools.

“The allure of trying the latest AI tool can lead anyone to fall victim to scams.”

To guard against such threats, consider the following precautions:

  • Critically examine posts or advertisements with unusually high view counts that promise free AI text-to-video tools, particularly those asking you to download executable files disguised as videos.
  • Be cautious of unsolicited messages or advertisements promising incredible AI tools or free trials, especially those creating a sense of urgency or asking for personal information.
  • Ensure you have up-to-date and active protection solutions to intercept malware infections early and detect information stealers.
  • Utilize web protection features in your browser that can identify and block scams and malicious websites.
  • Avoid clicking on sponsored search results; criminals often outbid the rightful owners for ad placements, so find legitimate product links through other means.
  • Be vigilant about ads presenting offers that seem too good to be true, particularly those involving urgent timelines or atypical payment methods, such as cryptocurrency.
  • Scrutinize URLs to identify any constructed to resemble reputable sites but that could potentially be fraudulent.
  • Download AI software or tools exclusively from official, trusted sources or verified app stores.
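Two of the red flags above, executables disguised as video files and lookalike domains, lend themselves to simple automated checks. The sketch below is illustrative only: the domain list, similarity threshold, and function names are assumptions, not details from the campaign analysis.

```python
# Heuristic checks for two red flags: executable files disguised as
# videos (e.g. "video.mp4.exe") and domains that closely imitate
# legitimate AI-tool domains. Domain list and threshold are assumed
# examples for illustration.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_AI_DOMAINS = ["lumalabs.ai", "canva.com", "klingai.com"]  # assumed list
EXECUTABLE_EXTS = {".exe", ".scr", ".bat", ".msi", ".com"}
MEDIA_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def is_disguised_executable(filename: str) -> bool:
    """Flag names where a media extension hides a trailing executable one."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False
    _, middle, last = parts
    return f".{middle}" in MEDIA_EXTS and f".{last}" in EXECUTABLE_EXTS

def is_lookalike_domain(url: str, threshold: float = 0.8) -> bool:
    """Flag hostnames that resemble, but do not match, a known legitimate domain."""
    host = (urlparse(url).hostname or "").lower()
    for legit in KNOWN_AI_DOMAINS:
        if host == legit or host.endswith("." + legit):
            return False  # exact or subdomain match: treat as legitimate
        if SequenceMatcher(None, host, legit).ratio() >= threshold:
            return True   # near-miss spelling: likely an imitation
    return False
```

String-similarity checks like this are a coarse filter, not a verdict; they are best combined with the other precautions above, such as downloading only from official sources.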

To further enhance your understanding of how to identify scams, consider attending an upcoming live session focused on actionable strategies.

It is imperative to address cybersecurity threats effectively. Ensure the safety of your devices by employing comprehensive protective measures.