
August 3, 2023

From WormGPT to DarkBERT, GenAI Boosting Cybercriminal Capabilities

(Maxim Apryatin/Shutterstock)

Companies are flocking to GenAI technologies to help automate business functions, such as reading and writing emails, generating Java and SQL code, and executing marketing campaigns. At the same time, cybercriminals are finding tools like WormGPT and FraudGPT useful for automating nefarious deeds, such as writing malware, distributing ransomware, and exploiting computer vulnerabilities around the Internet. With the pending release of API access to a language model dubbed DarkBERT into the criminal underground, the GenAI capabilities available to cybercriminals could increase significantly.

On July 13, researchers with SlashNext reported the emergence of WormGPT, an AI-powered tool that’s being actively used by cybercriminals. About two weeks later, the firm let the world know about another digital creation from the criminal underground, dubbed FraudGPT. FraudGPT is being promoted by its creator, who goes by the name “CanadianKingpin12,” as an “exclusive bot” designed for fraudsters, hackers, and spammers, SlashNext says in a blog post this week.

According to an ad posted on a cybercrime forum and discovered by SlashNext, FraudGPT offers a number of advanced GenAI capabilities, including the ability to:

  • Write malicious code;
  • Create undetectable malware;
  • Create phishing pages;
  • Create hacking tools;
  • Write scam pages / letters;
  • Find leaks and vulnerabilities;
  • Find “cardable” sites;
  • “And much more | sky is the limit.”

    Image from a video produced by cybercriminals and shared by SlashNext

When SlashNext contacted the malware’s author, the author insisted that FraudGPT was superior to WormGPT, a comparison that was SlashNext’s main goal in initiating the conversation. The malware author then went on to say that he or she had two more malicious GenAI products in development, DarkBART and DarkBERT, and that they would be integrated with Google Lens, giving the tools the capability to send text accompanied by images.

This perked up the ears of the security researchers at SlashNext, a Pleasanton, California-based company that provides protection against phishing and human hacking. DarkBERT is a large language model (LLM) created by a South Korean security research firm and trained on a large corpus of data culled from the Dark Web, with the aim of fighting cybercrime. It has not been publicly released, but CanadianKingpin12 claimed to have access to it (although it was not clear whether they actually did).

DarkBERT could potentially provide cybercriminals with a leg up in their malicious schemes. In his blog post, SlashNext’s Daniel Kelley, who identifies as “a reformed black hat computer hacker,” shares some of the potential ways that CanadianKingpin12 envisions the tool being used. They include:

  • “Assisting in executing advanced social engineering attacks to manipulate individuals;”
  • “Exploiting vulnerabilities in computer systems, including critical infrastructure;”
  • “Enabling the creation and distribution of malware, including ransomware;”
  • “The development of sophisticated phishing campaigns for stealing personal information;” and
  • “Providing information on zero-day vulnerabilities to end-users.”

“While it’s difficult to accurately gauge the true impact of these capabilities, it’s reasonable to expect that they will lower the barriers for aspiring cybercriminals,” Kelley writes. “Moreover, the rapid progression from WormGPT to FraudGPT and now ‘DarkBERT’ in under a month, underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape.”

What’s more, just as OpenAI has enabled thousands of companies to leverage powerful GenAI capabilities through its APIs, so too will the cybercriminal underground leverage APIs.

“This advancement will greatly simplify the process of integrating these tools into cybercriminals’ workflows and code,” Kelley writes. “Such progress raises significant concerns about potential consequences, as the use cases for this type of technology will likely become increasingly intricate.”
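
For a sense of what API-driven integration looks like on the legitimate side, the sketch below uses OpenAI’s Python SDK as it existed in mid-2023. The model name and prompt are illustrative assumptions rather than anything taken from the tools described here; the point is simply how few lines of code it takes to wire an LLM into an existing workflow, the same low barrier Kelley describes.

    # A minimal sketch of legitimate API-driven LLM integration, using
    # OpenAI's Python SDK (the 0.x interface current in mid-2023). The
    # model name and prompt are illustrative assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; issued by the API provider

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Draft a polite follow-up email to a customer."}],
    )
    print(response.choices[0].message.content)

Swapping one model endpoint for another is a one-line change, which is why API access to an uncensored underground model is the development that most concerns the researchers.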

The GenAI criminal activity recently caught the eye of Cybersixgill, an Israeli security firm. According to Delilah Schwartz, who works in threat intelligence at Cybersixgill, all three products are being advertised for sale.

“Cybersixgill observed threat actors advertising FraudGPT and DarkBARD on cybercrime forums and Telegram, in addition to chatter about the tools,” Schwartz says. “Malicious versions of deep language learning models are currently a hot commodity on the underground, producing malicious code, creating phishing content, and facilitating other illegal activities. While threat actors abuse legitimate artificial intelligence (AI) platforms with workarounds that evade safety restrictions, malicious AI tools go a step further and are specifically designed to facilitate criminal activities.”

The company has noted ads promoting FraudGPT, FraudBot, and DarkBARD as “Swiss Army Knife hacking tools.”

“One ad explicitly stated the tools are designed for ‘fraudsters, hackers, spammers, [and] like-minded individuals,’” Schwartz says. “If the tools perform as advertised, they would certainly enhance a variety of attack chains. With that being said, there appears to be a dearth of actual reviews from users championing the products’ capabilities, despite the abundance of advertisements.”

Related Items:

Feds Boost Cyber Spending as Security Threats to Data Proliferate

Security Concerns Causing Pullback in Open Source Data Science, Anaconda Warns

Filling Cybersecurity Blind Spots with Unsupervised Learning

 
