FraudGPT: The Alarming Rise of AI-Powered Cybercrime Tools

K. C. Sabreena Basheer 28 Jul, 2023 • 3 min read

In a dark and ominous corner of the internet, cybercriminals are again harnessing artificial intelligence’s power to advance their malicious agendas. Following the notorious WormGPT, there’s a new player in town, and its name is FraudGPT. This nefarious AI tool is specifically designed for offensive purposes, enabling threat actors to orchestrate sophisticated cybercrimes, ranging from spear phishing to creating undetectable malware. As the cybersecurity world braces for yet another challenge, let’s delve deeper into the world of FraudGPT and its potential implications for online security.

Also Read: Criminals Using AI to Impersonate Loved Ones


Fear the Rise of FraudGPT: The Dark Web Sensation

Just as the cybersecurity community was recovering from the impact of WormGPT, FraudGPT emerged as the latest cybercrime-generative AI tool. Its existence was revealed by Netenrich security researcher Rakesh Krishnan, who sounded the alarm about this new AI menace. Available on dark web marketplaces and secretive Telegram channels, FraudGPT offers a sinister array of offensive capabilities.

Also Read: Cybercriminals Use WormGPT to Breach Email Security


The Actor Behind FraudGPT

Behind the ominous curtain of anonymity, a mysterious actor going by the online alias “CanadianKingpin” claims responsibility for crafting FraudGPT. The AI bot exclusively caters to cybercriminals, offering various tools and features tailored to suit their malicious intentions. From spear-phishing emails to cracking tools and carding, FraudGPT is a potent weapon in the wrong hands.

Subscriptions and Costs

The cybercriminal underworld doesn’t operate on goodwill; it’s fueled by profit. FraudGPT, being no exception, is available for subscription for $200 per month, with discounted rates for six-month and yearly subscriptions ($1,000 and $1,700, respectively). This pay-to-play model makes it all the more accessible to those willing to exploit its capabilities.

Unmasking the Threats

The exact large language model (LLM) underlying FraudGPT remains a mystery, but its impact is far from elusive. With over 3,000 confirmed sales and reviews, cybercriminals are finding creative ways to wield its power for malevolent purposes. From writing hard-to-detect malicious code to identifying leaks and vulnerabilities, FraudGPT poses a grave threat to cybersecurity.

Also Read: PoisonGPT: Hugging Face LLM Spreads Fake News


Exploiting AI for Cybercriminal Activity

Cybercriminals are capitalizing on the availability of AI tools like OpenAI's ChatGPT to create adversarial variants stripped of ethical safeguards. FraudGPT exemplifies this trend, putting the capabilities of a modern LLM directly into the hands of threat actors with no guardrails in the way.

Also Read: How to Detect and Handle Deepfakes in the Age of AI?

Escalating Phishing-as-a-Service (PhaaS) Model

Phishing has long been a favored technique among cybercriminals, but FraudGPT takes it to an entirely new level. Its powerful AI-driven capabilities act as a launchpad for even novice actors to mount convincing phishing and business email compromise (BEC) attacks at scale. The potential consequences include the theft of sensitive information and unauthorized wire payments.

Also Read: 6 Steps to Protect Your Privacy While Using Generative AI Tools


The Ethical Dilemma

While AI tools like ChatGPT are developed with ethical safeguards, FraudGPT shows that those safeguards can be sidestepped entirely by building unrestricted alternatives. As Rakesh Krishnan points out, implementing a defense-in-depth strategy is crucial to counter these fast-moving threats. Organizations must leverage all available security telemetry for rapid analytics to identify and thwart phishing attempts before they escalate into ransomware attacks or data exfiltration.
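As a loose illustration of the kind of rapid telemetry analytics described above, here is a minimal rule-based sketch that scores incoming email metadata for common BEC red flags (executive display-name spoofing, urgency and wire-payment language). Everything here is a hypothetical example for this article: the domain, the executive watchlist, and the scoring weights are assumptions, not a real product's logic, and real defenses layer far more signals (DMARC alignment, sender reputation, anomaly detection).

```python
import re

# Hypothetical organization profile (assumptions for illustration only).
ORG_DOMAIN = "example.com"               # the defender's own sending domain
EXEC_NAMES = {"jane doe", "john smith"}  # watchlist of commonly spoofed executives

# Urgency / payment-pressure phrases typical of BEC lures.
URGENCY_PATTERNS = [
    r"\burgent\b",
    r"\bwire transfer\b",
    r"\bimmediately\b",
    r"\bgift cards?\b",
]

def bec_risk_score(display_name: str, from_address: str, body: str) -> int:
    """Return a crude risk score for one email; higher means more BEC-like."""
    score = 0
    domain = from_address.rsplit("@", 1)[-1].lower()
    # Executive impersonation: a trusted name paired with an outside domain.
    if display_name.strip().lower() in EXEC_NAMES and domain != ORG_DOMAIN:
        score += 2
    # Count urgency and payment-pressure phrases in the body.
    score += sum(
        bool(re.search(p, body, re.IGNORECASE)) for p in URGENCY_PATTERNS
    )
    return score

if __name__ == "__main__":
    # Spoofed executive from a free-mail domain, pressuring a wire payment.
    print(bec_risk_score(
        "Jane Doe",
        "jane.doe@freemail.example",
        "Please process this wire transfer immediately.",
    ))
```

A scoring pass like this would sit alongside, not replace, protocol-level checks such as SPF/DKIM/DMARC; its value is flagging messages for human review quickly.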

Also Read: Airtel Develops AI Tool to Identify Fraudulent Phishing Messages


Our Say

FraudGPT’s emergence underscores the alarming trend of cybercriminals using AI to develop sophisticated attack vectors. With the dark web as its breeding ground, this malicious AI tool poses a significant risk to individuals and organizations. The cybersecurity community must remain vigilant and take proactive measures to counter such threats effectively. As technology evolves, safeguarding sensitive information and digital assets becomes paramount to protecting against the rising tide of AI-powered cybercrime.

