Artificial Intelligence Transforming the Cybercrime Landscape: From Hacker Hobbies to Structured Industry
Cybercrime has undergone a fundamental transformation. What once required deep technical expertise and manual effort can now be carried out with the help of artificial intelligence, making attacks faster, cheaper, and far more difficult to detect. Cybercriminals are leveraging AI to run scam schemes targeting everything from retirement savings to corporate secrets with astonishing precision.
Recent data from Brian Singer, a Ph.D. researcher at Carnegie Mellon studying the use of large language models in cyberattacks, shows that between 50 and 75 percent of spam and phishing messages worldwide now originate from AI systems. This figure reflects a fundamental shift in how cybercrime operates.
Artificial Intelligence Creates Highly Convincing Scam Experiences
The same technology digital platforms use to tailor ads is now used by criminals to gather personal details and execute personalized scams. AI systems trained on corporate communication data can generate thousands of natural-sounding messages that match the style of the target organization. They mimic how executives write, reference recent news drawn from public records, and eliminate the language errors that once gave away scams run from abroad.
Alice Marwick, who leads research at Data & Society, an independent technology research organization, describes the most significant change: “The real change is in scope and scale. Scams are larger, more targeted, more convincing.”
Cybercriminals are also using deepfake technology to create fake videos and audio of company leaders. They use the same false identities to target many people simultaneously, creating what John Hultquist, head analyst at Google Threat Intelligence Group, calls “credibility at scale.”
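When language quality no longer gives an impersonation away, defenders fall back on identity signals rather than wording. The sketch below is a minimal, hypothetical Python illustration of one such check: flag a message whose display name claims to be a known executive but whose sending address is not on the organization's own domain. The executive names and the domain are assumptions made up for the example, not part of any real product.

from email import message_from_string
from email.utils import parseaddr

# Hypothetical illustration: flag messages where the display name claims to be
# a known executive but the sending domain is not the organization's own.
# EXECUTIVES and CORPORATE_DOMAIN are placeholder values for this sketch.
EXECUTIVES = {"jane doe", "john smith"}
CORPORATE_DOMAIN = "example.com"

def looks_like_impersonation(raw_email: str) -> bool:
    msg = message_from_string(raw_email)
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    claims_executive = display_name.strip().lower() in EXECUTIVES
    # Executive name in the display field, but mail sent from an outside domain.
    return claims_executive and domain != CORPORATE_DOMAIN

sample = "From: Jane Doe <jane.doe@lookalike-example.net>\nSubject: Urgent wire\n\nPlease process today."
print(looks_like_impersonation(sample))  # True: executive name, wrong domain

Real mail filters combine many such signals (authentication records, sending history, link reputation); the point here is simply that identity checks survive even when the prose is flawless.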
Cybercrime Evolving into a Structured Business Model
The biggest factor driving this change is the falling barrier to entry into the cybercrime world. Underground dark-web marketplaces now sell or rent AI tools for cybercrime for as little as $90 per month. These services carry names like WormGPT, FraudGPT, and DarkGPT and come with tiered pricing and professional customer support.
Nicolas Christin, head of the software and infrastructure department at Carnegie Mellon, details this ecosystem: “Developers sell subscriptions to attack platforms with tiered pricing and customer support.” Some of these services even include training materials on hacking techniques.
Margaret Cunningham, vice president of AI security and strategy at Darktrace, a cybersecurity company, states that barriers have become very low: “You don’t need to know how to code, just know where to find these tools.” A new development called “vibe-coding” allows aspiring criminals to use AI to create their own malicious programs without having to buy them from underground sources.
Cybercriminal operations have been run like businesses for years. Typical ransomware attacks involve specialized roles: access brokers who break into corporate networks and sell that access, penetration teams that move through systems stealing data, and ransomware service providers who deploy the malware, handle negotiations, and split the profits.
AI Enhances Efficiency and Profitability of Criminal Operations
Artificial intelligence has increased the speed, scale, and accessibility of these systems. Tasks that previously required deep technical knowledge can now be automated. This allows groups to operate with fewer personnel, lower risk, and higher profits.
Christin draws an analogy: “Think of it as the next phase of industrialization. AI boosts productivity without requiring more skilled labor.”
Cybercriminals are also becoming more skilled at selecting targets. They use AI to scan social media and identify people facing major life difficulties, such as divorce, the death of a family member, or job loss, circumstances that make someone more vulnerable to romance scams, fake investment schemes, or bogus job offers.
Can AI Conduct Attacks Entirely on Its Own?
A critical question arises: can AI launch cyberattacks entirely without human intervention? For now, the answer is no. Experts compare the situation to the development of fully autonomous vehicles: the last five percent (getting cars to drive anywhere, anytime, on their own) has yet to be achieved.
However, researchers are testing AI hacking capabilities in laboratory environments. A team at Carnegie Mellon, supported by Anthropic, successfully replicated the well-known Equifax data breach using AI earlier this year, a result experts consider a “big leap.”
Defending Against AI-Enhanced Cybercrime
On the other hand, AI companies are committed to using the same technology to strengthen digital defenses. Anthropic and OpenAI are developing AI systems that can continuously scan software code for vulnerabilities that criminals might exploit. Humans still need to approve any fixes.
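As a rough illustration of that “machine finds, human approves” workflow, the sketch below scans Python source files for a few patterns commonly associated with vulnerabilities and holds every finding for a human reviewer before anything is changed. It is a minimal, hypothetical example, not a description of Anthropic's or OpenAI's actual systems; the patterns, paths, and function names are assumptions chosen for illustration.

import re
from pathlib import Path

# Hypothetical illustration: a tiny static scanner that flags risky patterns
# and holds every finding for human approval before any fix is made.
RISKY_PATTERNS = {
    r"\beval\(": "use of eval() on potentially untrusted input",
    r"subprocess\.(run|call|Popen)\(.*shell=True": "shell=True allows command injection",
    r"yaml\.load\((?!.*SafeLoader)": "yaml.load without SafeLoader can execute arbitrary code",
    r"pickle\.loads?\(": "unpickling untrusted data can execute arbitrary code",
}

def scan_file(path: Path):
    """Return (line_number, line, reason) tuples for every risky match in one file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, line.strip(), reason))
    return findings

def review_queue(root: str):
    """Scan a source tree and ask a human to accept or dismiss each finding."""
    for path in Path(root).rglob("*.py"):
        for lineno, line, reason in scan_file(path):
            print(f"{path}:{lineno}: {reason}\n    {line}")
            # The human gate: nothing is patched automatically.
            decision = input("File a fix ticket for this finding? [y/N] ")
            if decision.lower().startswith("y"):
                print("-> queued for a developer-approved patch\n")

if __name__ == "__main__":
    review_queue("src")  # "src" is a placeholder path for this sketch

Real AI-assisted scanners reason about code far more deeply than these regular expressions can, but the control structure is the point: the machine proposes, a person approves.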
The latest AI programs developed by Stanford researchers already outperform some human testers at identifying security issues in networks. Still, AI will not stop every breach, so organizations must also focus on building resilient networks that keep operating during an attack.
This evolving cybercrime landscape shows that the battle between malicious AI users and digital defenses is still in its early stages. Awareness and preparedness remain the most powerful first line of defense.