Unknown threat actors have been observed weaponizing v0, a generative artificial intelligence (AI) tool from Vercel, to design fake sign-in pages that impersonate their legitimate counterparts.
“This observation signals a new evolution in the weaponization of Generative AI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts,” Okta Threat Intelligence researchers Houssem Eddine Bordjiba and Paula De la Hoz said.
v0 is an AI-powered offering from Vercel that allows users to create basic landing pages and full-stack apps using natural language prompts.

The identity services provider said it has observed scammers using the technology to develop convincing replicas of login pages associated with multiple brands, including an unnamed customer of its own. Following responsible disclosure, Vercel has blocked access to these phishing sites.
Unlike older phishing kits, which required meaningful effort to configure, tools like v0 let attackers build convincing fake pages simply by typing a prompt. The process is faster, easier, and requires no coding skills, lowering the barrier for less experienced threat actors to produce realistic phishing sites. The shift is not only about speed; it is about how simple the process has become.
The threat actors behind the campaign have also been found to host other resources such as the impersonated company logos on Vercel’s infrastructure, likely in an effort to abuse the trust associated with the developer platform and evade detection.
The problem is also exacerbated by the availability of various direct clones of the v0 application on GitHub, making it a lot easier for threat actors to spin up phishing pages without having to rely on phishing kits.
“The observed activity confirms that today’s threat actors are actively experimenting with and weaponizing leading GenAI tools to streamline and enhance their phishing capabilities,” the researchers said.
“The use of a platform like Vercel’s v0.dev allows emerging threat actors to rapidly produce high-quality, deceptive phishing pages, increasing the speed and scale of their operations.”
The development comes as bad actors continue to leverage large language models (LLMs) to aid in their criminal activities, building uncensored versions of these models that are explicitly designed for illicit purposes. One such LLM that has gained popularity in the cybercrime landscape is WhiteRabbitNeo, which advertises itself as an “Uncensored AI model for (Dev) SecOps teams.”
“Cybercriminals are increasingly gravitating towards uncensored LLMs, cybercriminal-designed LLMs, and jailbreaking legitimate LLMs,” Cisco Talos researcher Jaeson Schultz said.
“Uncensored LLMs are unaligned models that operate without the constraints of guardrails. These systems happily generate sensitive, controversial, or potentially harmful output in response to user prompts. As a result, uncensored LLMs are perfectly suited for cybercriminal usage.”
This fits a broader shift: phishing is being powered by AI in more ways than before. Fake emails, cloned voices, and even deepfake videos are appearing in social engineering attacks. These tools help attackers scale up quickly, turning small scams into large, automated campaigns. It is no longer just about tricking individual users; it is about building entire systems of deception.
Some parts of this article are sourced from:
thehackernews.com