Google Offers Up to $30,000 to Hack Its AI: New Bug Bounty Program Targets AI Vulnerabilities

Google launches an AI-focused bug bounty program, offering up to $30,000 to ethical hackers who uncover critical flaws in its AI systems.

Google is putting serious money behind AI safety. The tech giant has rolled out a new AI bug bounty program that promises rewards of up to $30,000 for security researchers who uncover critical vulnerabilities in its artificial intelligence-driven products — including Search, Gemini Apps, Gmail, and Workspace tools.

The initiative marks a major expansion of Google’s long-running Vulnerability Reward Program (VRP), shifting its focus from traditional software bugs to the evolving field of AI security. The company wants to engage cybersecurity experts and ethical hackers in identifying weaknesses that could lead to rogue actions — incidents in which an AI system behaves unpredictably or dangerously.

Google says it is particularly interested in preventing situations where AI could leak personal information, execute unintended commands, or allow attackers to manipulate connected smart devices. For instance, if an attacker could trick Google Home into unlocking a door or prompt Gmail to summarize private emails and send them to an unauthorized recipient, those would qualify as high-severity exploits under the new program.

The company clarified, however, that AI hallucinations — cases where an AI generates incorrect or nonsensical content — don’t count as bugs for bounty purposes. Likewise, issues related to offensive or copyrighted AI-generated content should be reported directly through product feedback tools rather than the reward program. Google notes that such feedback helps its AI safety teams refine and retrain models to reduce harmful outputs more effectively.

When it comes to payouts, the biggest rewards — up to $20,000 — are reserved for vulnerabilities discovered in flagship AI products such as Search, Gemini, Gmail, and Drive. Reports that demonstrate exceptional creativity or technical insight could earn an additional bonus, bringing the total payout up to $30,000. Smaller yet meaningful rewards will go to researchers who identify weaknesses in other tools like NotebookLM or Google’s experimental AI assistant Jules.

According to Google, this new initiative builds upon its broader efforts to fortify AI security as the technology becomes deeply embedded in everyday tools and workflows. The company revealed that security researchers have already earned over $430,000 in the past two years for uncovering AI-related issues — even before the formal launch of this program.

In parallel, Google also introduced CodeMender, an AI-powered tool that automatically detects and fixes security vulnerabilities in open-source software. Acting as an “AI fixing AI” system, CodeMender has already helped patch more than 70 verified vulnerabilities, each validated by human security reviewers.

By combining human expertise with intelligent automation, Google aims to make its AI ecosystem more resilient against emerging cyber threats. With this new reward program and tools like CodeMender, the company is signaling its commitment to building a safer, more transparent future for artificial intelligence.
