OpenClaw Founder Slams Google Over Gemini Ban, Company Cites AI Misuse
Fresh tensions have erupted in the artificial intelligence space after Google restricted access to its Gemini Pro services for some users of OpenClaw, an open-source AI coding tool. The move has sparked criticism from OpenClaw’s founder, who described Google’s action as “draconian”, while the tech giant insists the decision was necessary to curb misuse.
At the centre of the controversy is Google’s Antigravity tool, an AI-powered software development platform built on Gemini. Designed to help users write software even without formal coding expertise, Antigravity has gained popularity among developers and tech enthusiasts. However, Google claims that certain OpenClaw users were exploiting the backend in ways that disrupted service quality.
OpenClaw, an open-source coding agent framework, allows developers to connect AI models such as Google’s Gemini and Anthropic’s Claude into a single workflow. Its flexibility has made it especially popular within developer communities. But that same openness appears to have drawn scrutiny from AI providers concerned about platform misuse.
The situation escalated when Peter Steinberger, founder of OpenClaw, publicly criticised Google’s decision. In a post on X, Steinberger warned developers against relying too heavily on the Antigravity backend and hinted that he might withdraw support for it within OpenClaw. “Pretty draconian from Google. Be careful out there if you use Antigravity. I guess I'll remove support (from OpenClaw),” he wrote. He added that while Anthropic users have also encountered issues in the past, that company’s approach felt different. “Even Anthropic pings me and is nice about issues. Google just bans?” he said.
The controversy has also revived broader questions about the power tech companies wield over access to AI systems. As artificial intelligence becomes increasingly embedded in professional workflows, sudden account suspensions can significantly disrupt developers and businesses that depend on these tools. The episode has prompted debate over whether companies like Google, OpenAI, and Anthropic should exercise unilateral authority in restricting user access, especially when AI platforms are evolving into essential infrastructure.
Complicating matters further is the competitive landscape. OpenClaw is now part of OpenAI, positioning it within the orbit of a major rival to both Google and Anthropic. While no official connection has been made between that acquisition and Google’s action, the overlap has fueled speculation about competitive tensions in the rapidly expanding AI market.
Google, however, maintains that its decision was operational rather than competitive. Responding shortly after Steinberger went public, Varun Mohan, who leads Google Antigravity, defended the company’s stance. “We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended,” he stated on X.
Although the restrictions currently affect a relatively small group of users, the episode underscores a larger issue facing the AI industry: balancing open innovation with responsible usage. As AI tools become more powerful and deeply integrated into everyday work, the question of who controls access — and under what conditions — is likely to remain a point of contention.