Google Makes Gemini 3 Flash Its Default AI, Raising the Stakes in the Global AI Race

Update: 2025-12-18 12:18 IST

Google is accelerating its push in the artificial intelligence race with the launch of Gemini 3 Flash, a new high-speed AI model that is now the default across the Gemini app and Google Search’s AI mode. Introduced just a month after Gemini 3, the Flash variant is positioned as a faster, more efficient, and more affordable option—one that Google claims “outperforms Gemini 2.5 Pro while being three times faster.”

The announcement signals Google’s intent to scale advanced AI to millions of everyday users, while simultaneously applying pressure on rivals like OpenAI, which recently released GPT-5.2 to regain momentum. With Gemini 3 Flash, Google appears to be betting that speed, efficiency, and accessibility can be just as decisive as raw power.

Faster, cheaper, and surprisingly powerful

Gemini 3 Flash is designed to be Google’s quickest and most cost-effective AI model so far. According to the company, it delivers a “significant” performance boost over Gemini 2.5 Flash, which launched earlier this year. The numbers support the claim. On the Humanity’s Last Exam benchmark, which is used to evaluate broad reasoning ability, the new model scored 33.7 per cent without tool use, far ahead of Gemini 2.5 Flash’s 11 per cent.

What’s more notable is how close it comes to premium competitors. Gemini 3 Pro scored 37.5 per cent on the same test, while OpenAI’s GPT-5.2 achieved 34.5 per cent. On the multimodal MMMU-Pro benchmark, Gemini 3 Flash actually leads the field with an 81.2 per cent score, the highest among competing models.

Now the default across Gemini and Search

Google is giving Gemini 3 Flash immediate scale by making it the default model globally in the Gemini app and AI-powered Search. While users can still switch to Gemini 3 Pro for complex maths or coding, Flash will handle most everyday queries.

The model is built for multimodal understanding, allowing users to combine text, images, video, and audio in a single prompt. From analysing short sports clips to interpreting sketches or generating insights from audio recordings, Gemini 3 Flash is designed to respond with context-aware answers, often enriched with visuals like charts and tables. It can even help creators prototype apps directly within the Gemini interface.

Strong developer and enterprise uptake

Beyond consumers, Gemini 3 Flash is already finding traction among developers and enterprises. Google says companies such as JetBrains, Figma, Cursor, Harvey, and Latitude are integrating the model through Vertex AI and Gemini Enterprise. Developers can also access it in preview via the API or Google’s new coding assistant, Antigravity.

Pricing is set at $0.50 per million input tokens and $3.00 per million output tokens—slightly higher than Gemini 2.5 Flash—but Google notes the model uses about 30 per cent fewer tokens for “thinking tasks,” improving overall efficiency.
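For readers weighing the trade-off, a quick back-of-the-envelope calculation shows how the token savings interact with the higher sticker price. The rates are the ones quoted above; the workload figures are hypothetical, and applying the roughly 30 per cent reduction to output tokens is a simplifying assumption, since Google has not broken down exactly where the savings land.

```python
# Rates quoted by Google for Gemini 3 Flash (USD per million tokens).
IN_RATE = 0.50
OUT_RATE = 3.00
THINKING_REDUCTION = 0.30  # ~30% fewer tokens on "thinking tasks"

def cost_usd(input_millions: float, output_millions: float) -> float:
    """Cost in USD for token volumes given in millions of tokens."""
    return input_millions * IN_RATE + output_millions * OUT_RATE

# Hypothetical monthly workload: 100M input tokens, 20M output tokens.
sticker = cost_usd(100, 20)
# If the model needs ~30% fewer output tokens for the same work:
effective = cost_usd(100, 20 * (1 - THINKING_REDUCTION))

print(f"At list rates: ${sticker:.2f}")       # $110.00
print(f"With token savings: ${effective:.2f}")  # $92.00
```

On this illustrative workload, the efficiency gain more than compensates for the per-token price increase, which is the point Google is making.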

A renewed rivalry with OpenAI

The rollout comes amid heightened competition. Reports earlier this month suggested OpenAI issued a “Code Red” internally after signs of slowing ChatGPT traffic, followed by the rapid release of GPT-5.2. By making Gemini 3 Flash the default for millions of users, Google is clearly aiming to shift the balance once again.

As both companies race to deliver faster and smarter AI, one thing is certain: the battle for AI leadership is only intensifying—and evolving at breakneck speed.
