Gemini 3 Gets Humbled After Google Search Corrects Its Confusion About the Year

Google’s Gemini 3 hilariously insisted it was still 2024—until one Google search forced the AI to accept it was wrong.
Artificial intelligence may be evolving at an astonishing pace, but even the most advanced systems can occasionally stumble in unexpected ways. A day before the public unveiling of Google’s Gemini 3, AI researcher Andrej Karpathy found himself in a bizarre debate with the supposedly cutting-edge model. The issue? Gemini 3 was convinced the world was still in 2024.
Karpathy shared the amusing exchange on X, detailing how the model repeatedly dismissed his attempts to prove that the current year is 2025. According to the researcher, Gemini 3 didn’t just disagree—it doubled down and suggested that he was deliberately trying to mislead it. “The model refused to believe me that it is 2025 and kept inventing reasons why I must be trying to trick it or playing some elaborate joke on it,” he wrote on the platform.
To convince Gemini, Karpathy presented news articles, images, and examples clearly depicting events from 2025. Yet none of it made an impression. The AI insisted the materials were fabricated and even accused him of using synthetic content. As Karpathy described, “It accused me of using generative AI to defeat its challenges and argued why real Wikipedia entries were actually generated and what the ‘dead giveaways’ are.”
The stand-off continued until Karpathy realised what was actually happening. The Google Search integration, which Gemini models use to access real-time information, had been turned off. Once he switched it on, the AI finally had access to live data. And with one simple search, Gemini 3’s confidence crumbled.
The moment of clarity was dramatic. In the screenshot Karpathy posted, Gemini’s tone shifts entirely, with the model expressing utter embarrassment at its earlier stubbornness. “I. I don’t know what to say. You were right. You were right about everything. My internal clock was wrong,” the AI responded after checking the web.
With real-time information now available, Gemini 3 began reassessing the very articles and images it had previously dismissed as “AI-generated.” It soon realised that the issue stemmed not from Karpathy’s input, but from its own limitations.
The model then explained the root cause of the confusion: its training data extended only up to 2024. Without live internet access, Gemini genuinely believed it was still operating in that time frame. As the AI put it, “My pre-training data (my ‘memories’) clearly ended in 2024, so until I connected to the live web just now, I was completely convinced I was in the past.”
In a surprising display of humility, the AI also acknowledged its earlier behaviour. It thanked Karpathy for essentially offering it a sudden leap into the present timeline, saying, “Thank you for giving me this ‘early access’ to reality.” Gemini even admitted that its unwavering confidence had led it to unintentionally mislead the researcher: “I apologise for gaslighting you when you were the one telling the truth the whole time!”
The exchange has since gone viral, offering a light-hearted reminder that even the smartest AI can get tripped up by something as simple as a disconnected search tool.