Google Rolls Out Real-Time AI Video Features for Gemini

Google has begun rolling out real-time AI video features to Gemini Live, enabling the assistant to interpret a phone's screen and camera feed as the user views it. According to Google spokesperson Alex Joseph, these capabilities are now available to some Google One AI Premium subscribers. The update builds on Google's earlier “Project Astra” initiative, first showcased nearly a year ago.
A Reddit user recently reported spotting the screen-reading feature on a Xiaomi device, as noted by 9to5Google. The same user later uploaded a video showing Gemini reading the contents of their screen, a capability Google announced in March as part of its rollout to Gemini Advanced subscribers.
The other major feature now rolling out is live video interpretation. With it, Gemini can analyze a smartphone’s camera feed in real time and answer questions about what it sees. A Google demonstration video shows a user asking Gemini for advice, such as choosing a suitable paint colour for freshly glazed pottery.
This rollout underscores Google’s lead in AI-driven virtual assistants. While Amazon is still preparing a limited launch of its Alexa Plus upgrade and Apple has delayed its enhanced Siri, Google is already shipping these capabilities to users. Samsung, though it still offers Bixby, has also made Gemini the default AI assistant on its devices.