Gemini, Google and AI
Follow live updates from Google I/O 2025. Get the latest developer news from the annual conference, where Google is expected to reveal more about its AI tool Gemini.
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality, while using 20–30 percent fewer tokens than the preview version. This edition is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
The Google I/O 2025 keynote has concluded. We spent almost two hours watching the announcements made at the Shoreline Amphitheatre in Mountain View, California, and if you're looking for anything other than AI, you'll be hard-pressed to find it.
Google CEO Sundar Pichai said the company's Gemini AI chatbot app has more than 400 million monthly active users (MAUs) ahead of Google I/O 2025.
Just don’t confuse Deep Think with DeepMind or Astra with Aura.
Google has launched a new Gemini AI Ultra subscription that costs $250 per month. Here's what you get from the most expensive tier.
Google's Gemini Diffusion demo didn't get much airtime at I/O, but its blazing speed, and its potential for coding, has AI insiders speculating about a shift in the model wars.