The latest AI news we announced in June
For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we're doing a regular roundup of Google's most recent AI news.
Here’s a look back at some of our AI announcements from June.
June brought the solstice, roughly the halfway point of the calendar year. What better opportunity for us to talk about the latest ways Google AI is revolutionizing (ahem) how people build and create, learn, search for information and even make scientific breakthroughs?
We expanded our Gemini 2.5 family of models. Along with making Gemini 2.5 Flash and Pro generally available for everyone, we introduced 2.5 Flash-Lite, our most cost-efficient and fastest 2.5 model yet.
We introduced Gemini CLI, an open-source AI agent for developers. Gemini CLI brings Gemini directly into your terminal for coding, problem-solving and task management. You can access Gemini 2.5 Pro free of charge with a personal Google account, or use a Google AI Studio or Vertex AI key for more access.
We released Imagen 4 for developers in the Gemini API and Google AI Studio. Imagen 4, our best text-to-image model yet, is now available for paid preview in the Gemini API and for limited free testing in Google AI Studio. Imagen 4 offers significantly improved text rendering over our prior image models and will be generally available in the coming weeks.
We shared a closer look inside AI Mode. AI Mode is our most powerful AI search, and this month we shared how we brought it to life (and your fingertips) in a look at the history of its development.
We introduced a new way to search with your voice in AI Mode. Search Live with voice lets you talk, listen and explore in real time with AI Mode in the Google app for Android and iOS. You can now have free-flowing, back-and-forth voice conversations with Search and explore links from across the web. That makes it easy to multitask, like finding real-time tips for a trip while you're packing for it. And with helpful transcripts saved in your AI Mode history, you can always revisit your searches and dive deeper on the web.
We released interactive charts in AI Mode for financial data, stocks and mutual funds. With interactive chart visualizations in AI Mode, you can compare and analyze data over a specific time period, get an interactive graph with a comprehensive explanation of your question, and ask follow-up questions. These visualizations are powered by our custom Gemini model's advanced multi-step reasoning and multimodal capabilities.
We improved Ask Photos and brought it to more Google Photos users. Ask Photos uses Gemini models to help you find your photos with complex queries like “what did I eat on my trip to Barcelona?” At the same time, it now returns more photos faster for simpler searches like “beach” or “dogs.”
We released our most advanced Chromebook Plus yet, with new helpful AI features. The new Lenovo Chromebook Plus 14 launched with several AI features to help you get things done, like Smart grouping to organize your open tabs and documents, AI image editing in the Gallery app, and the ability to extract text from images so you can edit it. (Plus, it comes with custom wallpapers of Jupiter, created using generative AI in partnership with NASA especially for the Lenovo Chromebook Plus 14.)
We created a new way to share your NotebookLM notebooks publicly. Now you can share a notebook publicly with anyone via a single link, whether it's an overview of your nonprofit's projects, product manuals for your business or study guides for your class.
We introduced Gemini for Education to help students and educators. Gemini for Education is a version of the Gemini app built for the unique needs of the educational community. And at this year's International Society for Technology in Education (ISTE) conference, we shared how this new AI solution can help every learner and educator, from personalizing learning for students to helping teachers generate compelling content.
We introduced AlphaGenome: AI to better understand the human genome. Our new, unifying DNA sequence model advances regulatory variant-effect prediction and promises to shed new light on genome function. To advance scientific research, we’re making AlphaGenome available in preview via our AlphaGenome API for non-commercial research, with plans to release the model in the future.
We launched Weather Lab to support better tropical cyclone prediction with AI. Google DeepMind and Google Research’s Weather Lab is an interactive website for sharing our AI weather models. Weather Lab features our experimental cyclone predictions, and we’re partnering with the U.S. National Hurricane Center to support their forecasts and warnings this cyclone season.
We shared how AI breakthroughs are bringing hope to cancer research and treatment. Our President and Chief Investment Officer, Ruth Porat, spoke to the American Society of Clinical Oncology and discussed how Google’s AI research shows promise for early detection and treatment of cancer.
We introduced Gemini Robotics On-Device to bring AI to robots. In March, we shared how Gemini Robotics, our most advanced VLA (vision language action) model, brings multimodal reasoning and real-world understanding to machines in the physical world. Gemini Robotics On-Device is the next step: it brings strong general-purpose dexterity and task generalization to robots, and it's optimized to run efficiently on the robot itself. We also shared how Gemini 2.5 can enhance robotics and embodied intelligence.