Google Gemini model family.
Hankweave added short aliases that route the same prompt and coding job into Anthropic's Agents SDK, Codex, or Gemini-style harnesses with unified logs and control. The release treats harness choice as a first-class variable instead of forcing teams to rebuild orchestration for each model stack.
Google launched Gemini 3.1 Flash Live in AI Studio, the API, and Gemini Live with stronger audio tool use, lower latency, and 128K context. Voice-agent teams should benchmark quality, latency, and thinking settings before switching.
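A minimal latency probe against the Live API with the google-genai SDK might look like the sketch below; the model ID is inferred from the announcement naming and has not been verified, and audio handling is reduced to timing the first chunk.

```python
import asyncio
import time

from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

MODEL = "gemini-3.1-flash-live"  # assumed ID; confirm the exact string in AI Studio


async def first_audio_latency(prompt: str) -> float:
    """Time from sending a turn to receiving the first audio bytes."""
    config = types.LiveConnectConfig(response_modalities=["AUDIO"])
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        start = time.perf_counter()
        await session.send_client_content(
            turns=types.Content(role="user", parts=[types.Part(text=prompt)])
        )
        async for message in session.receive():
            if message.data:  # first inline audio chunk from the server
                return time.perf_counter() - start
    return float("nan")


print(f"{asyncio.run(first_audio_latency('Read back a one-sentence weather report.')):.3f}s")
```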
ARC-AGI-3 swaps static puzzles for interactive game-like environments and posts initial frontier scores below 1%, with Gemini 3.1 Pro at 0.37%. Teams can use it to inspect agent reasoning, but score interpretation still depends heavily on the human-efficiency metric and the no-harness setup.
Lyria 3 Pro and Lyria 3 Clip are now in Gemini API and AI Studio, with Lyria 3 Pro priced at $0.08 per song and able to structure tracks into verses and choruses. That gives developers a clearer path to longer-form music features, with watermarking and prompt design built in.
Google extended its OpenAI compatibility layer so existing OpenAI SDK code can call Veo 3.1 video generation and Gemini image models with only base URL and model changes. It lowers migration cost for teams that want multimodal fallbacks without rewriting client code.
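In practice the switch is just the base URL and the model name; a sketch of the pattern with the OpenAI Python SDK is below. The chat and image calls follow Google's published compatibility examples, the model IDs are illustrative, and the Veo video surface is left out because its exact request shape isn't covered here.

```python
import os

from openai import OpenAI

# Point the stock OpenAI client at Gemini's OpenAI-compatible endpoint.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

# Text: unchanged chat.completions call, Gemini model name.
chat = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[{"role": "user", "content": "One line on multimodal fallbacks."}],
)
print(chat.choices[0].message.content)

# Images: unchanged images.generate call, an image model name instead.
image = client.images.generate(
    model="imagen-3.0-generate-002",  # swap in whichever image model you target
    prompt="A lighthouse at dusk, watercolor",
    response_format="b64_json",
    n=1,
)
print(len(image.data[0].b64_json))
```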
Google rebuilt AI Studio Build around the Antigravity coding agent, one-click Firebase auth and databases, multiplayer backends, and persistent sessions. It pushes AI Studio closer to production app scaffolding and gives Firebase Studio users a clear migration path.
Google now lets Gemini chain built-in tools like Search, Maps, File Search, and URL Context with custom functions inside a single API call. This removes orchestration glue for agent builders and brings Maps grounding into AI Studio for faster prototyping.
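A sketch of that mixed tool list with the google-genai Python SDK, assuming the chaining works as announced: the model ID and the get_menu function are placeholders, File Search is omitted because its tool config isn't spelled out here, and the GoogleMaps tool field follows the Maps grounding docs.

```python
from google import genai
from google.genai import types

client = genai.Client()

# Hypothetical custom function to sit alongside Google's built-in tools.
get_menu = types.FunctionDeclaration(
    name="get_menu",
    description="Return today's menu for a restaurant, looked up by place name.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={"place_name": types.Schema(type=types.Type.STRING)},
        required=["place_name"],
    ),
)

response = client.models.generate_content(
    model="gemini-3.1-pro",  # assumed ID; use whatever model you actually target
    contents="Find a well-reviewed ramen spot near Shibuya Station and show today's menu.",
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(google_search=types.GoogleSearch()),   # built-in Search grounding
            types.Tool(google_maps=types.GoogleMaps()),       # built-in Maps grounding
            types.Tool(url_context=types.UrlContext()),       # fetch and read URLs
            types.Tool(function_declarations=[get_menu]),     # your own function
        ],
    ),
)
print(response.text)
```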
Google is adding Grounding with Google Maps to AI Studio’s UI, and a Google reply notes that Maps grounding is already available in the API. If you build location-heavy Gemini apps, start designing around map lookups instead of stitching together search and geocoding manually.
Reports say Meta pushed Avocado from March to at least May after internal reasoning, coding, and writing tests missed current frontier targets. Expect more delayed launches at the top end, and watch for products that route some features through competitor models.
Google launched Gemini Embedding 2 in preview, unifying multiple modalities and 100+ languages in one embedding space with flexible output dimensions. Try it to simplify cross-modal RAG and search pipelines, but compare it with late-interaction systems before committing.
Google AI Studio now lets developers set experimental per-project spend caps for Gemini API usage. Use it as a native billing guardrail, but account for a roughly 10-minute enforcement lag and possible batch-job overshoot.
Google put Gemini Embedding 2 into public preview with one vector space for text, images, video, audio, and PDFs, plus 3072, 1536, and 768 output sizes. Use it to replace multi-model retrieval pipelines with one API for RAG and cross-media search.
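A minimal retrieval-side sketch with the google-genai SDK; the model ID is assumed from the preview naming, only the text path is shown, and the multimodal input shape should be taken from the preview docs.

```python
from google import genai
from google.genai import types

client = genai.Client()

MODEL = "gemini-embedding-2"  # assumed preview ID; check the docs for the exact string

result = client.models.embed_content(
    model=MODEL,
    contents=["quarterly revenue summary", "contract renewal terms"],
    config=types.EmbedContentConfig(
        output_dimensionality=768,          # 3072 / 1536 / 768 per the announcement
        task_type="RETRIEVAL_DOCUMENT",     # use RETRIEVAL_QUERY on the query side
    ),
)

for embedding in result.embeddings:
    print(len(embedding.values))  # should match the requested dimensionality
```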
Google rolled Gemini deeper into Docs, Sheets, Slides, and Drive, including spreadsheet task execution, editable slide generation, and grounded file search. Teams should test approval flows, context sourcing, and current feature limits before broad rollout.
A Google bot-authored LiteRT-LM pull request references Gemma4 and AICore NPU support, while multiple posts claim the largest version comes in around 120B total parameters with 15B active. Engineers targeting on-device inference should wait for a formal model card before locking plans.