Google Developers Blog
https://developers.googleblog.com/rss/
Updates on changes and additions to the Google Developers Blog.
en-us · 06 Dec 2025 03:45:53 +0000

**Architecting efficient context-aware multi-agent framework for production**
https://developers.googleblog.com/architecting-efficient-context-aware-multi-agent-framework-for-production/
ADK introduces **Context Engineering** to scale AI agents beyond large context windows. It treats context as a compiled view over a tiered, stateful system (**Session, Memory, Artifacts**). This architecture uses explicit processors for transformation, enables efficient compaction and caching, and allows strict, scoped context handoffs in multi-agent workflows to ensure reliability and cost-effectiveness in production.

**Announcing the Data Commons Gemini CLI extension**
https://developers.googleblog.com/announcing-the-data-commons-gemini-cli-extension/
The new Data Commons extension for the Gemini CLI makes public data easier to access. Users can ask complex, natural-language questions against Data Commons' public datasets, grounding LLM responses in authoritative sources to reduce AI hallucinations. Data Commons is an organized library of public data from sources such as the UN and the World Bank. The extension enables instant data analysis, exploration, and integration with other data-related extensions.

**New Gemini API updates for Gemini 3**
https://developers.googleblog.com/new-gemini-api-updates-for-gemini-3/
Gemini 3 is available via the API with updates for developers: a new `thinking_level` parameter for reasoning-depth control, `media_resolution` for multimodal processing, and enforced `Thought Signatures` for agentic workflows, especially with function calling and image generation.
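To make the new controls concrete, here is a hypothetical sketch of a `generateContent`-style request body exercising them. The camel-case field names (`thinkingLevel`, `mediaResolution`) are assumptions extrapolated from the parameter names in the announcement, not a verified API reference.

```python
# Hypothetical Gemini 3 request payload; field names are assumptions based
# on the announcement, not a verified API reference.
import json

def build_request(prompt: str, thinking_level: str = "high",
                  media_resolution: str = "media_resolution_high") -> dict:
    """Assemble a generateContent-style payload with the new Gemini 3 knobs."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # thinking_level trades reasoning depth against latency and cost.
            "thinkingLevel": thinking_level,
            # media_resolution controls per-request multimodal token spend.
            "mediaResolution": media_resolution,
            # Best practice from the post: keep the default temperature.
            "temperature": 1.0,
        },
    }

payload = build_request("Summarize this contract.")
print(json.dumps(payload, indent=2))
```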
It also introduces support for combining Google Search/URL Grounding with Structured Outputs, and new usage-based pricing for Grounding. Best practices, such as keeping the default temperature, are advised for optimal results.

**Unlocking Peak Performance on Qualcomm NPU with LiteRT**
https://developers.googleblog.com/unlocking-peak-performance-on-qualcomm-npu-with-litert/
LiteRT's new Qualcomm AI Engine Direct (QNN) Accelerator unlocks dedicated NPU power for on-device GenAI on Android. It offers a unified mobile deployment workflow, state-of-the-art performance (up to a 100x speedup over CPU), and full model delegation. This enables smooth, real-time AI experiences, with FastVLM-0.5B achieving over 11,000 tokens/sec prefill on the Snapdragon 8 Elite Gen 5 NPU.

**Build with Google Antigravity, our new agentic development platform**
https://developers.googleblog.com/build-with-google-antigravity-our-new-agentic-development-platform/
Introducing Google Antigravity, a new agentic development platform for orchestrating code. It combines an AI-powered Editor View with a Manager Surface to deploy agents that autonomously plan, execute, and verify complex tasks across your editor, terminal, and browser. Agents communicate progress via Artifacts (screenshots, recordings) for easy verification. Available now in public preview.

**Building with Gemini 3 in Jules**
https://developers.googleblog.com/jules-gemini-3/
Jules, an always-on, multi-step software development agent, now features Gemini 3, offering clearer reasoning and better reliability. Recent improvements include parallel CLI runs, a stable API, and safer Git handling. Upcoming features include directory attachment without GitHub and automatic PR creation.
Jules aims to reduce software-writing overhead so developers can focus on building.

**Building AI Agents with Google Gemini 3 and Open Source Frameworks**
https://developers.googleblog.com/building-ai-agents-with-google-gemini-3-and-open-source-frameworks/
Gemini 3 Pro Preview is introduced as a powerful agentic model for complex, (semi-)autonomous workflows. New agentic features include `thinking_level` for reasoning control, stateful tool use via Thought Signatures, and `media_resolution` for multimodal fidelity. It has day-0 support for open-source frameworks such as LangChain, AI SDK, LlamaIndex, Pydantic AI, and n8n. Best practices include simplifying prompts and keeping the temperature at 1.0.

**Building production AI on Google Cloud TPUs with JAX**
https://developers.googleblog.com/building-production-ai-on-google-cloud-tpus-with-jax/
The JAX AI Stack is a modular, industrial-grade, end-to-end machine learning platform built on the core JAX library and co-designed with Cloud TPUs. Key components include JAX, Flax, Optax, and Orbax for foundational model development, plus an extended ecosystem covering the full ML lifecycle through production. This integration provides a powerful, scalable foundation for AI development, delivering significant performance advantages.

**5 things to try with Gemini 3 Pro in Gemini CLI**
https://developers.googleblog.com/5-things-to-try-with-gemini-3-pro-in-gemini-cli/
Gemini 3 Pro is now integrated into Gemini CLI, unlocking state-of-the-art reasoning, agentic coding, and advanced tool use for enhanced developer productivity. It's available now for Google AI Ultra and paid Gemini API key subscribers (upgrade the CLI to 0.16.x).
Features include generating 3D apps and code from visual sketches, running complex shell commands, creating documentation, and debugging live Cloud Run services.

**Making the terminal beautiful one pixel at a time**
https://developers.googleblog.com/making-the-terminal-beautiful-one-pixel-at-a-time/
Google has launched the redesigned **Android AI Sample Catalog**, a dedicated, open-source application to inspire and educate Android developers building AI-powered apps. It showcases examples using both on-device models (Gemini Nano via the ML Kit GenAI API) and cloud models (via the Firebase AI Logic SDK), including image generation with Imagen, on-device summarization, and a "Chat with Nano Banana" chatbot. The code is easy to copy and paste so developers can quickly start their own projects.

**Introducing Metrax: performant, efficient, and robust model evaluation metrics in JAX**
https://developers.googleblog.com/introducing-metrax-performant-efficient-and-robust-model-evaluation-metrics-in-jax/
Metrax is a high-performance JAX-based metrics library developed by Google. It standardizes model evaluation by offering robust, efficient metrics for classification, NLP, and vision, eliminating manual re-implementation after migrating from TensorFlow. Key strengths include parallel computation of "at K" metrics (e.g., PrecisionAtK) for multiple K values and tight integration with the JAX AI Stack, leveraging JAX's performance features.
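As an illustration of what the "at K" metric family computes, here is precision-at-K in plain Python for several K values at once. This shows only the underlying definition; it is not the Metrax API, which computes such metrics vectorized in JAX.

```python
# Plain-Python illustration of precision-at-K for multiple K values.
# This is the underlying definition only, NOT the Metrax API.

def precision_at_k(scores, relevant, ks):
    """For each K in ks: fraction of the top-K scored items that are relevant."""
    # Rank item indices by descending model score.
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    out = {}
    for k in ks:
        top_k = ranked[:k]
        out[k] = sum(1 for i in top_k if i in relevant) / k
    return out

scores = [0.9, 0.1, 0.8, 0.4]   # model scores per item
relevant = {0, 3}               # ground-truth relevant item indices
print(precision_at_k(scores, relevant, ks=[1, 2, 4]))
# → {1: 1.0, 2: 0.5, 4: 0.5}
```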
It is open-source on GitHub.

**Introducing Code Wiki: Accelerating your code understanding**
https://developers.googleblog.com/introducing-code-wiki-accelerating-your-code-understanding/
Code Wiki is a new platform that tackles the bottleneck of reading existing code by providing an automated, continuously updated, structured wiki for code repositories. It features hyperlinked documentation, a Gemini-powered chat agent that understands your repo, and automated diagrams. A public preview is available for open-source projects, and a Gemini CLI extension is coming soon for secure use on private repos.

**Google Colab is Coming to VS Code**
https://developers.googleblog.com/google-colab-is-coming-to-vs-code/
Google Colab has launched an official VS Code extension, bridging the gap between the popular code editor and the AI/ML platform. The extension combines VS Code's powerful development environment with Colab's seamless access to high-powered runtimes (GPUs/TPUs), letting users connect local notebooks to Colab. This aims to meet developers where they are and bring the best of both worlds together.

**Announcing User Simulation in ADK Evaluation**
https://developers.googleblog.com/announcing-user-simulation-in-adk-evaluation/
The new **User Simulation** feature in the Agent Development Kit (ADK) replaces rigid, brittle manual test scripts with dynamic, LLM-powered conversation generation. Developers define a high-level `conversation_plan`, and the simulator handles the multi-turn interaction needed to achieve the goal.
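The plan-driven simulation loop can be sketched as follows. This is a toy sketch of the idea only: the names (`ConversationPlan`, `run_simulation`, `toy_agent`) are illustrative inventions, not the ADK API, and the real simulator generates user turns with an LLM rather than from a canned list.

```python
# Toy sketch of user simulation: a high-level plan drives simulated user
# turns through a multi-turn exchange until the goal is reached.
# Names are illustrative only, NOT the ADK API.
from dataclasses import dataclass, field

@dataclass
class ConversationPlan:
    goal: str
    user_turns: list  # stand-in for LLM-generated user messages

@dataclass
class Transcript:
    turns: list = field(default_factory=list)

def run_simulation(plan, agent):
    """Drive the agent with simulated user turns; stop once the goal appears."""
    transcript = Transcript()
    for user_msg in plan.user_turns:
        reply = agent(user_msg)
        transcript.turns.append((user_msg, reply))
        if plan.goal in reply:  # crude goal check, sufficient for the sketch
            break
    return transcript

# A trivial "agent" that confirms a booking once a day is named.
def toy_agent(msg):
    return "booking confirmed" if "Friday" in msg else "Which day works?"

plan = ConversationPlan(goal="confirmed",
                        user_turns=["I need a table for two.", "Friday at 7."])
t = run_simulation(plan, toy_agent)
print(len(t.turns))  # → 2
```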
This dramatically reduces test creation time, builds more resilient tests, and creates a reliable regression suite for AI agents.

**Announcing the Agent Development Kit for Go: Build Powerful AI Agents with Your Favorite Languages**
https://developers.googleblog.com/announcing-the-agent-development-kit-for-go-build-powerful-ai-agents-with-your-favorite-languages/
The Agent Development Kit (ADK), an open-source, code-first toolkit for building powerful and sophisticated AI agents, now supports Go. ADK moves LLM orchestration and agent behavior directly into your code, giving you robust debugging, versioning, and deployment freedom. ADK for Go is idiomatic and performant, leveraging Go's strengths, and includes support for more than 30 databases and the Agent2Agent (A2A) protocol for collaborative multi-agent systems. Start building today!

**Agent Garden - Samples for learning, discovering and building**
https://developers.googleblog.com/agent-garden-samples-for-learning-discovering-and-building/
Agent Garden is now available to all users to simplify AI agent creation and deployment with the Agent Development Kit (ADK). It provides curated agent samples, one-click deployment via the Agent Starter Pack, and customization through Firebase Studio. It helps developers tackle complex business challenges and multi-agent workflows, with Renault Group cited as an early success story.

**Beyond Request-Response: Architecting Real-time Bidirectional Streaming Multi-agent System**
https://developers.googleblog.com/beyond-request-response-architecting-real-time-bidirectional-streaming-multi-agent-system/
The blog post argues that the request-response model fails for advanced multi-agent AI.
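Where request-response blocks on one exchange at a time, a bidirectional stream lets both sides send and receive concurrently. A minimal asyncio sketch of that pattern (illustrative only, not ADK code):

```python
# Minimal asyncio sketch of bidirectional streaming: a "user" and an "agent"
# exchange messages concurrently over two queues, with the reply streamed
# back in chunks. Illustrative only — this is not ADK code.
import asyncio

async def agent(inbox: asyncio.Queue, outbox: asyncio.Queue):
    while True:
        msg = await inbox.get()
        if msg is None:                    # end-of-stream sentinel
            await outbox.put(None)
            return
        for chunk in msg.upper().split():  # stream the reply chunk by chunk
            await outbox.put(chunk)

async def main():
    to_agent, from_agent = asyncio.Queue(), asyncio.Queue()
    task = asyncio.create_task(agent(to_agent, from_agent))
    # The user keeps sending while the agent is free to respond concurrently.
    await to_agent.put("hello streaming world")
    await to_agent.put(None)
    chunks = []
    while (c := await from_agent.get()) is not None:
        chunks.append(c)
    await task
    return chunks

print(asyncio.run(main()))  # → ['HELLO', 'STREAMING', 'WORLD']
```

Because each side only awaits its queue, neither blocks the other; swapping the queues for network streams gives the same concurrency shape at the transport level.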
It advocates a real-time bidirectional streaming architecture, implemented by the Agent Development Kit (ADK). This streaming model enables true concurrency, natural interruptibility, and unified multimodal processing. ADK's core features are real-time I/O management, stateful sessions for agent handoffs, and streaming-native tools.

**Introducing the Jules extension for Gemini CLI**
https://developers.googleblog.com/introducing-the-jules-extension-for-gemini-cli/
Introducing the Jules extension for Gemini CLI, an autonomous sidekick for developers. It accelerates coding workflows by offloading asynchronous work, such as bug fixes and changes in new branches, to Jules while you stay in flow with Gemini CLI. Get started by installing the extension and using the `/jules` command to initiate tasks and check their status.

**Say hello to a new level of interactivity in Gemini CLI**
https://developers.googleblog.com/say-hello-to-a-new-level-of-interactivity-in-gemini-cli/
We're excited to announce an enhancement to Gemini CLI that makes your workflow more powerful a...

**Introducing Coral NPU: A full-stack platform for Edge AI**
https://developers.googleblog.com/introducing-coral-npu-a-full-stack-platform-for-edge-ai/
Coral NPU is a full-stack platform for Edge AI that addresses performance, fragmentation, and user-trust deficits. It is an AI-first architecture, prioritizing ML matrix engines, and offers a unified developer experience. Designed for ultra-low-power, always-on AI in wearables and IoT devices, it enables contextual awareness, audio/image processing, and user interaction with hardware-enforced privacy.
Synaptics is the first partner to implement Coral NPU.