Optimize prompts
This document describes how to use the Vertex AI prompt optimizer to automatically optimize prompt performance by improving the system instructions for a set of prompts.
The Vertex AI prompt optimizer can help you improve your prompts quickly at scale, without manually rewriting system instructions or individual prompts. This is especially useful when you want to use system instructions and prompts that were written for one model with a different model.
We offer two approaches for optimizing prompts:
- The zero-shot optimizer is a real-time, low-latency optimizer that improves a single prompt or system instruction template. It is fast and requires no setup beyond providing your original prompt or system instruction.
- The data-driven optimizer is a batch, task-level, iterative optimizer that improves prompts by evaluating the target model's responses to a set of labeled sample prompts against evaluation metrics that you specify. It's suited to more advanced optimization, letting you configure the optimization parameters and provide a few labeled samples.
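The data-driven optimizer's core idea — score candidate system instructions against labeled samples and keep the best one — can be sketched conceptually. This is not the Vertex AI SDK API; every name below (`score_candidate`, the exact-match metric, the stand-in `generate` function) is illustrative only.

```python
# Conceptual sketch of a data-driven optimization round (not the real
# Vertex AI SDK): candidate system instructions are scored against
# labeled sample prompts, and the best-scoring candidate is selected.

def exact_match(response: str, label: str) -> float:
    """Toy evaluation metric: 1.0 if the response equals the label."""
    return 1.0 if response.strip() == label.strip() else 0.0

def score_candidate(instruction, samples, generate):
    """Average metric score of one candidate instruction over the samples."""
    scores = [exact_match(generate(instruction, prompt), label)
              for prompt, label in samples]
    return sum(scores) / len(scores)

def optimize(candidates, samples, generate):
    """Return the candidate system instruction with the highest score."""
    return max(candidates, key=lambda c: score_candidate(c, samples, generate))

# Stand-in for a target-model call; a real run would query the model.
def fake_generate(instruction, prompt):
    return prompt.upper() if "uppercase" in instruction else prompt

samples = [("hello", "HELLO"), ("world", "WORLD")]
candidates = ["Echo the input.", "Reply in uppercase."]
best = optimize(candidates, samples, fake_generate)
print(best)  # the candidate that best matches the labeled samples
```

A real run replaces `fake_generate` with calls to your selected target model and `exact_match` with the evaluation metrics you specify, iterating over many generated candidates rather than a fixed list.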
These methods are available to users through the user interface (UI) or the Vertex AI SDK.
Supported target models for optimization
The zero-shot optimizer is model-independent and can improve prompts for any Google model. It also provides a `gemini_nano` mode to optimize prompts specifically for on-device models, such as Gemini Nano and Gemma 3n E4B.
The data-driven optimizer supports generally available Gemini models, as well as custom models deployed locally or from the Vertex AI Model Garden.
What's next
Learn about the zero-shot optimizer
Learn about the data-driven optimizer