Module llm (2.21.0)

LLM models.

Classes

Claude3TextGenerator

Claude3TextGenerator(
 *,
 model_name: typing.Optional[
 typing.Literal[
 "claude-3-sonnet", "claude-3-haiku", "claude-3-5-sonnet", "claude-3-opus"
 ]
 ] = None,
 session: typing.Optional[bigframes.session.Session] = None,
 connection_name: typing.Optional[str] = None
)

Claude3 text generator LLM model.

Before use, go to the Google Cloud Console -> Vertex AI -> Model Garden page to enable the models. You must have the Consumer Procurement Entitlement Manager Identity and Access Management (IAM) role to enable the models. See https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#grant-permissions

The models are only available in specific regions. See https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#regions for details.

Parameters
Name Description
model_name str, Defaults to "claude-3-sonnet"

The model for natural language tasks. Possible values are "claude-3-sonnet", "claude-3-haiku", "claude-3-5-sonnet", and "claude-3-opus". "claude-3-sonnet" (deprecated) is Anthropic's balanced combination of skill and speed, engineered to be dependable for scaled AI deployments across a variety of use cases. "claude-3-haiku" is Anthropic's fastest, most compact vision and text model, built for near-instant responses to simple queries and seamless, human-like AI experiences. "claude-3-5-sonnet" is Anthropic's most powerful AI model; it maintains the speed and cost of the mid-tier Claude 3 Sonnet. "claude-3-opus" is Anthropic's second-most powerful AI model, with strong performance on highly complex tasks. See https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude#available-claude-models for the available models. If no value is provided, "claude-3-sonnet" is used by default and a warning is issued.

session bigframes.Session or None

BQ session to create the model. If None, use the global default session.

connection_name str or None

Connection to use with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>.
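
A minimal usage sketch (not part of the reference signatures above). It assumes Claude3TextGenerator is imported from bigframes.ml.llm, that predict accepts a DataFrame whose "prompt" column holds the prompts, and that the connection name shown is a hypothetical placeholder for a connection with access to the enabled Claude model.

import bigframes.pandas as bpd
from bigframes.ml.llm import Claude3TextGenerator

# Prompts to send to the model; the "prompt" column name is assumed here.
df = bpd.DataFrame({"prompt": ["What is BigQuery?", "Write a haiku about data."]})

# "my-project.us.my-connection" is a hypothetical connection name; replace it
# with a connection that can reach the enabled Claude model.
model = Claude3TextGenerator(
    model_name="claude-3-haiku",
    connection_name="my-project.us.my-connection",
)

# predict returns a BigQuery DataFrames DataFrame with one generated answer per prompt.
results = model.predict(df)
print(results.head())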

GeminiTextGenerator

GeminiTextGenerator(
 *,
 model_name: typing.Optional[
 typing.Literal[
 "gemini-1.5-pro-preview-0514",
 "gemini-1.5-flash-preview-0514",
 "gemini-1.5-pro-001",
 "gemini-1.5-pro-002",
 "gemini-1.5-flash-001",
 "gemini-1.5-flash-002",
 "gemini-2.0-flash-exp",
 "gemini-2.0-flash-001",
 "gemini-2.0-flash-lite-001",
 ]
 ] = None,
 session: typing.Optional[bigframes.session.Session] = None,
 connection_name: typing.Optional[str] = None,
 max_iterations: int = 300
)

Gemini text generator LLM model.

Parameters
Name Description
model_name str, Defaults to "gemini-2.0-flash-001"

The model for natural language tasks. Accepted values are "gemini-1.5-pro-preview-0514", "gemini-1.5-flash-preview-0514", "gemini-1.5-pro-001", "gemini-1.5-pro-002", "gemini-1.5-flash-001", "gemini-1.5-flash-002", "gemini-2.0-flash-exp", "gemini-2.0-flash-lite-001", and "gemini-2.0-flash-001". If no setting is provided, "gemini-2.0-flash-001" will be used by default and a warning will be issued.

session bigframes.Session or None

BQ session to create the model. If None, use the global default session.

connection_name str or None

Connection to use with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>.

max_iterations Optional[int], Defaults to 300

The number of steps to run when performing supervised tuning.
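
A sketch of typical use, under the same assumptions as the Claude example above (the bigframes.ml.llm import path and a "prompt" input column); the default session and connection are used when those arguments are omitted.

import bigframes.pandas as bpd
from bigframes.ml.llm import GeminiTextGenerator

# A single prompt; the "prompt" column name is an assumption for illustration.
df = bpd.DataFrame(
    {"prompt": ["Summarize in one sentence: BigQuery DataFrames scales pandas to BigQuery."]}
)

# Omitting model_name would fall back to "gemini-2.0-flash-001" and issue a warning,
# so the model is named explicitly here.
model = GeminiTextGenerator(model_name="gemini-2.0-flash-001")

results = model.predict(df)
print(results.head())

Per the parameter description above, max_iterations only applies when performing supervised tuning; it has no effect on plain predict calls.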

MultimodalEmbeddingGenerator

MultimodalEmbeddingGenerator(
 *,
 model_name: typing.Optional[typing.Literal["multimodalembedding@001"]] = None,
 session: typing.Optional[bigframes.session.Session] = None,
 connection_name: typing.Optional[str] = None
)

Multimodal embedding generator LLM model.

Parameters
Name Description
model_name str, Defaults to "multimodalembedding@001"

The model for multimodal embedding. The only accepted value is "multimodalembedding@001". Multimodal embedding models return embeddings for text, image, and video inputs. If no value is provided, "multimodalembedding@001" is used by default and a warning is issued.

session bigframes.Session or None

BQ session to create the model. If None, use the global default session.

connection_name str or None

Connection to use with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>.
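
A hedged sketch of embedding images: it assumes bigframes.pandas.from_glob_path is available to build a column of Cloud Storage object references and that predict accepts the resulting DataFrame directly; the bucket path is a placeholder.

import bigframes.pandas as bpd
from bigframes.ml.llm import MultimodalEmbeddingGenerator

# Build a DataFrame of image references from Cloud Storage.
# from_glob_path and the bucket path are assumptions for illustration;
# text input can be embedded with the same model.
images = bpd.from_glob_path("gs://my-bucket/images/*", name="image")

model = MultimodalEmbeddingGenerator(model_name="multimodalembedding@001")

# Each input row yields one embedding vector in the result DataFrame.
embeddings = model.predict(images)
print(embeddings.head())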

TextEmbeddingGenerator

TextEmbeddingGenerator(
 *,
 model_name: typing.Optional[
 typing.Literal[
 "text-embedding-005",
 "text-embedding-004",
 "text-multilingual-embedding-002",
 ]
 ] = None,
 session: typing.Optional[bigframes.session.Session] = None,
 connection_name: typing.Optional[str] = None
)

Text embedding generator LLM model.

Parameters
Name Description
model_name str, Defaults to "text-embedding-004"

The model for text embedding. Possible values are "text-embedding-005", "text-embedding-004", or "text-multilingual-embedding-002". text-embedding models return embeddings for text inputs. text-multilingual-embedding models return embeddings for text inputs in more than 100 supported languages. If no value is provided, "text-embedding-004" is used by default and a warning is issued.

session bigframes.Session or None

BQ session to create the model. If None, use the global default session.

connection_name str or None

Connection to use with the remote service, a str of the format <PROJECT_NUMBER/PROJECT_ID>.<LOCATION>.<CONNECTION_ID>.
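
A minimal sketch along the same lines, assuming a "content" column carries the text to embed (the column name is an assumption for illustration).

import bigframes.pandas as bpd
from bigframes.ml.llm import TextEmbeddingGenerator

# Text to embed; the "content" column name is an assumption for illustration.
df = bpd.DataFrame({"content": ["BigQuery DataFrames", "Vertex AI text embeddings"]})

model = TextEmbeddingGenerator(model_name="text-embedding-005")

# predict returns one embedding vector per input row.
embeddings = model.predict(df)
print(embeddings.head())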
