I’m building an internal LLM testing platform where users can select any Amazon Bedrock model and dynamically adjust its input parameters (e.g., max_tokens, temperature, top_p, embedding options) through a UI.
My goals:
1. Test all Bedrock models, including:
• Text-generation models
• Embedding models
• Rerank models
• Multimodal / video models (e.g., TwelveLabs)
2. Allow users to modify each model’s parameters before calling the model.
3. Send the correct request payload depending on the selected model (roughly the call path sketched below).
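To make goal 3 concrete, this is roughly what the call path looks like today. It is a minimal sketch with boto3: build_body is a hypothetical helper I currently hand-write per provider, the model-ID prefixes are just examples, and the provider field names are written from memory of the provider docs.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def build_body(model_id: str, prompt: str, params: dict) -> dict:
    # Hand-written per-provider branching -- exactly the part I don't know
    # how to generate automatically without a parameter schema.
    if model_id.startswith("anthropic."):
        return {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": params.get("max_tokens", 512),
            "temperature": params.get("temperature", 0.7),
            "messages": [{"role": "user", "content": prompt}],
        }
    if model_id.startswith("amazon.titan-text"):
        return {
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": params.get("max_tokens", 512),
                "temperature": params.get("temperature", 0.7),
            },
        }
    raise ValueError(f"No request template known for {model_id}")

def invoke(model_id: str, prompt: str, params: dict) -> dict:
    response = bedrock_runtime.invoke_model(
        modelId=model_id,
        body=json.dumps(build_body(model_id, prompt, params)),
    )
    return json.loads(response["body"].read())
```

Every new model family means another branch like this, which is what I would like to generate (or at least validate) from some kind of schema.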
However, I ran into a problem:
Problem
Amazon Bedrock does not provide an API that returns:
• The required parameter schema for each model
• Valid parameter names
• Min/max values
• Which parameters are supported
• Which format the request body must follow
(The sketch below shows what the model-metadata APIs do return, and how little of it helps here.)
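A minimal sketch using boto3's bedrock control-plane client (the model ID is just an example):

```python
import boto3

bedrock = boto3.client("bedrock")  # control plane, not bedrock-runtime

details = bedrock.get_foundation_model(
    modelIdentifier="anthropic.claude-3-haiku-20240307-v1:0"
)["modelDetails"]

# Enough to filter the model picker in the UI...
print(details["providerName"], details["inputModalities"], details["outputModalities"])

# ...but nothing in modelDetails describes inference parameter names,
# types, or min/max ranges, and ListFoundationModels is no different.
```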
Also:
• InvokeModel supports more models than Converse, but each provider (Anthropic, Cohere, Meta, Amazon, etc.) uses its own provider-native request schema.
• Even within the same provider, different models may use different request formats
(e.g., Cohere Command vs Cohere Embed vs Cohere Rerank; a comparison is sketched after this list).
• There is no built-in "unified schema" in Bedrock that I can fetch programmatically.
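For example, these are the Cohere request bodies I currently hard-code for InvokeModel (field names written from memory of the provider docs, so treat them as approximate rather than authoritative):

```python
# Cohere Command (text generation)
cohere_command_body = {
    "prompt": "Summarize this document ...",
    "max_tokens": 512,
    "temperature": 0.7,
    "p": 0.9,
}

# Cohere Embed -- same provider, entirely different shape
cohere_embed_body = {
    "texts": ["Summarize this document ..."],
    "input_type": "search_document",
}

# Cohere Rerank is different again (query/documents style), and multimodal
# models such as TwelveLabs add yet more request formats on top of that.
```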
Because of this, I don’t know how to automatically generate a correct request payload for each model.
Question
How do teams typically solve this?
Is there any recommended way to:
• Standardize or abstract Bedrock model request schemas?
• Programmatically discover which parameters a model supports?
• Build a UI that allows users to adjust parameters safely?
• Handle providers/models that have different request formats?
Do I need to manually maintain a model registry that stores:
• Provider
• OutputModalities
• Supported parameters
• Min/max values
• Example request templates
...or is there a better approach? (A rough sketch of the registry entry I have in mind is below.)
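It would be a sketch only: the parameter names, ranges, and defaults here are my own placeholders, not values published by Bedrock.

```python
# One hand-maintained entry per model (or per model family).
MODEL_REGISTRY = {
    "anthropic.claude-3-haiku-20240307-v1:0": {
        "provider": "Anthropic",
        "output_modalities": ["TEXT"],
        "parameters": {
            "max_tokens":  {"type": "int",   "min": 1,   "max": 4096, "default": 512},
            "temperature": {"type": "float", "min": 0.0, "max": 1.0,  "default": 0.7},
            "top_p":       {"type": "float", "min": 0.0, "max": 1.0,  "default": 0.9},
        },
        # Template the backend fills in after validating the user's values
        # against the ranges above; the UI renders its controls from "parameters".
        "request_template": {
            "anthropic_version": "bedrock-2023-05-31",
            "messages": [{"role": "user", "content": "{prompt}"}],
        },
    },
}
```

Maintaining this by hand for every model family feels fragile, which is why I'm asking whether there is a better-established pattern.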
Any guidance or best practices for building a multi-model Bedrock testing platform would be greatly appreciated.