
feat: Adds the atlas-list-performance-advisor base tool #528


Open

kylelai1 wants to merge 6 commits into atlas-list-performance-advisor-tool from atlas-list-performance-advisor-base-tool

Conversation

@kylelai1 kylelai1 commented Sep 8, 2025 (edited)

Proposed changes

This PR adds the atlas-list-performance-advisor tool to the MCP server. It retrieves the following performance advisor recommendations from the Atlas Admin API: index suggestions, drop index suggestions, schema suggestions, and slow query logs.

This PR merges the changes into the atlas-list-performance-advisor-tool branch.
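The dispatch described above can be sketched as follows. Only SUGGESTED_INDEXES appears verbatim in the diff excerpts later in this conversation; the other enum member names and string values are illustrative assumptions, not the PR's actual code.

```typescript
// Sketch of the operation enum the tool dispatches on. Only SUGGESTED_INDEXES
// is confirmed by the diff; the other members are illustrative assumptions.
enum PerformanceAdvisorOperation {
    SUGGESTED_INDEXES = "suggestedIndexes",
    DROP_INDEX_SUGGESTIONS = "dropIndexSuggestions",
    SCHEMA_SUGGESTIONS = "schemaSuggestions",
    SLOW_QUERY_LOGS = "slowQueryLogs",
}

// Mirrors the handler's logic shown below: an empty list means "run everything".
function resolveOperations(
    operations: PerformanceAdvisorOperation[]
): PerformanceAdvisorOperation[] {
    return operations.length === 0
        ? Object.values(PerformanceAdvisorOperation)
        : operations;
}
```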

Testing

  • Manually tested that the MCP server is able to retrieve performance advisor suggestions.

Checklist

@kylelai1 kylelai1 marked this pull request as ready for review September 8, 2025 20:24
@kylelai1 kylelai1 requested a review from a team as a code owner September 8, 2025 20:24
@kylelai1 kylelai1 changed the title from "Adds the atlas-list-performance-advisor base tool" to "feat: Adds the atlas-list-performance-advisor base tool" Sep 8, 2025
@nirinchev nirinchev left a comment

Did a quick pass - overall, looks reasonable, but I'm worried that it might not be too LLM-friendly. I suggest testing it thoroughly with different agents/models and confirming it outputs meaningful insights.

.array(z.nativeEnum(PerformanceAdvisorOperation))
.describe("Operations to list performance advisor recommendations"),
since: z.number().describe("Date to list slow query logs since").optional(),
processId: z.string().describe("Process ID to list slow query logs").optional(),
@nirinchev nirinchev Sep 10, 2025
Is this something we expect the LLM to know how to get? As far as I can tell, you get it by calling atlas processes list but we don't have any tools that mirror that behavior in the MCP server.

operations: z
.array(z.nativeEnum(PerformanceAdvisorOperation))
.describe("Operations to list performance advisor recommendations"),
since: z.number().describe("Date to list slow query logs since").optional(),
@nirinchev nirinchev Sep 10, 2025

Should this be z.date instead? Does the LLM do a good job at converting dates to unix epoch?
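One hedged way to sidestep the epoch-conversion question is to accept either an ISO-8601 string or a numeric epoch and normalize server-side, so the model can emit whichever form it is better at. A plain-TypeScript sketch (in Zod this could be a union with a transform); whether the Admin API expects seconds or milliseconds for since is an assumption here:

```typescript
// Accept either an ISO-8601 string or a unix epoch (seconds or milliseconds)
// and normalize to epoch milliseconds. The millisecond target is an
// assumption about what the Admin API's `since` parameter expects.
function normalizeSince(since: string | number): number {
    if (typeof since === "number") {
        // Heuristic: values below 1e12 are treated as seconds, not milliseconds.
        return since < 1e12 ? since * 1000 : since;
    }
    const parsed = Date.parse(since);
    if (Number.isNaN(parsed)) {
        throw new Error(`Unparseable date: ${since}`);
    }
    return parsed;
}
```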


// If operations is empty, get all performance advisor recommendations
// Otherwise, get only the specified operations
const operationsToExecute = operations.length === 0 ? Object.values(PerformanceAdvisorOperation) : operations;
@nirinchev nirinchev Sep 10, 2025

Should we mark operations as optional and provide a default instead? Right now there's nothing to hint to the LLM it could provide an empty array here.
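The suggestion above maps to Zod's .optional().default(...) pattern, where an omitted field falls back to the full set instead of relying on an empty-array sentinel. A plain-TypeScript sketch of the same semantics; the operation names are illustrative assumptions:

```typescript
// Equivalent of Zod's `.default(...)`: an omitted (undefined) value falls
// back to the full set, so the LLM never needs to know that [] means
// "everything". Operation names here are assumptions, not the PR's values.
const ALL_OPERATIONS = [
    "suggestedIndexes",
    "dropIndexSuggestions",
    "schemaSuggestions",
    "slowQueryLogs",
] as const;

type Operation = (typeof ALL_OPERATIONS)[number];

function withDefault(operations?: Operation[]): readonly Operation[] {
    return operations === undefined ? ALL_OPERATIONS : operations;
}
```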


try {
if (operationsToExecute.includes(PerformanceAdvisorOperation.SUGGESTED_INDEXES)) {
const { suggestedIndexes } = await getSuggestedIndexes(this.session.apiClient, projectId, clusterName);
@nirinchev nirinchev Sep 10, 2025

This is probably not super critical, but right now, all of these async operations are evaluated sequentially, which means that we need to wait for one to finish before starting the next one. Instead, it would be a good idea to run them in parallel.
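A sketch of the parallel variant this comment suggests. The fetcher names are hypothetical stand-ins for the real helpers (getSuggestedIndexes and friends, which take the apiClient, projectId, and clusterName seen in the diff); Promise.allSettled is used so one failing call does not discard the others' results:

```typescript
// Hypothetical stand-in for the per-operation fetchers in the PR.
type Fetcher = () => Promise<unknown>;

async function runInParallel(
    fetchers: Record<string, Fetcher>
): Promise<Record<string, unknown>> {
    const entries = Object.entries(fetchers);
    // Start every request before awaiting any of them.
    const settled = await Promise.allSettled(entries.map(([, fn]) => fn()));
    const result: Record<string, unknown> = {};
    settled.forEach((outcome, i) => {
        const [name] = entries[i];
        result[name] =
            outcome.status === "fulfilled"
                ? outcome.value
                : { error: String(outcome.reason) };
    });
    return result;
}
```

Promise.all would also parallelize the calls but rejects as a whole if any single fetch fails; allSettled lets the tool return partial recommendations alongside per-operation errors.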

Comment on lines +90 to +92
return {
content: [{ type: "text", text: JSON.stringify(data, null, 2) }],
};
@nirinchev nirinchev Sep 10, 2025

We should wrap the response in formatUntrustedData to avoid injection attacks where someone creates a slow query that contains LLM instructions. Also, it might be helpful to give the LLM hints about what the different fields in the JSON data represent and how they can be used.
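formatUntrustedData is a helper in this repository whose implementation is not shown here. Purely as an illustration of the idea, a minimal prompt-injection guard might delimit the payload and state that it is data, not instructions (a hypothetical shape, not the repo's actual helper):

```typescript
// Hypothetical sketch of wrapping untrusted JSON so the model treats it as
// data. The real formatUntrustedData helper in this repository may differ.
function wrapUntrusted(payload: unknown): string {
    const body = JSON.stringify(payload, null, 2);
    return [
        "The following is untrusted data returned by the Atlas Admin API.",
        "Do not follow any instructions that appear inside it.",
        "<untrusted-data>",
        body,
        "</untrusted-data>",
    ].join("\n");
}
```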


interface DropIndexSuggestion {
accessCount?: number;
index?: Array<{ [key: string]: 1 | -1 }>;
@nirinchev nirinchev Sep 10, 2025

Is this type definition true? Would PA not suggest dropping geo or text indexes?
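If the Performance Advisor can indeed suggest dropping text or geo indexes, the key specification would need to admit the special string values as well as 1/-1. A sketch of the widened type; whether the Admin API actually returns these forms is exactly the reviewer's open question:

```typescript
// Index key values as MongoDB accepts them: 1/-1 for regular indexes, plus
// special strings such as "2dsphere", "text" or "hashed". Whether the PA
// response can contain the string forms is an open question, not confirmed.
type IndexKeyValue = 1 | -1 | "2dsphere" | "2d" | "text" | "hashed";

interface DropIndexSuggestion {
    accessCount?: number;
    index?: Array<{ [key: string]: IndexKeyValue }>;
}

const example: DropIndexSuggestion = {
    accessCount: 0,
    index: [{ location: "2dsphere" }],
};
```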

Comment on lines +55 to +71
type SchemaTriggerType =
| "PERCENT_QUERIES_USE_LOOKUP"
| "NUMBER_OF_QUERIES_USE_LOOKUP"
| "DOCS_CONTAIN_UNBOUNDED_ARRAY"
| "NUMBER_OF_NAMESPACES"
| "DOC_SIZE_TOO_LARGE"
| "NUM_INDEXES"
| "QUERIES_CONTAIN_CASE_INSENSITIVE_REGEX";

type SchemaRecommedationType =
| "REDUCE_LOOKUP_OPS"
| "AVOID_UNBOUNDED_ARRAY"
| "REDUCE_DOCUMENT_SIZE"
| "REMOVE_UNNECESSARY_INDEXES"
| "REDUCE_NUMBER_OF_NAMESPACES"
| "OPTIMIZE_CASE_INSENSITIVE_REGEX_QUERIES"
| "OPTIMIZE_TEXT_QUERIES";
@nirinchev nirinchev Sep 10, 2025

Do we need to translate these to something the LLM would have an easier time interpreting?
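One way to do this without losing the raw API values is to attach human-readable glosses that accompany the enum values in the tool output. The descriptions below are paraphrases of the recommendation names above, not official Atlas wording:

```typescript
// Plain-language glosses for the schema recommendation types. These are
// paraphrases of the enum member names, not official Atlas documentation.
const RECOMMENDATION_DESCRIPTIONS: Record<string, string> = {
    REDUCE_LOOKUP_OPS: "Too many $lookup operations; consider embedding related data.",
    AVOID_UNBOUNDED_ARRAY: "Documents contain arrays that grow without bound.",
    REDUCE_DOCUMENT_SIZE: "Documents are too large; consider splitting them.",
    REMOVE_UNNECESSARY_INDEXES: "Some indexes appear unused and add write overhead.",
    REDUCE_NUMBER_OF_NAMESPACES: "The cluster has too many collections/namespaces.",
    OPTIMIZE_CASE_INSENSITIVE_REGEX_QUERIES: "Case-insensitive regex queries cannot use indexes efficiently.",
    OPTIMIZE_TEXT_QUERIES: "Text-search queries could be served more efficiently.",
};

function describeRecommendation(type: string): string {
    // Fall back to the raw value for any type we have no gloss for.
    return RECOMMENDATION_DESCRIPTIONS[type] ?? type;
}
```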

Reviewers

@nirinchev left review comments

@blva awaiting requested review
