
feat(a2a): added a2a protocol #2784


Open

waleedlatif1 wants to merge 7 commits into staging from feat/a2a

Conversation

@waleedlatif1 (Collaborator) commented Jan 13, 2026

Summary

  • added a2a protocol
    • added an A2A block; it uses Trigger.dev for notifications and Redis for caching (both optional) and has full database support as well
  • consolidated the workspace-exists util into lib
  • made output-select use the combobox instead of a custom button for selecting outputs
  • made the tags for templates and A2A use the tag-input with a new, less colorful variant for consistency

Type of Change

  • New feature

Testing

Tested manually

Checklist

  • Code follows project style guidelines
  • Self-reviewed my changes
  • Tests added/updated and passing
  • No new warnings introduced
  • I confirm that I have read and agree to the terms outlined in the Contributor License Agreement (CLA)


vercel bot commented Jan 13, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

1 Skipped Deployment
Project | Deployment | Review | Updated (UTC)
docs | Skipped | Skipped | Jan 13, 2026 4:03am

greptile-apps bot (Contributor) commented Jan 13, 2026

Greptile Overview

Greptile Summary

This PR introduces comprehensive support for the A2A (Agent-to-Agent) protocol v0.3, enabling Sim Studio workflows to be exposed as A2A-compatible agents and interact with external A2A agents.

Key Additions

Core Protocol Implementation:

  • Complete A2A server implementation with JSON-RPC 2.0 support for all required methods (message/send, message/stream, tasks/get, tasks/cancel, tasks/resubscribe); an example request is sketched after this list
  • Push notification system with optional Trigger.dev integration for reliable webhook delivery
  • Redis-based distributed locking for task concurrency control (gracefully degrades without Redis)
  • Agent card (discovery document) generation and caching
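As a rough illustration of the JSON-RPC surface listed above, the TypeScript sketch below sends a message/send request to the serve endpoint. The base URL, the X-API-Key header, and the exact payload field names are assumptions based on A2A v0.3 conventions, not verified details of this PR.

```ts
// Hedged sketch: invoke a published agent via JSON-RPC 2.0 message/send.
// Base URL, header name, and payload field names are illustrative assumptions.
async function sendA2AMessage(agentId: string, apiKey: string, text: string) {
  const response = await fetch(`https://example.com/api/a2a/serve/${agentId}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey, // or a bearer token, depending on the agent's configured auth scheme
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: crypto.randomUUID(),
      method: 'message/send',
      params: {
        message: {
          role: 'user',
          messageId: crypto.randomUUID(),
          parts: [{ kind: 'text', text }],
        },
      },
    }),
  })
  return response.json() // JSON-RPC envelope wrapping the task (status, history, artifacts)
}
```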

Database Schema:

  • Three new tables: a2a_agent, a2a_task, and a2a_push_notification_config
  • Proper indexes and foreign key constraints with cascade deletes
  • Support for task history, artifacts, and conversation context tracking

Authentication & Security:

  • Hybrid authentication supporting session, API key, and internal JWT tokens
  • HTTPS-only requirement for push notification webhooks (validated)
  • Configurable authentication schemes per agent (none, apiKey, bearer)

Developer Experience:

  • New A2A block for workflows to interact with external agents
  • Complete UI for agent creation, configuration, and publishing
  • Tool implementations for all A2A operations
  • Consolidated workspace permission utilities for better code reuse

Issues Identified

Critical (Logic):

  1. API key extraction security risk (serve route): Accepts API keys from Authorization header with simple string replacement, potentially bypassing proper validation
  2. Race condition without Redis: When Redis is unavailable, distributed locking fails and concurrent task operations can conflict
  3. Missing API key pass-through: Tool endpoints don't pass API keys to ClientFactory when calling external agents
  4. Missing input validation: skillTags parameter lacks array validation in agent update endpoint

Important (Logic):

  5. Timeout error handling: Workflow timeouts caught generically without specific timeout messaging
  6. Base64 validation gap: File bytes not validated before creating data URLs
  7. Streaming content loss: Empty finalContent could overwrite accumulated streaming content

Best Practices (Style):

  • Push notification retry strategy could be less aggressive
  • Polling mechanism lacks early disconnection detection
  • Lock timeout for streaming (5 minutes) might be too long
  • Various validation improvements for agent cards and error messages

Architectural Strengths

  • Clean separation between A2A protocol logic and workflow execution
  • Proper use of distributed locking pattern with fallback behavior
  • Well-structured database schema with appropriate indexes
  • Good code organization with lib utilities separated from API routes
  • Consistent error handling with JSON-RPC error codes

Confidence Score: 3/5

  • This PR has several logic issues that should be addressed before merging, particularly around authentication handling and race conditions
  • Score of 3 reflects a solid implementation with architectural strengths but notable issues:

What's Good (+):

  • Comprehensive A2A protocol implementation following v0.3 spec
  • Well-structured database schema with proper constraints and indexes
  • Good separation of concerns and code organization
  • Proper authentication checks and HTTPS enforcement for webhooks
  • Graceful degradation when optional services (Redis, Trigger.dev) are unavailable

Issues Requiring Attention (-):

  • Authentication bypass risk in API key extraction (critical security concern)
  • Race condition vulnerability when Redis is unavailable (could cause data corruption)
  • Missing API key pass-through to external agents (breaks authentication for external calls)
  • Several input validation gaps (skillTags, base64 data)
  • Timeout and error handling could be more specific

Why not higher: The authentication and concurrency issues are functional problems that could impact production usage. The API key extraction flaw is a security risk that should be fixed.

Why not lower: The core implementation is sound, changes are well-tested manually, and most issues are fixable without major refactoring. The architecture supports the requirements well.

  • Pay close attention to apps/sim/app/api/a2a/serve/[agentId]/route.ts (authentication and concurrency logic) and apps/sim/app/api/tools/a2a/send-message*.ts files (missing API key pass-through)

Important Files Changed

File Analysis

Filename | Score | Overview
apps/sim/app/api/a2a/serve/[agentId]/route.ts | 3/5 | Main A2A protocol handler with authentication bypass risk in API key extraction, race condition without Redis, and timeout handling issues
apps/sim/app/api/a2a/agents/[agentId]/route.ts | 3/5 | Agent management with missing validation for skillTags input parameter
apps/sim/lib/a2a/utils.ts | 3/5 | Core utility functions with base64 validation gap in file conversion
apps/sim/app/api/tools/a2a/send-message/route.ts | 2/5 | Tool endpoint missing API key pass-through to external A2A agents
apps/sim/app/api/tools/a2a/send-message-stream/route.ts | 2/5 | Streaming tool endpoint with same API key pass-through issue as send-message
packages/db/schema.ts | 4/5 | Database schema additions for A2A protocol with proper indexes and constraints, minor documentation needs

Sequence Diagram

```mermaid
sequenceDiagram
 participant Client as External A2A Client
 participant API as /api/a2a/serve/[agentId]
 participant Auth as Authentication
 participant DB as Database
 participant Redis as Redis (Optional)
 participant Workflow as Workflow Executor
 participant Trigger as Trigger.dev (Optional)
 participant Webhook as Client Webhook
 Note over Client,Webhook: A2A Message Send Flow (message/send)
 Client->>API: POST JSON-RPC request (message/send)
 API->>Auth: Check authentication (session/API key/internal JWT)
 Auth-->>API: Authentication result
 
 alt Authentication Failed
 API-->>Client: 401 Unauthorized
 end
 API->>DB: Query agent and workflow
 alt Agent not published or workflow not deployed
 API-->>Client: 404 Agent unavailable
 end
 API->>Redis: Acquire task lock (if Redis available)
 alt Lock not acquired
 API-->>Client: 409 Task being processed
 end
 API->>DB: Create or update task (status: working)
 
 API->>Workflow: POST /api/workflows/[id]/execute
 Note right of Workflow: Execute with workflow input,<br/>files, and data parts
 
 alt Workflow execution succeeds
 Workflow-->>API: Success response with output
 API->>DB: Update task (status: completed, save result)
 API->>Trigger: Queue push notification (if configured)
 Trigger->>Webhook: POST task update notification
 API-->>Client: Task response (completed)
 else Workflow execution fails
 Workflow-->>API: Error response
 API->>DB: Update task (status: failed)
 API->>Trigger: Queue push notification (if configured)
 API-->>Client: Error response
 end
 API->>Redis: Release task lock
 Note over Client,Webhook: A2A Streaming Flow (message/stream)
 Client->>API: POST JSON-RPC request (message/stream)
 API->>Auth: Check authentication
 API->>DB: Query agent and workflow
 API->>Redis: Acquire task lock (5min timeout)
 API->>DB: Create or update task
 
 API->>Workflow: POST /api/workflows/[id]/execute (stream=true)
 Note over API: Opens SSE stream
 
 loop Stream chunks
 Workflow-->>API: SSE chunk events
 API-->>Client: Forward as A2A SSE events
 end
 
 Workflow-->>API: Final event (completed)
 API->>DB: Update task (status: completed)
 API->>Trigger: Queue push notification
 API->>Redis: Release task lock
 API-->>Client: Close stream
 Note over Client,Webhook: Task Query Flow (tasks/get)
 Client->>API: POST JSON-RPC request (tasks/get)
 API->>DB: Query task by ID
 alt Task found
 API-->>Client: Task with history/artifacts
 else Task not found
 API-->>Client: 404 Task not found
 end
 Note over Client,Webhook: Agent Discovery Flow (GET /api/a2a/serve/[agentId])
 Client->>API: GET request for agent card
 API->>Redis: Check cache (if available)
 alt Cache hit
 API-->>Client: Cached agent card
 else Cache miss
 API->>DB: Query agent details
 API->>Redis: Store in cache (1 hour TTL)
 API-->>Client: Agent card with capabilities/skills
 end
```

greptile-apps bot left a comment

22 files reviewed, 22 comments

Comment on apps/sim/app/api/a2a/serve/[agentId]/route.ts, lines +387 to +388

```ts
  signal: AbortSignal.timeout(A2A_DEFAULT_TIMEOUT),
})
```
greptile-apps bot commented Jan 13, 2026

The AbortSignal.timeout(A2A_DEFAULT_TIMEOUT) will throw an error when the timeout is reached, but this error is caught in the catch block at line 429. However, if the fetch is aborted due to timeout, the agent message added to history (line 395-398) will not reflect the actual failure state.

The task should be marked as 'failed' with an appropriate timeout message rather than allowing the catch block to handle it generically. Consider checking if the error is a timeout error specifically:

Suggested change

```ts
} catch (error) {
  const isTimeout = error instanceof Error && error.name === 'TimeoutError'
  logger.error(`Error executing workflow for task ${taskId}:`, { error, isTimeout })

  const errorMessage = isTimeout
    ? `Workflow execution timed out after ${A2A_DEFAULT_TIMEOUT}ms`
    : (error instanceof Error ? error.message : 'Workflow execution failed')
```

Comment on apps/sim/app/api/a2a/serve/[agentId]/route.ts, lines +298 to +305

```ts
const lockValue = uuidv4()
const acquired = await acquireLock(lockKey, lockValue, 60)

if (!acquired) {
  return NextResponse.json(
    createError(id, A2A_ERROR_CODES.INTERNAL_ERROR, 'Task is currently being processed'),
    { status: 409 }
  )
```
greptile-apps bot commented Jan 13, 2026

Race condition: If two requests arrive simultaneously for the same task, both could pass the acquireLock check before either updates the database. While Redis locking is attempted, when Redis is unavailable (line 92 in redis.ts returns true), the lock is not actually acquired.

Without Redis, concurrent requests could:

  1. Both read the same task state
  2. Both attempt to execute the workflow
  3. Both update the task with different execution results

This is particularly problematic for terminal state checks. Consider adding a database-level check after acquiring the lock or implementing optimistic locking with a version field in the task table.
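A hedged sketch of the optimistic-locking option is below. It assumes a hypothetical version integer column on a2aTask (not part of this PR), and it assumes db and a2aTask are the project's existing Drizzle handles; only one concurrent writer can then transition the task, even without Redis.

```ts
// Hedged sketch of optimistic locking; the `version` column is hypothetical (not in this PR).
import { and, eq } from 'drizzle-orm'

async function completeTaskOptimistically(taskId: string, expectedVersion: number) {
  const updated = await db
    .update(a2aTask)
    .set({
      status: 'completed',
      version: expectedVersion + 1,
      completedAt: new Date(),
      updatedAt: new Date(),
    })
    .where(and(eq(a2aTask.id, taskId), eq(a2aTask.version, expectedVersion)))
    .returning({ id: a2aTask.id })

  if (updated.length === 0) {
    // Another request already transitioned this task; surface it as a 409-style conflict.
    throw new Error('Task is currently being processed')
  }
}
```

With a check like this, the Redis lock becomes an optimization rather than the only guard, so behaviour stays correct when Redis is unavailable.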


Comment on apps/sim/app/api/a2a/serve/[agentId]/route.ts, lines +414 to +418

```ts
if (isTerminalState(finalState)) {
  notifyTaskStateChange(taskId, finalState).catch((err) => {
    logger.error('Failed to trigger push notification', { taskId, error: err })
  })
}
```
greptile-apps bot commented Jan 13, 2026

The push notification is triggered with notifyTaskStateChange but errors are only logged and not handled. If the push notification delivery fails silently, external clients won't receive task completion updates.

While this is acceptable for fire-and-forget notifications, consider the following (a retry sketch follows this list):

  1. At least tracking failed notification attempts in metrics/logs for monitoring
  2. Adding a retry mechanism for critical notifications
  3. Documenting that push notifications are best-effort delivery
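As a rough sketch of points 1 and 2 above, a small wrapper could retry delivery a bounded number of times and emit a structured failure log that monitoring can count. notifyTaskStateChange and logger come from the surrounding code; the retry count and delay are illustrative.

```ts
// Hedged sketch: best-effort bounded retry around the existing notifyTaskStateChange call,
// logging a structured event on final failure so it can be tracked in metrics.
async function notifyWithRetry(taskId: string, state: string, attempts = 3, delayMs = 2000) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      await notifyTaskStateChange(taskId, state)
      return
    } catch (err) {
      if (attempt === attempts) {
        logger.error('Push notification delivery failed after retries', { taskId, state, attempts, error: err })
        return // best-effort: swallow the error so task completion is not blocked
      }
      await new Promise((resolve) => setTimeout(resolve, delayMs * attempt))
    }
  }
}
```

Callers could still fire this without awaiting it, preserving the fire-and-forget behaviour while adding bounded retries.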

Comment on apps/sim/app/api/a2a/serve/[agentId]/route.ts, lines +646 to +662

```ts
}

const messageContent = finalContent || accumulatedContent || 'Task completed'
const agentMessage = createAgentMessage(messageContent)
agentMessage.taskId = taskId
if (contextId) agentMessage.contextId = contextId
history.push(agentMessage)

await db
  .update(a2aTask)
  .set({
    status: 'completed',
    messages: history,
    completedAt: new Date(),
    updatedAt: new Date(),
  })
  .where(eq(a2aTask.id, taskId))
```
greptile-apps bot commented Jan 13, 2026

In the streaming handler, when parsing SSE chunks, if parsed.finalContent is available but empty, it will overwrite the accumulatedContent. This could result in losing streamed content if the final event has an empty content field.

Suggested change

```ts
const messageContent = finalContent || accumulatedContent || 'Task completed'
```

Consider: `const messageContent = (finalContent !== undefined ? finalContent : accumulatedContent) || 'Task completed'` to properly handle empty strings vs undefined.


Comment on apps/sim/app/api/a2a/serve/[agentId]/route.ts, lines +933 to +935

```ts
const pollInterval = 3000 // 3 seconds (reduced from 1s to lower DB load)
const maxPolls = 100 // 5 minutes max (100 * 3s = 300s)
```

greptile-apps bot commented Jan 13, 2026

The polling mechanism in handleTaskResubscribe uses a 3-second interval with a maximum of 100 polls (5 minutes total). However, there's no graceful degradation if the client disconnects early.

The cancel() method is implemented (line 1031-1037), but consider adding a connection check or using request.signal to detect early disconnection and stop polling immediately to reduce unnecessary database load.
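A hedged sketch of the request.signal suggestion: check for client abort before each poll and bail out early. getTaskSnapshot is a hypothetical placeholder for the existing per-poll task query; isTerminalState is the helper already used elsewhere in this route.

```ts
// Hedged sketch: stop polling as soon as the client disconnects.
// getTaskSnapshot is a hypothetical placeholder for the existing per-poll DB query.
async function pollTaskWithAbort(request: Request, taskId: string) {
  const pollInterval = 3000
  const maxPolls = 100

  for (let i = 0; i < maxPolls; i++) {
    if (request.signal.aborted) {
      // Client went away; skip any further database polling.
      return null
    }
    const task = await getTaskSnapshot(taskId)
    if (task && isTerminalState(task.status)) return task
    await new Promise((resolve) => setTimeout(resolve, pollInterval))
  }
  return null
}
```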


Comment on apps/sim/lib/workspaces/permissions/utils.ts, lines +26 to +33

```ts
 * @returns The workspace if found, null otherwise
 */
export async function getWorkspaceById(workspaceId: string): Promise<WorkspaceBasic | null> {
  const [ws] = await db
    .select({ id: workspace.id })
    .from(workspace)
    .where(eq(workspace.id, workspaceId))
    .limit(1)
```
greptile-apps bot commented Jan 13, 2026

The getWorkspaceById function only selects the id field, which is already provided as input. This seems redundant - the function is essentially checking existence.

Consider renaming this to workspaceExists and returning a boolean directly, or if you need the workspace object for future use, document why only the ID is selected:

Suggested change

```ts
/**
 * Check if a workspace exists by ID
 *
 * @param workspaceId - The workspace ID to check
 * @returns True if workspace exists, false otherwise
 */
export async function workspaceExists(workspaceId: string): Promise<boolean> {
  const [ws] = await db
    .select({ id: workspace.id })
    .from(workspace)
    .where(eq(workspace.id, workspaceId))
    .limit(1)

  return !!ws
}
```

Note: There's already a workspaceExists function at lines 59-62, but it calls getWorkspaceById, which seems circular.


Comment on apps/sim/background/a2a-push-notification-delivery.ts, lines +13 to +21

```ts
export const a2aPushNotificationTask = task({
  id: 'a2a-push-notification-delivery',
  retry: {
    maxAttempts: 5,
    minTimeoutInMs: 1000,
    maxTimeoutInMs: 60000,
    factor: 2,
  },
  run: async (params: A2APushNotificationParams) => {
```
greptile-apps bot commented Jan 13, 2026

The Trigger.dev task configuration has a maximum of 5 retry attempts with exponential backoff (factor: 2), which means:

  • Attempt 1: immediate
  • Attempt 2: ~1s later
  • Attempt 3: ~2s later
  • Attempt 4: ~4s later
  • Attempt 5: ~8s later

Total: ~15 seconds of retries. For webhook deliveries, this might be too aggressive if the recipient server is temporarily down. Consider the following (a possible configuration is sketched after this list):

  1. Increasing maxTimeoutInMs to allow longer delays between retries
  2. Reducing maxAttempts but with longer delays
  3. Adding jitter to prevent thundering herd if many notifications fail simultaneously
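A hedged example of adjustments along the lines of points 1 and 3: a higher backoff ceiling plus jitter. The randomize flag is assumed to be Trigger.dev's built-in jitter option; the values are illustrative, not a recommendation from this PR.

```ts
// Hedged sketch of a gentler retry policy for webhook delivery; values are illustrative.
import { task } from '@trigger.dev/sdk/v3'

type A2APushNotificationParams = { taskId: string; state: string } // minimal stand-in for the PR's params type

export const a2aPushNotificationTask = task({
  id: 'a2a-push-notification-delivery',
  retry: {
    maxAttempts: 5,
    minTimeoutInMs: 5_000, // start at 5s instead of 1s
    maxTimeoutInMs: 300_000, // allow delays of up to 5 minutes between attempts
    factor: 2,
    randomize: true, // assumed Trigger.dev jitter option, to avoid a thundering herd of retries
  },
  run: async (params: A2APushNotificationParams) => {
    // ...existing delivery logic unchanged
  },
})
```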



Comment on apps/sim/app/api/a2a/serve/[agentId]/route.ts, lines +484 to +499

```ts
const acquired = await acquireLock(lockKey, lockValue, 300) // 5 minute timeout for streaming

if (!acquired) {
  const encoder = new TextEncoder()
  const errorStream = new ReadableStream({
    start(controller) {
      controller.enqueue(
        encoder.encode(
          `event: error\ndata: ${JSON.stringify({ code: A2A_ERROR_CODES.INTERNAL_ERROR, message: 'Task is currently being processed' })}\n\n`
        )
      )
      controller.close()
    },
  })
  return new NextResponse(errorStream, { headers: SSE_HEADERS })
}
```
greptile-apps bot commented Jan 13, 2026

When the streaming lock cannot be acquired (line 486), an error stream is created and returned. However, the lock timeout for streaming is set to 300 seconds (5 minutes, line 484), which is quite long.

If a streaming request hangs or fails without properly releasing the lock, subsequent streaming requests for the same task will be blocked for up to 5 minutes. Consider the following (a heartbeat sketch follows this list):

  1. Using a shorter lock timeout for streaming operations
  2. Implementing a heartbeat mechanism to extend the lock while streaming is active
  3. Adding cleanup logic to detect and remove stale locks
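A hedged sketch of option 2: acquire the lock with a shorter TTL and extend it on a heartbeat while the stream is active. acquireLock and releaseLock exist in this PR; extendLock is a hypothetical helper that would refresh the Redis TTL only while the lock value still matches.

```ts
// Hedged sketch: short lock TTL plus a heartbeat that extends it while streaming.
// extendLock(lockKey, lockValue, ttlSeconds) is a hypothetical helper, not from this PR.
const LOCK_TTL_SECONDS = 60
const HEARTBEAT_MS = 20_000

async function withStreamingLock(lockKey: string, lockValue: string, stream: () => Promise<void>) {
  const acquired = await acquireLock(lockKey, lockValue, LOCK_TTL_SECONDS)
  if (!acquired) throw new Error('Task is currently being processed')

  const heartbeat = setInterval(() => {
    extendLock(lockKey, lockValue, LOCK_TTL_SECONDS).catch(() => {
      // Best-effort: if extension fails, the lock simply expires after LOCK_TTL_SECONDS.
    })
  }, HEARTBEAT_MS)

  try {
    await stream()
  } finally {
    clearInterval(heartbeat)
    await releaseLock(lockKey, lockValue)
  }
}
```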



Comment on packages/db/schema.ts, lines +1892 to +1902

```ts
    sessionIdIdx: index('a2a_task_session_id_idx').on(table.sessionId),
    statusIdx: index('a2a_task_status_idx').on(table.status),
    executionIdIdx: index('a2a_task_execution_id_idx').on(table.executionId),
    createdAtIdx: index('a2a_task_created_at_idx').on(table.createdAt),
  })
)

/**
 * A2A Push Notification Config - Webhook configuration for task updates
 * Stores push notification webhooks for async task updates
 */
```
greptile-apps bot commented Jan 13, 2026

The a2aTask table stores messages as JSONB with a default of '[]', but there's no database-level constraint on array size. With A2A_MAX_HISTORY_LENGTH set to 100 in constants.ts, message arrays could grow unbounded if the application-level truncation fails.

Consider the following (a truncation sketch follows this list):

  1. Adding application-level cleanup of old messages when updating tasks
  2. Documenting the expected maximum size for capacity planning
  3. Adding monitoring for tasks with excessive message counts
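As a hedged illustration of point 1, the history array could be truncated to the last A2A_MAX_HISTORY_LENGTH entries before every write. The constant exists in this PR's constants.ts; the helper and the surrounding update call are illustrative.

```ts
// Hedged sketch: cap stored task history before writing it back,
// so the JSONB messages column cannot grow without bound.
function truncateHistory<T>(history: T[], limit = A2A_MAX_HISTORY_LENGTH): T[] {
  return history.length > limit ? history.slice(history.length - limit) : history
}

await db
  .update(a2aTask)
  .set({
    messages: truncateHistory(history),
    updatedAt: new Date(),
  })
  .where(eq(a2aTask.id, taskId))
```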

Comment on apps/sim/app/api/a2a/serve/[agentId]/utils.ts, lines +139 to +149

```ts
export function extractAgentContent(executeResult: {
  output?: { content?: string; [key: string]: unknown }
  error?: string
}): string {
  return (
    executeResult.output?.content ||
    (typeof executeResult.output === 'object'
      ? JSON.stringify(executeResult.output)
      : String(executeResult.output || executeResult.error || 'Task completed'))
  )
}
```
greptile-apps bot commented Jan 13, 2026

The extractAgentContent function has a fallback chain that could return misleading results. If executeResult.output is an empty object, it will stringify to a string representation instead of using the fallback message.

Consider checking for meaningful content before stringifying objects to avoid returning unhelpful empty object representations.
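One hedged way to apply that advice: return the content field when present, stringify the output only when it has meaningful keys, and otherwise fall back to the error or default message. This is a sketch of a possible revision (it drops the original's String() branch for primitive outputs), not the PR's implementation.

```ts
// Hedged sketch: avoid returning '{}' for empty outputs by checking for meaningful keys first.
function extractAgentContentSketch(executeResult: {
  output?: { content?: string; [key: string]: unknown }
  error?: string
}): string {
  const { output, error } = executeResult
  if (output?.content) return output.content
  if (output && Object.keys(output).length > 0) return JSON.stringify(output)
  return error || 'Task completed'
}
```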



Reviewers

greptile-apps[bot] left review comments (1 more reviewer)

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone
