
Agent Execution History

Track, analyze, and export all agent executions with comprehensive filtering and detail views.

Overview

Every time an agent is executed - whether through the dashboard, API, or workflow - a complete record is saved. The Execution History feature gives you full visibility into:

  • What inputs were provided
  • What prompts were sent to the AI
  • What outputs were generated
  • Performance metrics (latency, tokens, cost)
  • Error details (if execution failed)

Accessing Execution History

From Agent Detail Page

  1. Navigate to /agents/[agentId]
  2. Click the Executions tab
  3. View paginated list of all executions

From API

GET /api/agents/{agentId}/executions?limit=50&offset=0

See API Reference for complete documentation.
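As a sketch of calling this endpoint from code, the helper below builds the request URL with pagination parameters. The base URL and the `build_executions_url` name are illustrative, not part of the API; a real request would also need your API key (e.g., in an Authorization header):

```python
from urllib.parse import urlencode

def build_executions_url(base_url: str, agent_id: str,
                         limit: int = 50, offset: int = 0) -> str:
    """Build the list-executions URL with pagination query parameters."""
    query = urlencode({"limit": limit, "offset": offset})
    return f"{base_url}/api/agents/{agent_id}/executions?{query}"

# Placeholder base URL; substitute your deployment's host.
url = build_executions_url("https://app.example.com", "agent_123")
print(url)
```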

Execution History Table

Columns

The execution table displays:

  • Status - Success, Failed, Rate Limited, Canceled, or Unauthorized
  • Source - Dashboard, API, or Workflow
  • Executed - Time since execution (e.g., "2 hours ago")
  • Latency - Response time in milliseconds
  • Tokens - Total tokens consumed
  • Cost - Cost in USD
  • Version - Prompt version used
  • Actions - View details button

Status Indicators

Executions are marked with color-coded badges:

  • Success 🟢 - Execution completed successfully
  • Failed 🔴 - Execution failed with an error
  • Rate Limited 🟡 - Rate limit exceeded
  • Canceled ⚪ - The user canceled the execution
  • Unauthorized 🔴 - Authentication failed

Filtering Executions

Filter by Status

Click the Status Filter dropdown to show only:

  • All Statuses
  • Success
  • Failed
  • Rate Limited
  • Canceled
  • Unauthorized

Use Case: Find all failed executions to debug issues

Filter by Source

Click the Source Filter dropdown to show only:

  • All Sources
  • Dashboard (executions from PrompTick UI)
  • API (executions via API key)
  • Workflow (executions from automated workflows)

Use Case: Compare API vs dashboard execution performance

Filter by Date Range

Use the date pickers to filter executions:

  • Start Date - Show executions after this date
  • End Date - Show executions before this date

Use Case: Compare metrics before and after a version update

Combined Filters

Filters can be combined for precise queries:

Example: Show all failed API executions from last week

  1. Status: Failed
  2. Source: API
  3. Start Date: 7 days ago
  4. End Date: Today
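The same combined filter can be expressed as API query parameters. This sketch assumes the endpoint accepts the lowercase `status`/`source` values that appear in the CSV export (e.g., `failed`, `api`) and ISO dates for `startDate`/`endDate`:

```python
from datetime import date, timedelta
from urllib.parse import urlencode

def failed_api_filter(today: date) -> str:
    """Query string for failed API executions from the last 7 days."""
    params = {
        "status": "failed",
        "source": "api",
        "startDate": (today - timedelta(days=7)).isoformat(),
        "endDate": today.isoformat(),
    }
    return urlencode(params)

print(failed_api_filter(date(2025, 11, 18)))
# status=failed&source=api&startDate=2025-11-11&endDate=2025-11-18
```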

Pagination

Navigate through large datasets efficiently:

  • Page Size: 20 executions per page (default)
  • Navigation: Previous/Next buttons
  • Page Info: Shows "Showing 1 to 20 of 234 executions"
  • Total Pages: Display current page and total pages
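The page info line follows directly from `offset`, `limit`, and `total`. A minimal sketch of the arithmetic (the `page_info` helper is illustrative, not part of the product):

```python
import math

def page_info(total: int, limit: int = 20, offset: int = 0) -> str:
    """Compute the 'Showing X to Y of Z executions' line and total pages."""
    first = offset + 1
    last = min(offset + limit, total)
    pages = math.ceil(total / limit)
    current = offset // limit + 1
    return f"Showing {first} to {last} of {total} executions (page {current} of {pages})"

print(page_info(234))               # Showing 1 to 20 of 234 executions (page 1 of 12)
print(page_info(234, offset=220))   # Showing 221 to 234 of 234 executions (page 12 of 12)
```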

Performance

  • Fast loading even with thousands of executions
  • Efficient database queries with offset/limit
  • Smart caching for better performance

Execution Details

Click the eye icon (👁️) to view complete execution details.

Detail Modal Sections

Performance Metrics

Displayed as cards at the top:

  • Latency: Total response time (milliseconds)
  • Tokens: Total tokens consumed
  • Cost: Execution cost in USD
  • Model: AI model used (e.g., gemini-1.5-pro)

Input Variables

Shows all variables provided:

product_name: "Smart Watch Pro"
target_audience: "fitness enthusiasts"
tone: "energetic"

Copy Button: Copy all input variables as JSON

Output

For Successful Executions:

  • Full AI-generated response
  • Formatted in a code block
  • Copy button to copy output text

For Failed Executions:

  • Error message with details
  • Error code (e.g., RATE_LIMIT_EXCEEDED)
  • Troubleshooting hints

Prompt Configuration

View the exact prompts sent to the AI:

  • Version: Prompt version label
  • Temperature: Temperature setting used
  • System Prompt: Complete system prompt
  • User Prompt: Complete user prompt with variables substituted

Use Case: Debug unexpected outputs by reviewing exact prompts

Metadata

Additional execution information:

  • Execution ID: Unique identifier
  • Source: Dashboard, API, or Workflow
  • Executed By: User ID who triggered execution
  • Executed At: Full timestamp
  • API Key ID: Which API key was used (if applicable)
  • IP Address: Request IP (for API calls)
  • User Agent: Client user agent (for API calls)

CSV Export

Export execution data for external analysis.

How to Export

  1. Filter executions as desired (optional)
  2. Click "Export CSV" button
  3. CSV file downloads automatically

CSV Format

The exported CSV includes:

Execution ID,Status,Source,Executed At,Latency (ms),Tokens,Cost (USD),Model,Version
exec_xyz789,success,api,2025-11-18T09:45:00Z,1250,450,0.0023,gemini-1.5-pro,V3
exec_abc123,failed,dashboard,2025-11-18T09:30:00Z,150,0,0,gemini-1.5-pro,V3
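The exported file parses cleanly with a standard CSV reader. A small sketch that loads the sample rows above and computes total cost and success rate:

```python
import csv
import io

SAMPLE = """Execution ID,Status,Source,Executed At,Latency (ms),Tokens,Cost (USD),Model,Version
exec_xyz789,success,api,2025-11-18T09:45:00Z,1250,450,0.0023,gemini-1.5-pro,V3
exec_abc123,failed,dashboard,2025-11-18T09:30:00Z,150,0,0,gemini-1.5-pro,V3
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
total_cost = sum(float(r["Cost (USD)"]) for r in rows)
success_rate = sum(r["Status"] == "success" for r in rows) / len(rows)
print(f"rows={len(rows)} cost=${total_cost:.4f} success={success_rate:.0%}")
# rows=2 cost=$0.0023 success=50%
```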

Use Cases for Export

  • Spreadsheet Analysis: Import into Excel/Google Sheets
  • Data Visualization: Create custom charts
  • Reporting: Generate reports for stakeholders
  • Cost Tracking: Detailed cost analysis by time period
  • Compliance: Export for audit trails

Common Use Cases

1. Debugging Failed Executions

Goal: Find and fix execution failures

Steps:

  1. Filter by Status: "Failed"
  2. Sort by most recent
  3. Click on a failed execution
  4. Review error message and error code
  5. Check input variables (were they valid?)
  6. Review prompts (did substitution work correctly?)
  7. Fix the underlying issue

2. Monitoring Version Updates

Goal: Compare performance before/after version change

Steps:

  1. Note the date/time of version update
  2. Filter executions before update (last 7 days before)
  3. Export CSV
  4. Filter executions after update (last 7 days after)
  5. Export CSV
  6. Compare metrics in spreadsheet:
    • Success rates
    • Average latency
    • Average tokens
    • Average cost

3. Cost Analysis

Goal: Understand and optimize costs

Steps:

  1. Export all executions for a month
  2. Create pivot table by:
    • Model used
    • Prompt version
    • Source (API vs dashboard)
  3. Identify highest-cost categories
  4. Optimize:
    • Use cheaper models for simple tasks
    • Reduce token usage in prompts
    • Cache frequent requests

4. Performance Tracking

Goal: Ensure agent meets SLAs

Steps:

  1. Filter by Status: "Success"
  2. Export executions
  3. Calculate:
    • Average latency
    • 95th percentile latency
    • Success rate
  4. Set up alerts if metrics degrade
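The 95th-percentile calculation in step 3 can be done with a nearest-rank percentile over exported latencies (sample values are made up):

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the value at ceil(pct/100 * n) in sorted order."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [800, 950, 1000, 1100, 1200, 1250, 1300, 1500, 2000, 4800]
print(percentile(latencies, 95))        # 4800
print(sum(latencies) / len(latencies))  # 1590.0 average
```

Note how one slow outlier dominates the p95 while barely moving the average; that is why SLAs are usually stated against percentiles.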

5. API Usage Auditing

Goal: Track who is using the API and how

Steps:

  1. Filter by Source: "API"
  2. Review IP addresses and API key usage
  3. Identify:
    • Most active API keys
    • Unusual usage patterns
    • Potential abuse
  4. Take action:
    • Rotate compromised keys
    • Adjust rate limits
    • Contact heavy users

Real-Time Monitoring

Live Updates

The execution history updates automatically:

  • New executions appear at the top
  • No need to refresh the page
  • Real-time status updates

Monitoring Dashboard

For active monitoring:

  1. Keep Executions tab open
  2. Set filters (e.g., show only failures)
  3. Watch for new entries
  4. Respond to issues immediately

Performance Metrics Explained

Latency

What: Total time from request to response

Factors:

  • AI model speed
  • Prompt complexity
  • Network latency
  • Queue waiting time

Good Values:

  • Flash models: 500-1500ms
  • Pro models: 1000-3000ms
  • GPT-4: 2000-5000ms

Red Flags:

  • Latency > 10 seconds (investigate)
  • Sudden latency spikes (model issues?)

Tokens

What: Number of tokens processed (input + output)

Factors:

  • Prompt length (system + user)
  • Variable values length
  • Response length
  • Model-specific tokenization

Optimization:

  • Shorter prompts = fewer tokens
  • Limit max tokens in config
  • Remove unnecessary context

Cost

What: Execution cost based on tokens and model

Calculation:

Cost = (Input Tokens × Input Price) + (Output Tokens × Output Price)

Cost Ranges (approximate):

  • Gemini Flash: $0.00001 - $0.0001
  • Gemini Pro: $0.0001 - $0.001
  • GPT-4: $0.001 - $0.01
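The cost formula above is straightforward to apply yourself. The per-token prices in this sketch are illustrative only; check your provider's current pricing:

```python
def execution_cost(input_tokens, output_tokens,
                   input_price_per_token, output_price_per_token):
    """Cost = (Input Tokens x Input Price) + (Output Tokens x Output Price)."""
    return (input_tokens * input_price_per_token
            + output_tokens * output_price_per_token)

# Illustrative prices; check your provider's current pricing.
cost = execution_cost(300, 150, 0.00000125, 0.000005)
print(f"${cost:.6f}")  # $0.001125
```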

API Access

List Executions Endpoint

GET /api/agents/{agentId}/executions

Query Parameters:

  • limit - Results per page (default: 20, max: 100)
  • offset - Results to skip (for pagination)
  • status - Filter by status
  • source - Filter by source
  • startDate - Filter by start date
  • endDate - Filter by end date

Response:

{
  "executions": [...],
  "pagination": {
    "limit": 20,
    "offset": 0,
    "total": 234,
    "hasMore": true
  }
}
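To collect every execution, follow `offset`/`limit` until `hasMore` is false. The sketch below keeps the HTTP call abstract (you supply a `fetch_page` function) and uses an in-memory fake to show the loop; the response shape matches the endpoint above:

```python
def fetch_all(fetch_page):
    """Collect every execution by paging until hasMore is false.

    fetch_page(limit, offset) must return the endpoint's response shape:
    {"executions": [...], "pagination": {..., "hasMore": bool}}.
    """
    executions, offset, limit = [], 0, 20
    while True:
        page = fetch_page(limit=limit, offset=offset)
        executions.extend(page["executions"])
        if not page["pagination"]["hasMore"]:
            return executions
        offset += limit

# Fake in-memory fetcher standing in for the real HTTP call:
DATA = [{"executionId": f"exec_{i}"} for i in range(45)]
def fake_fetch(limit, offset):
    return {"executions": DATA[offset:offset + limit],
            "pagination": {"limit": limit, "offset": offset,
                           "total": len(DATA),
                           "hasMore": offset + limit < len(DATA)}}

print(len(fetch_all(fake_fetch)))  # 45
```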

Get Execution Detail Endpoint

GET /api/agents/{agentId}/executions/{executionId}

Response:

{
  "execution": {
    "executionId": "exec_xyz789",
    "status": "success",
    "inputVariables": {...},
    "outputText": "...",
    "latencyMs": 1250,
    "tokensUsed": 450,
    "costUSD": 0.0023,
    ...
  }
}

See API Reference for complete details.

Best Practices

1. Regular Monitoring

  • Check execution history daily
  • Set up alerts for failures
  • Monitor cost trends weekly
  • Review performance metrics monthly

2. Use Filters Effectively

  • Save common filter combinations
  • Use date ranges for trend analysis
  • Combine filters for precise debugging

3. Export Regularly

  • Export monthly for reporting
  • Keep CSV archives for compliance
  • Use exports for deeper analysis

4. Investigate Failures Promptly

  • Review failed executions daily
  • Fix issues before they escalate
  • Document solutions for recurring issues

5. Track Version Performance

  • Export data before version updates
  • Compare metrics after updates
  • Rollback if metrics degrade significantly

Troubleshooting

Executions Not Showing

Problem: Expected executions don't appear

Solutions:

  • Check filter settings (reset to "All")
  • Verify correct agent is selected
  • Refresh the page
  • Check whether the executions are on a later page or older than your plan's retention window

Export Not Working

Problem: CSV export fails or is empty

Solutions:

  • Ensure there are executions to export
  • Check filter settings
  • Try smaller date range
  • Check browser download settings

Slow Loading

Problem: Execution history loads slowly

Solutions:

  • Use date range filters to limit results
  • Reduce page size
  • Clear browser cache
  • Check network connection

Missing Detail Data

Problem: Execution detail modal shows incomplete data

Solutions:

  • Verify execution completed successfully
  • Check permissions
  • Ensure execution is not too old (data retention)
  • Refresh and try again

Privacy & Data Retention

What Data Is Stored

For each execution:

  • ✅ Inputs (variables provided)
  • ✅ Prompts (system and user prompts)
  • ✅ Outputs (AI responses)
  • ✅ Metadata (timing, cost, model, version)
  • ✅ Error details (if failed)

Data Retention

  • Standard: 90 days
  • Pro Plan: 180 days
  • Enterprise: Custom retention (1 year+)

Privacy

  • Execution data is private to your organization
  • Not shared with other users
  • Not used for training AI models
  • Encrypted at rest and in transit

Need help? Check our FAQ or contact support.