# Performance Intelligence
Query optimization, workload analysis, and index tuning for maximum PostgreSQL performance.
6 specialized performance tools for comprehensive query optimization:
| Tool | Purpose | Key Feature |
|---|---|---|
| `get_top_queries` | Identify slow and resource-heavy queries | pg_stat_statements |
| `explain_query` | Analyze query execution plans | EXPLAIN ANALYZE |
| `index_usage_stats` | Analyze index effectiveness | Index hit ratios |
| `suggest_indexes` | AI-powered index recommendations | DTA algorithm |
| `hypothetical_index_test` | Test indexes without creating them | HypoPG |
| `workload_analysis` | Comprehensive workload insights | Multi-metric analysis |
Requirements: `pg_stat_statements` extension (optional: `hypopg` for hypothetical indexes)
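`pg_stat_statements` must be preloaded at server start before the extension can be created; HypoPG is packaged separately from PostgreSQL. A typical setup, assuming superuser access, looks like:

```sql
-- postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, once per database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS hypopg;  -- optional, for hypothetical_index_test
```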
## get_top_queries

Find the slowest and most resource-intensive queries.
Parameters:
- `sort_by` (string, required): One of `total_time`, `mean_time`, `calls`, `rows`
- `limit` (integer, optional): Number of results (default: 10)
Example:

```python
# Find slowest queries by total time
result = get_top_queries(sort_by="total_time", limit=10)

# Find most frequently called queries
result = get_top_queries(sort_by="calls", limit=20)
```
Use Cases:
- Performance troubleshooting
- Query optimization prioritization
- Application profiling
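Because the tool exposes raw pg_stat_statements counters, per-call cost can be derived client-side. A minimal sketch, assuming the result is a dict with a `queries` list whose rows carry `calls` and `total_time` (ms) — the field names here are assumptions for illustration:

```python
# Hypothetical result shape; field names are assumptions.
result = {
    "queries": [
        {"query": "SELECT * FROM orders WHERE ...", "calls": 1200, "total_time": 4800.0},
        {"query": "UPDATE inventory SET ...", "calls": 10, "total_time": 900.0},
    ]
}

# mean_time = total_time / calls: a query can dominate total time simply
# by being called often, while another is slow on every single execution.
for q in result["queries"]:
    q["mean_time"] = q["total_time"] / q["calls"]

slowest_per_call = max(result["queries"], key=lambda q: q["mean_time"])
print(slowest_per_call["query"], slowest_per_call["mean_time"])
```

This is why both `total_time` and `mean_time` sort orders exist: the first finds cumulative load, the second finds individually slow statements.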
## explain_query

Get detailed execution plans with timing and cost analysis.
Parameters:
- `sql` (string, required): Query to explain
- `params` (list, optional): Parameter values for parameterized queries
- `analyze` (boolean, optional): Run EXPLAIN ANALYZE (default: False)
- `buffers` (boolean, optional): Include buffer usage stats
- `format` (string, optional): One of `text`, `json`, `xml`, `yaml`
Example:

```python
result = explain_query(
    sql="SELECT * FROM orders WHERE customer_id = %s",
    params=[12345],
    analyze=True,
    buffers=True,
    format="text",
)
```
Use Cases:
- Query optimization
- Index effectiveness validation
- Execution plan analysis
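With `format="json"`, the plan is machine-readable: PostgreSQL returns a one-element array whose `"Plan"` key is the root plan node, carrying estimated cost and (with ANALYZE) actual timings and row counts. A sketch over a hand-written plan fragment (the surrounding result shape is an assumption):

```python
import json

# Abbreviated EXPLAIN (ANALYZE, FORMAT JSON) output for illustration.
explain_json = json.loads("""
[{"Plan": {"Node Type": "Index Scan",
           "Relation Name": "orders",
           "Total Cost": 8.44,
           "Actual Total Time": 0.052,
           "Plan Rows": 1,
           "Actual Rows": 3}}]
""")

plan = explain_json[0]["Plan"]
# A large gap between estimated and actual rows often signals stale
# statistics (run ANALYZE on the table) or a misestimated predicate.
row_estimate_error = plan["Actual Rows"] / plan["Plan Rows"]
print(plan["Node Type"], plan["Total Cost"], row_estimate_error)
```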
## index_usage_stats

Analyze how effectively indexes are being used.
Parameters:
- `schema` (string, optional): Limit to a specific schema
- `include_unused` (boolean, optional): Include unused indexes
Returns:
- Index hit ratios
- Unused indexes
- Missing index opportunities
Example:

```python
result = index_usage_stats(
    schema="public",
    include_unused=True,
)
```
Use Cases:
- Index maintenance
- Performance tuning
- Storage optimization
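The hit ratio reported here corresponds to the `pg_statio_user_indexes` counters: index blocks served from shared buffers (`idx_blks_hit`) versus read from disk (`idx_blks_read`). A sketch of the computation:

```python
def index_hit_ratio(idx_blks_hit: int, idx_blks_read: int) -> float:
    """Fraction of index block requests served from shared buffers."""
    total = idx_blks_hit + idx_blks_read
    if total == 0:          # index never touched; avoid division by zero
        return 0.0
    return idx_blks_hit / total

# A healthy OLTP workload typically stays above ~0.99.
print(round(index_hit_ratio(99_500, 500), 3))
```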
## suggest_indexes

AI-powered index recommendations using the DTA (Database Tuning Advisor) algorithm.
Parameters:
- `workload_queries` (list, optional): Queries to analyze
- `use_query_log` (boolean, optional): Use pg_stat_statements data
- `limit` (integer, optional): Maximum recommendations
Returns:
- Recommended indexes with impact estimates
- CREATE INDEX statements
- Cost/benefit analysis
Example:

```python
result = suggest_indexes(
    use_query_log=True,
    limit=5,
)
```
Use Cases:
- Performance optimization
- Index strategy planning
- Workload analysis
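Recommendations are worth ranking and reviewing before anything is applied. A sketch that sorts them by estimated benefit and prints the DDL — the `recommendations` field names below are assumptions based on the return values listed above:

```python
# Hypothetical result shape; the field names are assumptions.
recommendations = {
    "recommendations": [
        {"table_name": "orders", "columns": ["customer_id"],
         "estimated_improvement_pct": 42.0,
         "create_statement": "CREATE INDEX ON orders (customer_id)"},
        {"table_name": "events", "columns": ["occurred_at"],
         "estimated_improvement_pct": 12.5,
         "create_statement": "CREATE INDEX ON events (occurred_at)"},
    ]
}

# Apply highest-impact suggestions first.
ranked = sorted(recommendations["recommendations"],
                key=lambda r: r["estimated_improvement_pct"], reverse=True)
for rec in ranked:
    print(f'{rec["estimated_improvement_pct"]:5.1f}%  {rec["create_statement"]}')
```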
## hypothetical_index_test

Test index performance without actually creating indexes (requires HypoPG).
Parameters:
- `table_name` (string, required): Target table
- `column_names` (list, required): Columns for index
- `test_queries` (list, required): Queries to test
Returns:
- Performance improvements
- Index size estimates
- Cost comparison
Example:

```python
result = hypothetical_index_test(
    table_name="orders",
    column_names=["customer_id", "order_date"],
    test_queries=[
        "SELECT * FROM orders WHERE customer_id = 123",
        "SELECT * FROM orders WHERE order_date > '2025-01-01'",
    ],
)
```
Use Cases:
- Zero-risk index testing
- Index design validation
- Performance forecasting
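HypoPG injects the hypothetical index into the planner only, so the "improvement" is a comparison of estimated plan costs, not measured runtimes. A sketch of the before/after comparison (the input costs are illustrative values):

```python
def cost_improvement_pct(cost_without: float, cost_with: float) -> float:
    """Percentage reduction in estimated plan cost from a hypothetical index."""
    if cost_without <= 0:
        return 0.0
    return (cost_without - cost_with) / cost_without * 100.0

# e.g. sequential scan cost 2450.0 vs. hypothetical index scan cost 8.44
print(round(cost_improvement_pct(2450.0, 8.44), 1))
```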
## workload_analysis

Comprehensive analysis of database workload patterns.
Parameters:
- `time_range_hours` (integer, optional): Analysis period
- `include_query_patterns` (boolean, optional): Pattern analysis
- `include_lock_stats` (boolean, optional): Lock contention analysis
Returns:
- Query type distribution (SELECT/INSERT/UPDATE/DELETE)
- Peak load times
- Resource bottlenecks
- Lock contention hotspots
Example:

```python
result = workload_analysis(
    time_range_hours=24,
    include_query_patterns=True,
    include_lock_stats=True,
)
```
Use Cases:
- Capacity planning
- Performance baseline
- Architecture optimization
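The query-type distribution can be turned into percentages for reporting. A sketch, assuming the result exposes per-type call counts (the counts below are illustrative):

```python
# Hypothetical counts as workload_analysis might report them.
query_counts = {"SELECT": 8200, "INSERT": 1100, "UPDATE": 600, "DELETE": 100}

total = sum(query_counts.values())
distribution = {kind: count / total * 100 for kind, count in query_counts.items()}

# A read-heavy profile (SELECT >> writes) favors caching and covering
# indexes; a write-heavy one shifts focus to fewer indexes and HOT updates.
for kind, pct in distribution.items():
    print(f"{kind}: {pct:.1f}%")
```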
```python
# 1. Find the slowest queries
slow_queries = get_top_queries(sort_by="total_time", limit=10)

# Find high-frequency queries
frequent_queries = get_top_queries(sort_by="calls", limit=10)

# 2. Get the execution plan for the worst query
slow_sql = slow_queries["queries"][0]["query"]
plan = explain_query(sql=slow_sql, analyze=True, buffers=True)

# 3. AI-powered index suggestions
recommendations = suggest_indexes(use_query_log=True, limit=5)

# 4. Test each recommendation without creating anything
for rec in recommendations["recommendations"]:
    test = hypothetical_index_test(
        table_name=rec["table_name"],
        column_names=rec["columns"],
        test_queries=[slow_sql],
    )

# 5. Overall workload insights
workload = workload_analysis(time_range_hours=24, include_query_patterns=True)
```
```python
# 1. Top queries by total time
top_total = get_top_queries(sort_by="total_time", limit=5)

# 2. Index usage stats
indexes = index_usage_stats(include_unused=True)

# 3. Check for unused indexes
if indexes["unused_indexes"]:
    print("Unused indexes found:", indexes["unused_indexes"])
```
```python
# 1. Workload analysis over the past 7 days
workload = workload_analysis(time_range_hours=168)

# 2. Get recommendations
recommendations = suggest_indexes(use_query_log=True, limit=10)

# 3. Test the top three recommendations
for rec in recommendations["recommendations"][:3]:
    test = hypothetical_index_test(
        table_name=rec["table_name"],
        column_names=rec["columns"],
        test_queries=rec["sample_queries"],
    )
```
```python
# ✅ Good: analyze real execution
explain_query(sql="...", analyze=True, buffers=True)

# ❌ Avoid: planning estimates only
explain_query(sql="...", analyze=False)
```
```python
# ✅ Good: test with HypoPG first
hypothetical_index_test(...)

# ❌ Avoid: creating indexes blindly
```
```python
# Regular cleanup
stats = index_usage_stats(include_unused=True)
# Review stats["unused_indexes"] for removal
```
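The unused-index list can be turned into review-ready DDL. A sketch (field names are assumed; `DROP INDEX CONCURRENTLY` avoids blocking writes but cannot run inside a transaction):

```python
# Hypothetical stats shape; field names are assumptions.
stats = {"unused_indexes": [
    {"schema": "public", "index_name": "idx_orders_legacy", "size": "120 MB"},
    {"schema": "public", "index_name": "idx_users_tmp", "size": "8 MB"},
]}

# Generate statements for human review — never drop automatically:
# an index may back a unique constraint or serve rare but critical queries.
drops = [
    f'DROP INDEX CONCURRENTLY {ix["schema"]}.{ix["index_name"]};'
    for ix in stats["unused_indexes"]
]
for stmt in drops:
    print(stmt)
```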
```python
# Prioritize by total time (cumulative impact),
# not just slow individual queries
get_top_queries(sort_by="total_time", limit=5)
```
- Core Database Tools - Schema and health
- Monitoring & Alerting - Real-time metrics
- Extension Setup - pg_stat_statements, HypoPG
See Home for more tool categories.