A hands-on PostgreSQL learning environment using Go, pgx, and sqlc. Built for engineers who want to master PostgreSQL's real-world features.
```bash
# Start PostgreSQL and pgAdmin
make up

# Run migrations
make migrate

# Generate sqlc code
make generate

# Seed with 100k+ rows of realistic data
make seed

# Run examples
go run examples/01_basic_queries.go
go run examples/02_json_queries.go
go run examples/03_batch_operations.go
go run examples/04_advanced_patterns.go
```
Access pgAdmin at http://localhost:5050 (grug@cave.com / grug_password)
- Connection pooling with pgx
- Prepared statements
- JOIN queries (INNER, LEFT, complex)
- Window functions
- Aggregations with ROLLUP
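As a taste of the last two items, here is a sketch of a window function and a ROLLUP aggregation. Column names (`site_id`, `name`, `price`, `asset_type`) are illustrative; check the actual schema in `migrations/`:

```sql
-- Rank assets by price within each site (window function):
SELECT site_id, name, price,
       rank() OVER (PARTITION BY site_id ORDER BY price DESC) AS price_rank
FROM assets;

-- Subtotals per site and asset type, plus a grand total (ROLLUP):
SELECT site_id, asset_type, count(*) AS n, sum(price) AS total
FROM assets
GROUP BY ROLLUP (site_id, asset_type);
```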
- Querying nested JSON with operators (`->`, `->>`, `#>`, `#>>`)
- JSON containment (`@>`, `<@`)
- Array operations with `jsonb_array_elements`
- GIN indexes for JSON
- Building JSON responses with `jsonb_build_object`
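A quick sketch of these operators in action (the `metadata` paths are illustrative; the examples use the real column names):

```sql
-- ->> extracts as text; #>> walks a path and extracts as text:
SELECT metadata->'contact'->>'email' AS email,
       metadata#>>'{facility,type}'  AS facility_type
FROM sites;

-- Containment: find rows whose metadata includes this sub-document.
-- A GIN index makes this fast:
--   CREATE INDEX idx_sites_metadata ON sites USING GIN (metadata);
SELECT id FROM sites WHERE metadata @> '{"type": "warehouse"}';

-- Build a JSON response directly in SQL:
SELECT jsonb_build_object('id', id, 'name', name) FROM sites;
```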
- Multi-value INSERT (fast for <1000 rows)
- `COPY FROM` for bulk loading (100k+ rows/sec)
- Batch updates with `unnest()`
- Pipeline mode for maximum throughput
- Temporary tables for complex updates
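The `unnest()` trick updates many rows in one round trip by joining against parallel arrays passed as parameters from Go. A sketch (assumes `assets` has `id` and `status` columns):

```sql
-- $1 and $2 are parallel arrays: ids and their new statuses.
UPDATE assets AS a
SET status = u.status
FROM unnest($1::bigint[], $2::text[]) AS u(id, status)
WHERE a.id = u.id;
```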
- Table partitioning by date range
- LISTEN/NOTIFY for real-time events
- Advisory locks for distributed coordination
- Materialized views with concurrent refresh
- Query optimization techniques
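Advisory locks and LISTEN/NOTIFY need no schema at all, which makes them easy to try in two psql sessions. The lock key and channel name below are arbitrary examples:

```sql
-- Advisory lock: only one session holding key 42 proceeds at a time.
SELECT pg_advisory_lock(42);
-- ... critical section ...
SELECT pg_advisory_unlock(42);

-- Non-blocking variant: returns false immediately if the lock is taken.
SELECT pg_try_advisory_lock(42);

-- LISTEN/NOTIFY: run LISTEN in one session, NOTIFY in another.
LISTEN asset_events;
NOTIFY asset_events, '{"asset_id": 7, "event": "offline"}';
```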
```
postgres_playground/
├── docker-compose.yml     # PostgreSQL + pgAdmin
├── Makefile               # Common tasks
├── migrations/            # Schema versioning
│   ├── 001_initial_schema.up.sql
│   └── 001_initial_schema.down.sql
├── queries/               # sqlc SQL files
│   ├── sites.sql
│   └── assets.sql
├── pkg/database/          # Connection management
├── internal/db/           # Generated sqlc code
├── cmd/
│   ├── migrate/           # Migration runner
│   └── seed/              # Data generator
└── examples/              # Learning examples
```
- Physical locations with coordinates
- Messy JSONB metadata (simulating real-world data)
- GIN indexes for JSON queries
- Hardware at sites (routers, switches, sensors)
- Complex JSONB for config and telemetry
- Foreign key to sites with CASCADE delete
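A rough sketch of the two tables described above; the authoritative schema lives in `migrations/001_initial_schema.up.sql`, so treat the column names here as illustrative:

```sql
CREATE TABLE sites (
    id        bigserial PRIMARY KEY,
    name      text NOT NULL,
    latitude  double precision,
    longitude double precision,
    metadata  jsonb NOT NULL DEFAULT '{}'   -- intentionally messy in seed data
);
CREATE INDEX idx_sites_metadata ON sites USING GIN (metadata);

CREATE TABLE assets (
    id        bigserial PRIMARY KEY,
    site_id   bigint NOT NULL REFERENCES sites(id) ON DELETE CASCADE,
    name      text NOT NULL,
    config    jsonb NOT NULL DEFAULT '{}',
    telemetry jsonb NOT NULL DEFAULT '{}'
);
```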
Our seed data creates intentionally messy JSON to simulate production systems:
```json
// Old format from legacy system
{"type": "warehouse", "manager": "John Smith", "phone": "+1-555-123-4567", "legacy_id": 12345}

// New format with deep nesting
{"facility": {"type": "warehouse", "classification": "A"}, "contact": {"name": "Jane Doe", "email": "jane@example.com"}}

// Mixed conventions
{"facilityType": "WAREHOUSE", "Manager": "Bob", "contact_phone": "+1-555-555-5555"}
```
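One way to read a field consistently across all three formats is to try each location in order with `COALESCE`. A sketch (the `sites.metadata` paths mirror the examples above):

```sql
SELECT id,
       lower(COALESCE(
           metadata->>'type',              -- old flat format
           metadata#>>'{facility,type}',   -- new nested format
           metadata->>'facilityType'       -- mixed conventions
       )) AS site_type
FROM sites;
```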
- Indexes: Use partial indexes for common WHERE clauses
- COPY: Fastest bulk insert method (see benchmarks in examples)
- Prepared Statements: Reuse for repeated queries
- Connection Pooling: Configure based on workload
- EXPLAIN ANALYZE: Your best friend for optimization
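For the first tip, a partial index covers only the rows a hot query actually touches, so it stays small and cheap to maintain. A sketch, assuming an `assets.status` column:

```sql
-- Index only active assets; queries filtering on status = 'active' can use it.
CREATE INDEX idx_assets_active ON assets (site_id) WHERE status = 'active';

-- Verify the planner actually picks it up:
EXPLAIN ANALYZE
SELECT * FROM assets WHERE status = 'active' AND site_id = 1;
```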
```sql
-- Slow queries
SELECT * FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 10;

-- Connection status
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;

-- Index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan;
```
The Docker Compose setup includes performance tuning:
- Shared memory and work_mem configured
- pg_stat_statements enabled
- Connection limits set appropriately
- Experiment with the examples
- Check query plans with EXPLAIN ANALYZE
- Try different index strategies
- Build your own queries in `queries/`
- Monitor performance with pg_stat_statements
Following the grug manifesto:
- SQL is SQL (don't hide it)
- Understand what's happening
- Measure everything
- Keep it simple
Happy learning! 🐘