A Python-based video summarization tool that extracts contours from video frames to create condensed summaries. Perfect for analyzing surveillance footage, time-lapse videos, or any static camera recording where you want to extract and visualize movement over time.
- Movement Detection: Automatically detects and extracts moving objects from static camera footage
- Layer-Based Processing: Groups related movements across frames into coherent layers
- Heatmap Generation: Visualizes areas of activity in the video
- Configurable: Extensive configuration options for fine-tuning detection sensitivity
- Efficient: Processes video faster than real-time on modern hardware
- Caching: Saves intermediate results for faster re-processing with different parameters
```bash
# Clone the repository
git clone https://github.com/Askill/Video-Summary.git
cd Video-Summary

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install system dependencies (Linux)
sudo apt-get install ffmpeg libsm6 libxext6 libxrender-dev
```
For a consistent environment without system dependency issues:
```bash
# Build the Docker image
docker build -t video-summary .

# Run with Docker
docker run -v $(pwd)/input:/app/input -v $(pwd)/output:/app/output video-summary /app/input/video.mp4 /app/output

# Or use Docker Compose
docker-compose run --rm video-summary /app/input/video.mp4 /app/output
```
```bash
# Process a video with default settings
python main.py input_video.mp4 output_dir

# Use custom configuration
python main.py input_video.mp4 output_dir config.json

# Enable verbose logging
python main.py input_video.mp4 output_dir --verbose
```
A 15-second excerpt of a 2-minute overlaid synopsis of a 2.5-hour video from a campus webcam.
The heatmap shows areas of activity throughout the video, with brighter regions indicating more movement.
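For intuition, such a heatmap can be produced by accumulating binary motion masks across all frames and mapping the counts to a color scale. The sketch below uses standard OpenCV/NumPy calls and is illustrative only; it is not the implementation in HeatMap.py:

```python
# Illustrative sketch only -- not the actual HeatMap.py implementation.
import cv2
import numpy as np

def build_heatmap(motion_masks, frame_shape):
    """Accumulate binary motion masks and render them as a color heatmap."""
    accumulator = np.zeros(frame_shape[:2], dtype=np.float64)
    for mask in motion_masks:  # each mask: uint8 array, 255 where movement was detected
        accumulator += mask.astype(np.float64) / 255.0
    # Normalize counts to 0-255 so the busiest regions come out brightest
    normalized = cv2.normalize(accumulator, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(normalized, cv2.COLORMAP_JET)
```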
Video-Summary supports both JSON and YAML configuration files. YAML is recommended for its readability and support for comments.
```yaml
# Detection sensitivity
min_area: 300          # Minimum contour area in pixels
max_area: 900000       # Maximum contour area in pixels
threshold: 7           # Movement detection sensitivity (lower = more sensitive)

# Processing parameters
resizeWidth: 700       # Processing width (smaller = faster but less accurate)
videoBufferLength: 250 # Frame buffer size

# Layer management
maxLayerLength: 5000   # Maximum frames per layer
minLayerLength: 40     # Minimum frames per layer
tolerance: 20          # Pixel distance for grouping contours
ttolerance: 50         # Frame gap tolerance

# Advanced
LayersPerContour: 220  # Max layers per contour
avgNum: 10             # Frame averaging (higher = less noise, slower)
```
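JSON configuration files use the same keys; the YAML above translates one-for-one:

```json
{
  "min_area": 300,
  "max_area": 900000,
  "threshold": 7,
  "resizeWidth": 700,
  "videoBufferLength": 250,
  "maxLayerLength": 5000,
  "minLayerLength": 40,
  "tolerance": 20,
  "ttolerance": 50,
  "LayersPerContour": 220,
  "avgNum": 10
}
```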
Use the provided configuration profiles in the configs/ directory:
```bash
# Default balanced settings
python main.py video.mp4 output configs/default.yaml

# High sensitivity - detect smaller movements
python main.py video.mp4 output configs/high-sensitivity.yaml

# Low sensitivity - outdoor scenes, reduce noise
python main.py video.mp4 output configs/low-sensitivity.yaml

# Fast processing - optimized for speed
python main.py video.mp4 output configs/fast.yaml
```
Override any configuration parameter using environment variables:
```bash
export VIDEO_SUMMARY_THRESHOLD=10
export VIDEO_SUMMARY_MIN_AREA=500
python main.py video.mp4 output
```
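A plausible resolution scheme strips the `VIDEO_SUMMARY_` prefix and lower-cases the remainder to find the matching config key. The `apply_env_overrides` helper below is hypothetical; the actual logic lives in Config.py and may differ:

```python
# Hypothetical sketch of the override convention -- Config.py may differ.
import os

PREFIX = "VIDEO_SUMMARY_"

def apply_env_overrides(config: dict) -> dict:
    """Overlay VIDEO_SUMMARY_* environment variables onto a loaded config dict."""
    for name, value in os.environ.items():
        if not name.startswith(PREFIX):
            continue
        key = name[len(PREFIX):].lower()  # e.g. VIDEO_SUMMARY_MIN_AREA -> min_area
        if key in config:
            config[key] = type(config[key])(value)  # cast to the existing value's type
    return config
```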
| Parameter | Description | Default |
|---|---|---|
| `min_area` | Minimum contour area in pixels (smaller ignored) | 300 |
| `max_area` | Maximum contour area in pixels (larger ignored) | 900000 |
| `threshold` | Luminance difference threshold for movement detection | 7 |
| `resizeWidth` | Video is scaled to this width internally for processing | 700 |
| `maxLayerLength` | Maximum length of a layer in frames | 5000 |
| `minLayerLength` | Minimum length of a layer in frames | 40 |
| `tolerance` | Max distance (pixels) between contours to aggregate into a layer | 20 |
| `ttolerance` | Number of frames movement can be apart before a new layer is created | 50 |
| `videoBufferLength` | Buffer length of the `VideoReader` component | 250 |
| `LayersPerContour` | Number of layers a single contour can belong to | 220 |
| `avgNum` | Number of frames to average before calculating the difference | 10 |
Note: A higher `avgNum` is computationally expensive but is needed in outdoor scenarios with clouds, leaves moving in the wind, etc.
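To make the interplay of `threshold`, `min_area`, `max_area`, and `avgNum` concrete, here is a minimal frame-differencing sketch in OpenCV; it illustrates the general technique, not the exact code in ContourExctractor.py:

```python
# Illustrative frame-differencing sketch -- not the exact ContourExctractor.py code.
from collections import deque
import cv2
import numpy as np

def detect_contours(frames, threshold=7, min_area=300, max_area=900000, avg_num=10):
    """Yield per-frame lists of moving contours from a stream of BGR frames."""
    history = deque(maxlen=avg_num)  # rolling window for the background average
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        history.append(gray.astype(np.float32))
        # Averaging several frames suppresses flicker from clouds, leaves, etc.
        background = np.mean(history, axis=0).astype(np.uint8)
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Keep only contours whose area falls inside the configured bounds
        yield [c for c in contours if min_area <= cv2.contourArea(c) <= max_area]
```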
Test Configuration:
- Hardware: Ryzen 3700X (8 cores, 16 threads), 32GB RAM
- Video: 10-minute clip
- Processing Speed: ~20 seconds per minute of video (1:3 ratio)
- Memory Usage: Max 6GB RAM
Component Breakdown:
- CE = Contour Extractor
- LF = Layer Factory
- LM = Layer Manager
- EX = Exporter
```text
Video-Summary/
├── Application/             # Core processing modules
│   ├── Config.py            # Configuration management
│   ├── ContourExctractor.py # Movement detection
│   ├── LayerFactory.py      # Layer extraction
│   ├── LayerManager.py      # Layer optimization
│   ├── Exporter.py          # Output generation
│   ├── VideoReader.py       # Video I/O
│   ├── HeatMap.py           # Heatmap generation
│   ├── Importer.py          # Cache loading
│   ├── Layer.py             # Layer data structure
│   └── Logger.py            # Logging utilities
├── main.py                  # CLI entry point
├── pyproject.toml           # Package configuration
└── requirements.txt         # Dependencies
```
1. Video Reading: Load and preprocess video frames
2. Contour Extraction: Detect movement by comparing consecutive frames
3. Layer Creation: Group related contours across frames (see the sketch after this list)
4. Layer Management: Filter and optimize layers based on configuration
5. Export: Generate output video with overlaid movement and heatmap
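Step 3 can be pictured as follows: a contour joins an existing layer when its center is within `tolerance` pixels of the layer's most recent contour and the frame gap is at most `ttolerance`; otherwise it seeds a new layer. The `group_into_layers` function below is a simplified, hypothetical sketch; LayerFactory.py's actual logic is more involved:

```python
# Simplified, hypothetical grouping sketch -- LayerFactory.py is more involved.
import math

def group_into_layers(detections, tolerance=20, ttolerance=50):
    """detections: list of (frame_index, (cx, cy)) contour centers, in frame order."""
    layers = []  # each layer is a list of (frame_index, center) tuples
    for frame_idx, center in detections:
        for layer in layers:
            last_frame, last_center = layer[-1]
            close_in_space = math.dist(center, last_center) <= tolerance
            close_in_time = frame_idx - last_frame <= ttolerance
            if close_in_space and close_in_time:
                layer.append((frame_idx, center))
                break
        else:  # no existing layer matched -> start a new one
            layers.append([(frame_idx, center)])
    return layers
```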
We use modern Python development tools:
- Black: Code formatting
- isort: Import sorting
- flake8: Linting
- mypy: Type checking
- pre-commit: Automated checks
```bash
# Install development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run formatting
black .
isort .

# Run linting
flake8 .
mypy Application/ main.py
```
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=Application --cov-report=html
```
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
The original Creative Commons-licensed documentation can be found in licens.txt.
- Built with OpenCV, NumPy, and imageio
- Inspired by video synopsis research in computer vision
For questions or issues, please open an issue on GitHub.
Note: TensorFlow support is optional and not required for core functionality. The project works perfectly fine without GPU acceleration, though processing times will be longer for large videos.