Add comprehensive notification system with Apprise integration #924
Conversation
This commit introduces a complete notification system that allows users to receive alerts about research events via 70+ services (Discord, Slack, Email, Telegram, etc.) using the Apprise library.

## Core Features

### Notification Service (`src/local_deep_research/notifications/`)
- Low-level Apprise wrapper with exponential backoff retry (3 attempts)
- Cached Apprise instances for performance (max 1000 entries, FIFO eviction)
- Event-based notifications via template system
- URL validation and masking for security
- Service URL testing endpoint

### Notification Manager
- High-level manager with database integration
- Thread-safe settings snapshot pattern for background tasks
- Shared singleton rate limiter across all instances
- Per-event notification preferences (research_completed, research_failed, etc.)
- Granular user control over notification types

### Rate Limiting
- In-memory rate limiter with configurable hourly/daily limits
- Thread-safe, with automatic cleanup of inactive users
- Default: 10/hour, 50/day per user
- Prevents notification spam and abuse

### Settings Integration
- 11 new settings in default_settings.json:
  - notifications.enabled (master toggle)
  - notifications.service_url (encrypted Apprise URLs)
  - notifications.rate_limit_per_hour / per_day
  - notifications.on_research_completed / failed / subscription_update
  - app.external_url (for generating notification links)
- Encrypted storage of service URLs in user databases
- Settings snapshot for thread-safe background access

### Queue Integration
- Research completion/failure notifications in processor_v2.py
- Password-based session access for encrypted databases
- Comprehensive docstrings explaining password requirements
- Removed retry logic that held database sessions during sleep

## Improvements & Fixes

### Settings Manager Enhancements
- Added debug logging for orphaned settings
- Improved N+1 query prevention with prefetch
- Fixed settings merge logic documentation

### Defensive Programming
- Apprise cache size limit (1000 entries) with FIFO eviction
- Prevents unbounded memory growth in long-running deployments
- Debug logging for cache evictions

### Code Quality
- All exception handlers use logger.exception() (pre-commit compliance)
- Comprehensive docstrings with examples
- Thread-safe patterns throughout
- Security: URL masking in logs, CSRF protection, encrypted credentials

## Documentation
- docs/NOTIFICATIONS.md - User setup guide
- docs/NOTIFICATION_FLOW.md - Technical implementation details
- Inline code documentation with examples

## Testing
- tests/notifications/ - Unit tests for service, manager, templates
- tests/settings/test_boolean_parsing.py - HTML checkbox semantics tests

## Migration Notes
- All notification settings are optional with sensible defaults
- Existing users are unaffected (notifications disabled by default)
- No database migrations required (settings stored as key-value pairs)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
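The "exponential backoff retry (3 attempts)" send path described above can be sketched as follows. The function name and its parameters are illustrative, not the PR's actual code; `notify` stands in for a call such as `apprise.Apprise().notify`:

```python
import time

# Illustrative sketch: retry a notification send up to 3 times,
# backing off 1s, 2s, ... between attempts.
def send_with_backoff(notify, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call notify() until it returns True; give up after max_attempts."""
    for attempt in range(max_attempts):
        if notify():
            return True
        if attempt < max_attempts - 1:
            # Exponential backoff: base_delay * 2**attempt seconds.
            sleep(base_delay * (2**attempt))
    return False
```

With Apprise this might be invoked as `send_with_backoff(lambda: apobj.notify(title=title, body=body))`, where `apobj` is an `apprise.Apprise()` instance loaded via `apobj.add(url)`.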
GitHub Advanced Security identified that exception details (via str(e)) were being exposed to external users in the /api/notifications/test-url endpoint. This could potentially leak sensitive system internals.

Changes:
- Removed str(e) from the error response in api_test_notification_url()
- Changed from: "Failed to test notification: {str(e)}"
- Changed to: "Failed to test notification service. Check logs for details."
- Exception details are still logged server-side via logger.exception()

Security: Prevents information disclosure while maintaining debugging capability via server logs.

Resolves: GitHub Advanced Security alerts #365 and #366

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
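The pattern this fix applies can be sketched as below; the route handler and helper names are illustrative, not the project's actual code. Full exception details go to the server log, while the caller only sees a generic message:

```python
import logging

logger = logging.getLogger(__name__)

# Illustrative sketch: never include str(e) in the response body.
def api_test_notification_url(run_test):
    try:
        run_test()
        return {"success": True}, 200
    except Exception:
        # Traceback goes to the server log only; nothing sensitive
        # reaches the external user.
        logger.exception("Notification URL test failed")
        return {
            "error": "Failed to test notification service. Check logs for details."
        }, 500
```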
...hrough an exception

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Ruff identified an unused exception variable 'e' in the test_service method's exception handler. The variable was captured but never used, since exception details are not exposed to users (security best practice).

Changes:
- Changed from: except Exception as e:
- Changed to: except Exception:

This aligns with the security fix in settings_routes.py and follows the best practice of not exposing exception details to external users.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
The CI was timing out after 600s because the Ollama integration tests were attempting to initialize OllamaEmbeddings and Ollama LLM instances even when Ollama was not running. Although individual tests had skipif decorators, fixtures were still being evaluated before the skip check.

Solution: Added a module-level pytestmark to skip the entire test module when Ollama is not running, preventing any fixture initialization. The _is_ollama_running() check uses a 1-second timeout, so the skip decision happens quickly without hanging.

This resolves the timeout issue in the 'Full Test Suite' CI job.
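The module-level skip described above might look like the following sketch. The host/port and the socket-based liveness probe are assumptions (11434 is Ollama's default port); a module-level `pytestmark` is evaluated at collection time, before any fixture runs:

```python
import socket

import pytest

# Illustrative liveness probe with a short timeout, so the skip
# decision is fast even when nothing is listening.
def _is_ollama_running(host="127.0.0.1", port=11434, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Skips every test in this module, so no fixture is ever initialized
# when Ollama is down.
pytestmark = pytest.mark.skipif(
    not _is_ollama_running(),
    reason="Ollama is not running; skipping all tests in this module",
)
```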
Do we not want this to be enabled by default?
Maybe this should be False by default?
I feel like all the caching logic is probably not needed. Are we realistically ever going to have enough instances that it's a problem to just leave them sitting around? Conversely, does creating a new instance every time really have a significant amount of overhead? I'm not saying you need to remove it, it's just something to consider.
Could we use Tenacity to handle this like we do for the search engine rate limiting system?
Yes, it could be an option, but I think for this use case a simple backoff suffices. A notification doesn't carry the same weight as the search engine does for research, and if a notification fails, it's not something that breaks functionality.
Is there a notification for queuing?
I don't remember seeing API quota and auth issue notification toggles in the settings... Should those be there?
We could use an actual template system here if we wanted to... We already use Jinja for Flask
Do you think it's needed? We don't do any complicated conditionals or loops or HTML formatting. I can add it in if you think it's better for consistency.
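The plain-string templating being discussed might look like the sketch below; the event names follow this PR, but the template strings and helper are illustrative. Since there are no conditionals, loops, or HTML, `str.format` covers the whole job:

```python
# Illustrative (title, body) templates per event type.
TEMPLATES = {
    "research_completed": (
        "Research completed: {query}",
        "Your research '{query}' finished. View it at {link}",
    ),
    "research_failed": (
        "Research failed: {query}",
        "Your research '{query}' failed: {error}",
    ),
}

def render(event, **context):
    """Return (title, body) for an event, filled in from context."""
    title, body = TEMPLATES[event]
    return title.format(**context), body.format(**context)
```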
Changes:
- Remove Apprise instance caching in favor of temporary instances that are automatically garbage collected
- Simplify NotificationService by removing _apprise_cache logic and cache size management
- Update default values for event-specific notification settings (most now default to false)
- Add new notification event types: research_queued, api_quota_warning, auth_issue
- Integrate error notifications in ErrorReporter for API quota and authentication issues
- Improve documentation in NOTIFICATIONS.md and NOTIFICATION_FLOW.md with clearer examples and reduced verbosity
- Remove unused utils.py module from notifications package

Rationale: For small deployments (~5 users), complex caching adds unnecessary complexity without meaningful performance benefits. Temporary Apprise instances are lightweight and automatically cleaned up by Python's garbage collector, making the code simpler and easier to maintain.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
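The per-event preference check implied by this commit can be sketched as follows. The setting keys follow the notifications.* names used in this PR, but the defaults shown and the helper itself are illustrative:

```python
# Illustrative per-event defaults; most events default to False
# (e.g. to avoid noisy or looping notifications).
EVENT_DEFAULTS = {
    "notifications.on_research_completed": True,
    "notifications.on_research_failed": True,
    "notifications.on_research_queued": False,
    "notifications.on_api_quota_warning": False,
    "notifications.on_auth_issue": False,
}

def should_notify(event, settings):
    """Check the master toggle, then the per-event toggle (default False)."""
    if not settings.get("notifications.enabled", False):
        return False
    key = f"notifications.on_{event}"
    return bool(settings.get(key, EVENT_DEFAULTS.get(key, False)))
```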
The recent refactoring (1c55d27) simplified the notification caching approach by removing the _apprise_cache and creating temporary instances. This commit updates the tests to match the new implementation:
- Remove the assertion for the non-existent _apprise_cache attribute
- Rename test_send_caches_apprise_instances to test_send_creates_temporary_instances and verify the new behavior (creating temporary instances per send)
- Rename test_should_notify_default_true to test_should_notify_default_false to match the intentional default behavior change (False to avoid infinite loops)

All 92 notification tests now pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Summary
This PR adds a comprehensive notification system to Local Deep Research, enabling users to receive notifications about research events through multiple services via Apprise integration.
Key Features
Changes
Core Implementation:
- src/local_deep_research/notifications/ - Complete notification module
  - manager.py - High-level NotificationManager with settings integration
  - service.py - Low-level NotificationService with Apprise wrapper
  - templates.py - Notification templates for different event types
  - rate_limiter.py - Thread-safe per-user rate limiting
  - exceptions.py - Custom exception types
  - utils.py - URL building utilities

Integration Points:
- src/local_deep_research/web/queue/processor_v2.py - Queue processor integration for research events
- src/local_deep_research/web/services/research_service.py - Research completion/failure hooks
- src/local_deep_research/settings/ - Settings integration for notification configuration

Web Interface:
- src/local_deep_research/web/routes/notifications.py - REST API endpoints for notification management
- src/local_deep_research/web/templates/notifications.html - Settings UI for configuring notifications
- src/local_deep_research/web/static/js/notifications.js - Frontend JavaScript for the notifications page

Testing:
- tests/notifications/ - Comprehensive test suite

Documentation:
- docs/NOTIFICATIONS.md - Complete user guide with examples
- docs/NOTIFICATION_FLOW.md - Detailed trace of notification flow through the system
- CLAUDE.md - Updated with notification architecture patterns

Settings
New settings in the notifications.* namespace:
- service_url - Comma-separated Apprise URLs (encrypted at rest)
- on_research_completed - Enable research completion notifications
- on_research_failed - Enable research failure notifications
- on_subscription_update - Enable subscription update notifications
- on_subscription_error - Enable subscription error notifications
- rate_limit_per_hour - Max notifications per hour per user (default: 10)
- rate_limit_per_day - Max notifications per day per user (default: 50)

Architecture Highlights
Security:
Thread Safety:
Performance:
Testing
All tests pass:
Test coverage includes:
Breaking Changes
None - this is a new feature with no changes to existing functionality.
Migration Guide
No migration needed. Users can optionally configure notifications through the settings UI or programmatically via the API.
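For illustration, the defaults described in this PR might look like the following default_settings.json fragment. The flat key layout and the app.external_url value are assumptions here; the actual file structure may differ:

```json
{
  "notifications.enabled": false,
  "notifications.service_url": "",
  "notifications.on_research_completed": true,
  "notifications.on_research_failed": true,
  "notifications.rate_limit_per_hour": 10,
  "notifications.rate_limit_per_day": 50,
  "app.external_url": "http://localhost:5000"
}
```

With notifications.enabled defaulting to false, existing users see no behavior change until they opt in.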
🤖 Generated with Claude Code
Co-Authored-By: Claude noreply@anthropic.com