An audiobook data aggregation API, combining multiple sources of data into one, consistent source.
- About
- Getting Started
- Running the tests
- Error Handling
- Usage
- Deployment
- Built Using
- TODO
- Contributing
- Authors
- Acknowledgments
Nexus - noun: a connection or series of connections linking two or more things.
Looking around for audiobook metadata, we realized there's no solid (or open) single source of truth. Further, some solutions offered community-curated data, only to later close their API. As such, this project was created to let developers include multiple sources of audiobook content in one response.
This project also makes integration into existing media servers very streamlined. Since all data can be returned with 1-2 API calls, there's little to no overhead processing on the client side. This enables rapid development of stable client plugins. Audnexus serves as a provider in the interim while a community-driven audiobook database takes shape; once such a database exists, audnexus will act as a seeder for it.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
There are 3 ways to deploy this project:
- Coolify - Self-hosted PaaS platform with automatic deployments from Git
- Docker Swarm - Docker Compose stack with Traefik reverse proxy
- Directly, via `pnpm run` or `pm2`

Prerequisites:

- Mongo 4 or greater
- Node/NPM 16 or greater
- Redis
- Registered Audible device keys, `ADP_TOKEN` and `PRIVATE_KEY`, for chapters. You will need Python and `audible` for this. More on that here
- Install Mongo, Node and Redis on your system
- `pnpm install` from the project directory to get dependencies
Required environment variables:
- `MONGODB_URI`: MongoDB connection URL (e.g., `mongodb://localhost:27017/audnexus`)
Optional environment variables:
- Set `ADP_TOKEN` and `PRIVATE_KEY` for the chapters endpoint
- Configure other optional variables as described in the Deployment section
Then start the server:
pnpm run watch-debug
Test an API call with
http://localhost:3000/books/${ASIN}
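Or, from a script: a minimal smoke test using the global `fetch` (assumes Node 18+ for `fetch`; the ASIN below is a placeholder to replace with a real book ASIN, and the response is logged as-is rather than assuming a particular schema):

```typescript
// Smoke test against a local audnexus instance.
// Run with ts-node, or compile with tsc and run with node (18+).
const asin = process.argv[2] ?? 'B0XXXXXXXX' // placeholder ASIN

async function main(): Promise<void> {
  const res = await fetch(`http://localhost:3000/books/${asin}`)
  if (!res.ok) {
    // Errors use the structured envelope described under Error Handling below
    console.error(`HTTP ${res.status}:`, await res.text())
    return
  }
  console.log(await res.json())
}

main().catch(console.error)
```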
Tests for this project use the Jest framework. They can be run locally in a dev environment:
pnpm test
After the tests have run, you may also browse the test coverage, which is generated in `coverage/lcov-report/index.html` under the project directory.
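For orientation, the suite follows standard Jest conventions. A hypothetical test in that style (the `isValidAsin` helper is illustrative, not actual project code):

```typescript
import { describe, expect, it } from '@jest/globals'

// Hypothetical helper: Audible ASINs are 10 alphanumeric characters.
const isValidAsin = (asin: string): boolean => /^[A-Z0-9]{10}$/.test(asin)

describe('isValidAsin', () => {
  it('accepts a well-formed ASIN', () => {
    expect(isValidAsin('B017V4U2VQ')).toBe(true)
  })

  it('rejects a malformed ASIN', () => {
    expect(isValidAsin('not-an-asin')).toBe(false)
  })
})
```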
The API returns structured error responses with error codes, HTTP status codes, and detailed messages.
All errors follow this structure. The `details` field is optional and may be omitted or set to `null`:
{
"error": {
"code": "ERROR_CODE",
"message": "Human-readable error message",
"details": null
}
}

| Code | HTTP Status | Description |
|---|---|---|
| `CONTENT_TYPE_MISMATCH` | 400 | Content type doesn't match the requested endpoint (e.g., podcast ASIN on /books endpoint) |
| `VALIDATION_ERROR` | 422 | Schema validation failed |
| `REGION_UNAVAILABLE` | 404 | Content not available in the requested region |
| `NOT_FOUND` | 404 | Generic not found error |
| `BAD_REQUEST` | 400 | Bad request |
| `RATE_LIMIT_EXCEEDED` | 429 | Too many requests; client has exceeded the allowed request rate |
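For client authors, the envelope and codes above translate directly into a small TypeScript type, and `error.code` is the stable field to branch on. A sketch (the type, base URL, and retry policy are illustrative assumptions, not part of the API):

```typescript
// Client-side shape of the documented error envelope (illustrative).
interface ApiError {
  error: {
    code: string // e.g. 'NOT_FOUND', 'RATE_LIMIT_EXCEEDED'
    message: string
    details?: Record<string, unknown> | null
  }
}

// Branch on the error code; for 429s, honor details.retryAfterSeconds
// when present (see the Rate Limit Exceeded example below).
async function getBook(asin: string): Promise<unknown> {
  const res = await fetch(`https://api.audnex.us/books/${asin}`) // base URL is an assumption
  if (res.ok) return res.json()

  const body = (await res.json()) as ApiError
  switch (body.error.code) {
    case 'RATE_LIMIT_EXCEEDED': {
      const wait = Number(body.error.details?.retryAfterSeconds ?? 60)
      await new Promise((resolve) => setTimeout(resolve, wait * 1000))
      return getBook(asin) // naive retry; real clients should cap attempts
    }
    case 'NOT_FOUND':
    case 'REGION_UNAVAILABLE':
      return null
    default:
      throw new Error(`${body.error.code}: ${body.error.message}`)
  }
}
```

Branching on `code` rather than parsing `message` keeps a client stable even if message wording changes.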
Content Type Mismatch (Podcast on Book endpoint):
{
"error": {
"code": "CONTENT_TYPE_MISMATCH",
"message": "Item is a podcast, not a book. ASIN: B017V4U2VQ",
"details": {
"asin": "B017V4U2VQ",
"requestedType": "book",
"actualType": "PodcastParent"
}
}
}

Region Unavailable:
{
"error": {
"code": "REGION_UNAVAILABLE",
"message": "Item not available in region 'us' for ASIN: B12345",
"details": {
"asin": "B12345"
}
}
}

Validation Error:
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Schema validation failed for request",
"details": {
"field": "asin",
"issue": "Invalid ASIN format"
}
}
}

Not Found:
{
"error": {
"code": "NOT_FOUND",
"message": "Book with ASIN B12345 not found",
"details": {
"asin": "B12345",
"endpoint": "/books"
}
}
}

Bad Request:
{
"error": {
"code": "BAD_REQUEST",
"message": "Invalid request parameters",
"details": {
"parameter": "region",
"issue": "Unsupported region 'xx'"
}
}
}

Rate Limit Exceeded:
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "Too many requests β client has exceeded allowed request rate",
"details": {
"retryAfterSeconds": 60
}
}
}

API usage documentation can be read here: https://audnex.us/
Pre-rendered HTML documentation is also included in `docs/index.html`.
HTML can be re-generated from the spec using:
pnpm run build-docs
Audnexus can be deployed to Coolify, a self-hosted open-source alternative to Vercel.
Setup Steps:
- Connect repository to Coolify:
  - In Coolify, create a new application
  - Select "Git" and connect your GitHub repository
  - Select the branch (e.g., `main` or `develop`)
- Configure environment variables:
  - Set up the following environment variables in Coolify:

Core Configuration:

- `MONGODB_URI`: MongoDB connection URL (e.g., `mongodb://mongo:27017/audnexus`) [required]
- `REDIS_URL`: Redis connection URL (e.g., `redis://redis:6379`) [optional]
- `HOST`: Server host address (default: `0.0.0.0`)
- `PORT`: Server port (default: `3000`)
- `LOG_LEVEL`: Log level - `trace`, `debug`, `info`, `warn`, `error`, `fatal` (default: `info`)
- `TRUSTED_PROXIES`: Comma-separated list of trusted proxy IPs/CIDR ranges (optional)
- `DEFAULT_REGION`: Default region for batch processing (default: `us`)
Audible API Configuration:
- `ADP_TOKEN`: Audible ADP_TOKEN value (optional, for chapters endpoint)
- `PRIVATE_KEY`: Audible PRIVATE_KEY value (optional, for chapters endpoint)
Rate Limiting:
- `MAX_REQUESTS`: Max requests per minute per source (default: 100)
Update Scheduling:
- `UPDATE_INTERVAL`: Update interval in days (default: 30)
- `UPDATE_THRESHOLD`: Minimum days before checking updates again (default: 7)
Performance Tuning:
- `MAX_CONCURRENT_REQUESTS`: HTTP connection pool size for concurrent API calls (default: 50)
- `SCHEDULER_CONCURRENCY`: Max concurrent scheduler operations (default: 5)
- `SCHEDULER_MAX_PER_REGION`: Hard cap for max per-region concurrency in batch processing (default: 5)
- `HTTP_MAX_SOCKETS`: Maximum HTTP sockets (hard limit: 50, default: 50); values above 50 will be clamped to 50
- `HTTP_TIMEOUT_MS`: HTTP request timeout in milliseconds (default: 30000)
Feature Flags (Boolean - accepted true values: `true`, `True`, `TRUE`, `1`; a parsing sketch follows this configuration list):

- `USE_PARALLEL_SCHEDULER`: Enable parallel UpdateScheduler (default: `false`) - HIGH RISK, requires testing
- `USE_CONNECTION_POOLING`: Enable HTTP connection pooling for API calls (default: `true`)
- `USE_COMPACT_JSON`: Use compact JSON format in Redis (default: `true`)
- `USE_SORTED_KEYS`: Sort object keys in responses (adds O(n log n) overhead; default: `false`)
- `CIRCUIT_BREAKER_ENABLED`: Enable circuit breaker pattern for external API calls (default: `true`)
- `METRICS_ENABLED`: Enable performance metrics collection and /metrics endpoint (default: `true`)
Metrics Endpoint Security:
- `METRICS_AUTH_TOKEN`: Authentication token for /metrics endpoint (optional)
- `METRICS_ALLOWED_IPS`: Comma-separated list of allowed IPs/CIDR ranges for /metrics (supports CIDR notation, optional)
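The boolean flag convention above (`true`, `True`, `TRUE`, or `1` enables a flag) can be mirrored in a small helper. This is an illustrative sketch, not the project's actual implementation; how empty or unrecognized values are treated here is an assumption:

```typescript
// Illustrative only: mirrors the documented convention that "true"
// (any casing) or "1" enables a flag; anything else resolves to false,
// and an unset variable keeps the documented default.
function parseFlag(raw: string | undefined, defaultValue: boolean): boolean {
  if (raw === undefined || raw === '') return defaultValue // assumption: empty means unset
  return raw.toLowerCase() === 'true' || raw === '1'
}

// Example usage with the defaults documented above.
const useConnectionPooling = parseFlag(process.env.USE_CONNECTION_POOLING, true)
const useParallelScheduler = parseFlag(process.env.USE_PARALLEL_SCHEDULER, false)
console.log({ useConnectionPooling, useParallelScheduler })
```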
- Configure build and deployment:
  - Build command: Coolify will automatically use the Dockerfile
  - Port: 3000
  - Health check: Coolify can use the `/health` endpoint
- Enable GitHub webhook (optional):
  - In Coolify, get your webhook URL from the application's "Webhook" section
  - Add `COOLIFY_WEBHOOK` to your GitHub repository secrets with this URL
  - In Coolify, create an API token from "Keys & Tokens" > "API Tokens" (enable "Deploy" permission)
  - Add `COOLIFY_TOKEN` to your GitHub repository secrets with the API token
  - The workflow `.github/workflows/deploy-coolify.yml` will trigger deployments automatically on pushes to `main` or `develop`
Note: The `.github/workflows/docker-publish.yml` workflow builds and pushes Docker images to GitHub Container Registry (ghcr.io) but does not deploy them. The Coolify workflow builds, pushes, and deploys the Docker image using the Coolify API.
- Optional: Configure persistent volumes for MongoDB/Redis:
  - For production, consider using external MongoDB and Redis services
  - Or configure Coolify to use managed databases
Important: The audnexus application requires MongoDB and Redis services to run. You must either:
- Use Coolify's managed database services or external databases
- Deploy the full stack (including MongoDB and Redis containers) using the Docker Compose method in the Docker Swarm section below
Do not proceed with Coolify deployment until you have the `MONGODB_URI` and `REDIS_URL` values ready.
Note: For production deployments, consider using Coolify's managed database services for MongoDB and Redis, or deploy the full stack using the Docker Compose method below.
Once you have Docker Swarm set up, grab the `docker-compose.yml` from this repo and use it to start the stack. Using something like Portainer for a Swarm GUI will make this much easier.
The stack defaults to 15 replicas for the `node-server` container. Customize this as needed.
Core Environment Variables:
- `MONGODB_URI`: MongoDB connection URL, such as `mongodb://mongo/audnexus`
- `REDIS_URL`: Redis connection URL, such as `redis://redis:6379`
- `HOST`: Server host address (default: `0.0.0.0`)
- `PORT`: Server port (default: `3000`)
- `LOG_LEVEL`: Log level - `trace`, `debug`, `info`, `warn`, `error`, `fatal` (default: `info`)
- `TRUSTED_PROXIES`: Comma-separated list of trusted proxy IPs/CIDR ranges (optional)
- `DEFAULT_REGION`: Default region for batch processing (default: `us`)
Audible API Configuration:
- `ADP_TOKEN`: Audible ADP_TOKEN value (optional, for chapters endpoint)
- `PRIVATE_KEY`: Audible PRIVATE_KEY value (optional, for chapters endpoint)
Rate Limiting:
- `MAX_REQUESTS`: Maximum number of requests per 1-minute period from a single source (default: 100)
Update Scheduling:
- `UPDATE_INTERVAL`: Frequency (in days) to run scheduled update tasks (default: 30). The update task also runs at startup.
- `UPDATE_THRESHOLD`: Minimum number of days after an item is updated before it is allowed to check for updates again, whether scheduled or requested via parameter (default: 7)
Performance Tuning:
- `MAX_CONCURRENT_REQUESTS`: HTTP connection pool size for concurrent API calls (default: 50)
- `SCHEDULER_CONCURRENCY`: Max concurrent scheduler operations (default: 5)
- `SCHEDULER_MAX_PER_REGION`: Hard cap for max per-region concurrency in batch processing (default: 5)
- `HTTP_MAX_SOCKETS`: Maximum HTTP sockets (hard limit: 50, default: 50); values above 50 will be clamped to 50
- `HTTP_TIMEOUT_MS`: HTTP request timeout in milliseconds (default: 30000)
Feature Flags (Boolean - supports true, True, TRUE, 1):
- `USE_PARALLEL_SCHEDULER`: Enable parallel UpdateScheduler (default: `false`) - HIGH RISK, requires testing
- `USE_CONNECTION_POOLING`: Enable HTTP connection pooling for API calls (default: `true`)
- `USE_COMPACT_JSON`: Use compact JSON format in Redis (default: `true`)
- `USE_SORTED_KEYS`: Sort object keys in responses (adds O(n log n) overhead; default: `false`)
- `CIRCUIT_BREAKER_ENABLED`: Enable circuit breaker pattern for external API calls (default: `true`)
- `METRICS_ENABLED`: Enable performance metrics collection and /metrics endpoint (default: `true`)
Metrics Endpoint Security:
- `METRICS_AUTH_TOKEN`: Authentication token for /metrics endpoint (optional)
- `METRICS_ALLOWED_IPS`: Comma-separated list of allowed IPs/CIDR ranges for /metrics (supports CIDR notation, optional)
Traefik Configuration:
- `TRAEFIK_DOMAIN`: FQDN for the API server
- `TRAEFIK_EMAIL`: Email to register the SSL cert with
Once the stack is up, test an API call with
https://${TRAEFIK_DOMAIN}/books/${ASIN}
- Connect to the DB, either from inside the mongodb container terminal or a MongoDB Compass/mongosh session.
- Switch to the correct DB: `use audnexus`
- Create the recommended indexes:
  - `db.authors.createIndex( { asin: 1, region: 1 } )`
  - `db.books.createIndex( { asin: 1, region: 1 } )`
  - `db.chapters.createIndex( { asin: 1, region: 1 } )`
  - `db.authors.createIndex( { name: "text" } )`
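If you'd rather create the indexes from Node than from mongosh, here is a minimal sketch using the official MongoDB driver (assumes the `mongodb` package is installed and `MONGODB_URI` is set as described above):

```typescript
import { MongoClient } from 'mongodb'

// Creates the recommended indexes programmatically; equivalent to the
// mongosh commands above.
async function createIndexes(): Promise<void> {
  const client = new MongoClient(
    process.env.MONGODB_URI ?? 'mongodb://localhost:27017/audnexus'
  )
  await client.connect()
  const db = client.db() // database name is taken from the connection string

  for (const name of ['authors', 'books', 'chapters']) {
    await db.collection(name).createIndex({ asin: 1, region: 1 })
  }
  await db.collection('authors').createIndex({ name: 'text' }) // author name text search

  await client.close()
}

createIndexes().catch(console.error)
```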
- Fastify - Server Framework
- MongoDB - Database
- Node.js - Server Environment
- Papr - Database connection
- Redis - Cached responses
- @djdembeck - Idea & Initial work