rate-limiter-flexible counts and limits the number of actions by key and protects against DoS and brute-force attacks at any scale.
It works with Valkey, Redis, Prisma, DynamoDB, process Memory, Cluster or PM2, Memcached, MongoDB, MySQL, SQLite, and PostgreSQL.
Memory limiter also works in the browser.
- Atomic increments. All operations in memory or in a distributed environment use atomic increments to avoid race conditions.
- Fast. An average request takes 0.7ms in Cluster and 2.5ms in a distributed application. See benchmarks.
- Flexible. Combine limiters, block keys for some duration, delay actions, manage failover with insurance options, configure smart key blocking in memory, and more.
- Ready for growth. It provides a unified API for all limiters, so it is ready whenever your application grows. Prepare your limiters in minutes.
- Friendly. No matter which node package you prefer: valkey-glide or iovalkey, redis or ioredis, sequelize/typeorm or knex, memcached, native driver or mongoose. It works with all of them.
- In-memory blocks. Avoid extra requests to the store with inMemoryBlockOnConsumed.
- Deno compatible. See this example.

It uses a fixed window, as it is much faster than a rolling window. See comparative benchmarks with other libraries here.
```bash
npm i --save rate-limiter-flexible
# or
yarn add rate-limiter-flexible
```
```js
import { RateLimiterMemory } from "rate-limiter-flexible";
// or import directly
import RateLimiterMemory from "rate-limiter-flexible/lib/RateLimiterMemory.js";
```
Points can be consumed by IP address, user ID, authorisation token, API route or any other string.
```js
const opts = {
  points: 6, // 6 points
  duration: 1, // Per second
};

const rateLimiter = new RateLimiterMemory(opts);

rateLimiter.consume(remoteAddress, 2) // consume 2 points
  .then((rateLimiterRes) => {
    // 2 points consumed
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });
```
Both the Promise's resolve and reject callbacks receive an instance of the RateLimiterRes class, unless an error occurs.
Object attributes:
```js
RateLimiterRes = {
  msBeforeNext: 250, // Number of milliseconds before the next action can be done
  remainingPoints: 0, // Number of remaining points in the current duration
  consumedPoints: 5, // Number of consumed points in the current duration
  isFirstInDuration: false, // Whether the action is the first in the current duration
}
```
You may want to set HTTP headers for the response:
```js
const headers = {
  "Retry-After": rateLimiterRes.msBeforeNext / 1000,
  "X-RateLimit-Limit": opts.points,
  "X-RateLimit-Remaining": rateLimiterRes.remainingPoints,
  "X-RateLimit-Reset": Math.ceil((Date.now() + rateLimiterRes.msBeforeNext) / 1000),
}
```
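For example, a minimal sketch of attaching these headers in an Express middleware (Express, the route setup and the port are assumptions for illustration, not part of this library):

```js
import express from "express";
import { RateLimiterMemory } from "rate-limiter-flexible";

const app = express();
const opts = { points: 6, duration: 1 };
const rateLimiter = new RateLimiterMemory(opts);

app.use((req, res, next) => {
  rateLimiter.consume(req.ip)
    .then((rateLimiterRes) => {
      // Allowed: expose the remaining budget and continue
      res.set("X-RateLimit-Limit", String(opts.points));
      res.set("X-RateLimit-Remaining", String(rateLimiterRes.remainingPoints));
      next();
    })
    .catch((rateLimiterRes) => {
      // Rejected: tell the client when it may retry
      res.set("Retry-After", String(Math.ceil(rateLimiterRes.msBeforeNext / 1000)));
      res.status(429).send("Too Many Requests");
    });
});

app.listen(3000);
```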
- no race conditions
- no production dependencies
- TypeScript declaration bundled
- Block Strategy against really powerful DDoS attacks (like 100k requests per second). Read about it and benchmarking here.
- Insurance Strategy as an emergency solution if the database/store is down. Read about the Insurance Strategy here (see also the sketch after this list).
- works in Cluster or PM2 without additional software. See the RateLimiterCluster benchmark and detailed description here.
- useful `get`, `set`, `block`, `delete`, `penalty` and `reward` methods
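As a rough sketch of how in-memory blocking and the Insurance Strategy look in practice (the Redis connection and the specific limits below are assumptions for illustration):

```js
import Redis from "ioredis";
import { RateLimiterRedis, RateLimiterMemory } from "rate-limiter-flexible";

const redisClient = new Redis({ enableOfflineQueue: false });

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: "middleware",
  points: 10,                   // 10 actions
  duration: 1,                  // per 1 second by key
  blockDuration: 60,            // block the key for 60 seconds once the limit is exceeded
  inMemoryBlockOnConsumed: 10,  // stop asking Redis once the key has consumed 10 points
  inMemoryBlockDuration: 60,
  insuranceLimiter: new RateLimiterMemory({ points: 5, duration: 1 }), // fallback if Redis is down
});
```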
Full documentation is on Wiki
- Express middleware
- Koa middleware
- Hapi plugin
- GraphQL graphql-rate-limit-directive
- NestJS nestjs-rate-limiter
- For a Fastify-based NestJS app, try nestjs-fastify-rate-limiter
Some copy/paste examples on Wiki:
- Minimal protection against password brute-force
- Login endpoint protection
- Apply Block Strategy
- Setup Insurance Strategy
- Prevent flooding over a WebSocket connection
- Dynamic block duration
- Authorized users specific limits
- Different limits for different parts of application
- Third-party API, crawler, bot rate limiting
- express-brute Bonus: race conditions fixed, prod deps removed
- limiter Bonus: multi-server support, respects queue order, native promises
- Drizzle Atomic and non-atomic counters.
- Etcd Atomic and non-atomic counters.
- Mongo (with sharding support)
- MySQL (supports Sequelize and Knex)
- Postgres (supports Sequelize, TypeORM and Knex)
- Valkey: iovalkey or ValkeyGlide
- BurstyRateLimiter Traffic burst support (see the sketch after this list)
- RateLimiterUnion Combine 2 or more limiters to act as a single one
- RLWrapperBlackAndWhite Black and white lists
- RateLimiterQueue Rate limiter with FIFO queue
- AWS SDK v3 Client Rate Limiter Prevent punishing rate limits.
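As an example of the bursty limiter mentioned above, a minimal sketch (the limits are arbitrary): requests are served from a sustained budget of 2 points per second, and overflow falls back to a burst budget of 5 points per 10 seconds.

```js
import { BurstyRateLimiter, RateLimiterMemory } from "rate-limiter-flexible";

// Sustained rate: 2 points per second; burst budget: 5 extra points per 10 seconds
const burstyLimiter = new BurstyRateLimiter(
  new RateLimiterMemory({ points: 2, duration: 1 }),
  new RateLimiterMemory({ keyPrefix: "burst", points: 5, duration: 10 }),
);

burstyLimiter.consume(remoteAddress)
  .then(() => {
    // allowed
  })
  .catch(() => {
    // both the base limit and the burst budget are exhausted
  });
```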
See releases for detailed changelog.
- `points` Default: 4. Maximum number of points that can be consumed over duration.
- `duration` Default: 1. Number of seconds before consumed points are reset. Points are never reset if `duration` is set to 0.
- `storeClient` Required for store limiters. Must be `@valkey/valkey-glide`, `iovalkey`, `redis`, `ioredis`, `memcached`, `mongodb`, `pg`, `mysql2`, `mysql` or any other related pool or connection.
- `keyPrefix` Make keys unique among different limiters.
- `blockDuration` Block a key for N seconds if it consumed more than `points`.
- `inMemoryBlockOnConsumed` Avoid extra requests to the store.
- `inMemoryBlockDuration`
- `insuranceLimiter` Make it more stable with less effort.
- `storeType` Has to be set to `knex` if you use it.
- `dbName` Where to store points.
- `tableName` Table/collection name.
- `tableCreated` Whether the table is already created in MySQL, SQLite or PostgreSQL.
- `clearExpiredByTimeout` For MySQL, SQLite and PostgreSQL.
See full list of options.
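To illustrate the store-related options, here is a hedged sketch of a PostgreSQL-backed limiter; the pg pool settings and table name are assumptions:

```js
import pg from "pg";
import { RateLimiterPostgres } from "rate-limiter-flexible";

const client = new pg.Pool({
  host: "127.0.0.1",
  port: 5432,
  database: "root",
  user: "root",
  password: "secret",
});

const opts = {
  storeClient: client,
  points: 5,                    // 5 actions
  duration: 1,                  // per second
  keyPrefix: "login",           // make keys unique among different limiters
  tableName: "rate_limits",     // where to store points
  tableCreated: false,          // let the limiter create the table on start
  clearExpiredByTimeout: true,  // clean up expired rows by timeout
};

// The second argument is a callback fired when the table is ready (or creation failed)
const rateLimiterPostgres = new RateLimiterPostgres(opts, (err) => {
  if (err) {
    // table could not be created
  } else {
    // ready to consume points
  }
});
```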
Read detailed description on Wiki.
- `consume(key, points = 1)` Consume points by key.
- `get(key)` Get `RateLimiterRes` or `null`.
- `set(key, points, secDuration)` Set points by key.
- `block(key, secDuration)` Block a key for `secDuration` seconds.
- `delete(key)` Reset consumed points.
- `deleteInMemoryBlockedAll()`
- `penalty(key, points = 1)` Increase the number of consumed points in the current duration.
- `reward(key, points = 1)` Decrease the number of consumed points in the current duration.
- `getKey(key)` Get the internal prefixed key.
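A short usage sketch of several of these methods with the in-memory limiter (the key and numbers are arbitrary):

```js
import { RateLimiterMemory } from "rate-limiter-flexible";

const rateLimiter = new RateLimiterMemory({ points: 10, duration: 60 });

await rateLimiter.consume("user_123");         // consume 1 point
await rateLimiter.penalty("user_123", 3);      // add 3 consumed points as a penalty
await rateLimiter.reward("user_123", 1);       // give 1 point back
await rateLimiter.block("user_123", 30);       // block the key for 30 seconds

const res = await rateLimiter.get("user_123"); // RateLimiterRes or null
if (res !== null) {
  console.log(res.consumedPoints, res.msBeforeNext);
}
```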
Contributions are appreciated, feel free!
Make sure you've run npm run eslint before creating a PR; all errors have to be fixed.
You can try npm run eslint-fix to fix some issues automatically.
Any new limiter with storage must be extended from RateLimiterStoreAbstract.
It has to implement 4 methods:
- `_getRateLimiterRes` parses raw data from the store into a `RateLimiterRes` object.
- `_upsert` may be an atomic or non-atomic upsert (increment). It inserts or updates the value by key and returns raw data. If it doesn't make an atomic upsert (increment), the class should be suffixed with `NonAtomic`, e.g. `RateLimiterRedisNonAtomic`. It must support `forceExpire` mode to overwrite the key's expiration time.
- `_get` returns raw data by key or `null` if there is no key.
- `_delete` deletes all key-related data and returns `true` on success, `false` if the key is not found.
All other methods depend on the store. See RateLimiterRedis or RateLimiterPostgres for examples.
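As an illustration only, a new limiter in the library's lib/ directory might be shaped like the skeleton below; the parameter lists and file layout are assumptions modelled on the existing limiters, so check RateLimiterStoreAbstract and RateLimiterRes for the exact contract.

```js
const RateLimiterStoreAbstract = require("./RateLimiterStoreAbstract");
const RateLimiterRes = require("./RateLimiterRes");

class RateLimiterMyStore extends RateLimiterStoreAbstract {
  // Parse raw data from the store into a RateLimiterRes object
  _getRateLimiterRes(rlKey, changedPoints, result) {
    const res = new RateLimiterRes();
    // fill res.consumedPoints, res.remainingPoints, res.msBeforeNext, res.isFirstInDuration from `result`
    return res;
  }

  // Insert or update points by key and return raw data;
  // must support forceExpire mode to overwrite the key's expiration time.
  // If the increment is not atomic, suffix the class with NonAtomic.
  _upsert(rlKey, points, msDuration, forceExpire = false) {
    return Promise.reject(new Error("not implemented"));
  }

  // Return raw data by key, or null if there is no key
  _get(rlKey) {
    return Promise.resolve(null);
  }

  // Delete all key-related data; resolve true on success, false if the key is not found
  _delete(rlKey) {
    return Promise.resolve(false);
  }
}

module.exports = RateLimiterMyStore;
```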
Note: all changes should be covered by tests.