| | ratelimit-headers | aiometer |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 47 | 385 |
| Growth | - | 0.8% |
| Activity | 7.7 | 3.4 |
| Latest commit | 4 months ago | 9 months ago |
| Language | JavaScript | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ratelimit-headers
How to Implement Rate Limiting in Express for Node.js
standardHeaders: enables support for the RateLimit headers recommended by the IETF. The default value is false.
HTTP Rate Limit
ratelimit-headers has a test implementation of this draft.
aiometer
Rewriting Rust
I agree wholeheartedly (and I'm not surprised that you of all people often write raw futures!). I want to push back on the "async rust bad/failure/not ready" meme because
- it's perfectly possible to be a successful user of the async ecosystem as it is now while building great software;
- this two-tiered phenomenon is not unique to Rust; JS and Python struggle with it just as much (if not more, given their less refined and messier designs). As an example, [1] is elegant but complex, and I'm less sure it's correct than I would be of a gnarly async Rust future, because the underlying async semantics are in flux.
Of course I'd love for the remaining snags (like AFIT) to go away, and simplified Pin story or better APIs would be great, but this negativity around async Rust is just wrong. It's a massive success already and should be celebrated.
[1]: https://github.com/florimondmanca/aiometer/blob/master/src/a...
HTTP Rate Limit
```python
import asyncio
from typing import Any, Self  # Self requires Python 3.11+

import httpx


class RateLimitTransport(httpx.AsyncHTTPTransport):
    def __init__(self, max_per_second: float = 5, **kwargs) -> None:
        """Async HTTP transport with rate limit.

        Args:
            max_per_second: Maximum number of requests per second.
                Other args are passed to httpx.AsyncHTTPTransport.
        """
        self.interval = 1 / max_per_second
        self.next_start_time = 0
        super().__init__(**kwargs)

    async def notify_task_start(self):
        """
        https://github.com/florimondmanca/aiometer/blob/358976e0b60bce29b9fe8c59807fafbad3e62cbc/src/aiometer/_impl/meters.py#L57
        """
        loop = asyncio.get_running_loop()
        while True:
            now = loop.time()
            next_start_time = max(self.next_start_time, now)
            until_now = next_start_time - now
            if until_now <= self.interval:
                break
            await asyncio.sleep(max(0, until_now - self.interval))
        self.next_start_time = max(self.next_start_time, now) + self.interval

    async def handle_async_request(self, request: httpx.Request) -> httpx.Response:
        await self.notify_task_start()
        return await super().handle_async_request(request)

    async def __aenter__(self) -> Self:
        await self.notify_task_start()
        return await super().__aenter__()

    async def __aexit__(self, *args: Any) -> None:
        await super().__aexit__(*args)
```
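The transport above couples the pacing logic to httpx, but the core trick — spacing task starts at least one interval apart by claiming a `next_start_time` slot — needs only asyncio. A simplified, stdlib-only sketch of that idea (`RateMeter` is a hypothetical name, not aiometer's API):

```python
import asyncio


class RateMeter:
    """Spaces task starts at least `interval` seconds apart."""

    def __init__(self, max_per_second: float) -> None:
        self.interval = 1 / max_per_second
        self.next_start_time = 0.0

    async def wait_turn(self) -> None:
        loop = asyncio.get_running_loop()
        now = loop.time()
        # Claim the next free slot, then sleep until it arrives.
        start_at = max(self.next_start_time, now)
        self.next_start_time = start_at + self.interval
        await asyncio.sleep(start_at - now)


async def main() -> float:
    meter = RateMeter(max_per_second=10)  # one start every 0.1 s
    loop = asyncio.get_running_loop()
    t0 = loop.time()
    for _ in range(3):
        await meter.wait_turn()
    return loop.time() - t0


elapsed = asyncio.run(main())
print(f"3 starts took {elapsed:.2f}s")  # about 0.2 s: starts at t ≈ 0, 0.1, 0.2
```

Unlike aiometer's meter, this version claims the slot eagerly instead of re-checking in a loop, which is fine for a single meter but less robust under cancellation.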
Limiting concurrency in Python asyncio: the story of async imap_unordered()
I recommend anyone interested in this topic to take a look at aiometer.
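The pattern that article builds up — yield results as they complete while capping how many tasks run at once — can be sketched with stdlib asyncio alone; aiometer packages the same idea behind `amap`/`run_all` with `max_at_once` and `max_per_second` limits. A minimal, hypothetical `imap_unordered` using a semaphore (not the article's exact code):

```python
import asyncio
from typing import AsyncIterator, Awaitable, Callable, Iterable, TypeVar

T = TypeVar("T")
R = TypeVar("R")


async def imap_unordered(
    func: Callable[[T], Awaitable[R]],
    items: Iterable[T],
    max_at_once: int,
) -> AsyncIterator[R]:
    """Yield results in completion order, running at most max_at_once tasks."""
    sem = asyncio.Semaphore(max_at_once)

    async def run(item: T) -> R:
        async with sem:  # blocks when max_at_once tasks hold the semaphore
            return await func(item)

    tasks = [asyncio.ensure_future(run(item)) for item in items]
    for done in asyncio.as_completed(tasks):
        yield await done


async def main() -> list[int]:
    async def square(n: int) -> int:
        await asyncio.sleep(0.01 * n)
        return n * n

    return [r async for r in imap_unordered(square, [3, 1, 2], max_at_once=2)]


result = asyncio.run(main())
print(result)  # completion order; sorted, this is [1, 4, 9]
```

One caveat the article dwells on and this sketch ignores: all tasks are created up front, so it does not handle lazy or infinite input iterables.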
What are some alternatives?
express-rate-limit - Basic rate-limiting middleware for the Express web server
asyncer - Asyncer, async and await, focused on developer experience.
express-slow-down - Slow down repeated requests; use as an alternative (or addition) to express-rate-limit
pyee - A rough port of Node.js's EventEmitter to Python with a few tricks of its own
nodejs-rate-limiting-demo
httpx - A next generation HTTP client for Python. 🦋
trio - Trio – a friendly Python library for async concurrency and I/O
fastapi-azure-auth - Easy and secure implementation of Azure Entra ID (previously AD) for your FastAPI APIs 🔒 B2C, single- and multi-tenant support.
efsw - efsw is a C++ cross-platform file system watcher and notifier.
pg-purepy - Pure-python structurally concurrent PostgreSQL driver
incrstruct - Build self-referencing structs using two-phase initialization.