infer-types vs typedload
| | infer-types | typedload |
|---|---|---|
| Mentions | 6 | 5 |
| Stars | 56 | 180 |
| Growth | - | - |
| Activity | 10.0 | 9.6 |
| Latest Commit | 10 days ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
infer-types
We haven't tracked posts mentioning infer-types yet.
Tracking mentions began in Dec 2020.
typedload
- Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Author of typedload here!
FastAPI relies on (not so fast) pydantic, which is one of the slowest libraries in that category.
Don't expect to find such benchmarks in the pydantic documentation itself, but the competing libraries will have them.
- Pydantic vs Protobuf vs Namedtuples vs Dataclasses
I wrote typedload, which is significantly faster than pydantic. It just uses normal dataclasses/attrs/NamedTuple, has a better API, and is pure Python!
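For context, here is a minimal sketch of what that looks like in practice, assuming typedload's load()/dump() entry points; the Point/Path dataclasses are illustrative, not taken from the comment:

```python
# Minimal sketch: loading untyped data into plain dataclasses with typedload.
# Point and Path are illustrative example types, not from the original post.
from dataclasses import dataclass
from typing import List

import typedload


@dataclass
class Point:
    x: int
    y: int


@dataclass
class Path:
    name: str
    points: List[Point]


data = {"name": "square", "points": [{"x": 0, "y": 0}, {"x": 1, "y": 0}]}

# load() builds the typed objects; dump() converts back to plain dicts/lists.
path = typedload.load(data, Path)
assert typedload.dump(path) == data
```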
- Show HN: Python framework is faster than Golang Fiber
I read all the perftests in the repo. I think they nearly all parse a structure that contains a repetition of the same or similar thing a couple hundred thousand times, and the timing function returns the min and max of 5 attempts. I just picked one example for posting.
Not a Python expert, but could the Pydantic tests possibly be unrealistic and/or misleading because they use kwargs in __init__ [1] to parse the object instead of calling the parse_obj class method [2]? According to some PEPs [3], doesn't Python create a new dictionary for that parameter, which would be included in the timing? That would be unfortunate if it accounted for the difference. (A sketch of the two call styles follows the links below.)
Something else I think about: if a performance test doesn't produce a side effect that is checked, a smart compiler or runtime could optimize the whole benchmark away, or the work could become too easy for the CPU's branch predictor, etc. I recall that happening to me in Java in the past, though it probably hasn't happened here in Python.
[1] https://github.com/ltworf/typedload/blob/37c72837e0a8fd5f350...
[2] https://docs.pydantic.dev/usage/models/#helper-functions
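As a rough illustration of the two construction paths the question contrasts, assuming a Pydantic v1-style API (parse_obj was renamed model_validate in v2); the Item model is an invented example, not taken from the benchmark repo:

```python
# Sketch of the two Pydantic construction styles discussed above.
# Item is an illustrative model, not from the benchmark code.
from pydantic import BaseModel


class Item(BaseModel):
    name: str
    price: float


raw = {"name": "widget", "price": "9.99"}

# Keyword-argument construction: the **raw unpacking builds a fresh kwargs
# dict before validation starts, which is the overhead the comment asks about.
a = Item(**raw)

# Helper-method construction: parse_obj() takes the mapping directly.
b = Item.parse_obj(raw)

assert a == b
```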
The perf part of the tests just seems to be a microbenchmark for seeing how fast the various frameworks can parse a 30000x300 dict of strings representing numbers [1].
If that is all one's application does, and one's organization/team can use your library, that's great. However, a 2-3x performance boost in the parsing stage of a use case like an API call might not matter when it is overshadowed by validation and/or upstream API calls. A realistic app would likely use a validation library like Pydantic's [2] to raise a custom typed error that can be processed (e.g., for localization) before being returned downstream (see the sketch after the link below).
[1] https://github.com/ltworf/typedload/blob/37c72837e0a8fd5f350...
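A rough sketch of that validation pattern, assuming Pydantic v1-style ValidationError handling; the OrderRequest model and the response shape are hypothetical, not from the benchmark repo:

```python
# Sketch: catch a typed validation error and map it to a structured response
# that could later be localized, as the comment describes.
# OrderRequest and the handle() helper are illustrative assumptions.
from pydantic import BaseModel, ValidationError, conint


class OrderRequest(BaseModel):
    sku: str
    quantity: conint(gt=0)


def handle(payload: dict) -> dict:
    try:
        order = OrderRequest.parse_obj(payload)
    except ValidationError as exc:
        # exc.errors() gives a structured list a real app could translate
        # or localize before returning it downstream.
        return {"status": 422, "errors": exc.errors()}
    return {"status": 200, "order": order.dict()}


print(handle({"sku": "A-1", "quantity": 0}))   # validation failure
print(handle({"sku": "A-1", "quantity": 3}))   # success
```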
What are some alternatives?
codon - A high-performance, zero-overhead, extensible Python compiler using LLVM