typedload vs cachew

| | typedload | cachew |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 254 | 207 |
| Growth | - | - |
| Activity | 8.1 | 7.4 |
| Latest commit | 7 days ago | about 1 month ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
typedload
-
Show HN: Up to 100x Faster FastAPI with simdjson and io_uring on Linux 5.19
Author of typedload here!
FastAPI relies on (not so fast) pydantic, which is one of the slowest libraries in that category.
Don't expect to find such benchmarks on the pydantic documentation itself, but the competing libraries will have them.
[0] https://ltworf.github.io/typedload/
-
Pydantic vs Protobuf vs Namedtuples vs Dataclasses
I wrote typedload, which is significantly faster than pydantic. It just uses normal dataclasses/attrs/NamedTuple, has a better API, and is pure Python!
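For context, a minimal sketch of what that API looks like with a plain dataclass; the `User` class and the values are illustrative, not taken from the thread:

```python
from dataclasses import dataclass
from typing import List

import typedload


@dataclass
class User:
    id: int
    name: str
    tags: List[str]


# Load untyped data (e.g. parsed JSON) into a typed object...
user = typedload.load({'id': 1, 'name': 'ltworf', 'tags': ['dev']}, User)

# ...and dump it back to plain dicts/lists.
assert typedload.dump(user) == {'id': 1, 'name': 'ltworf', 'tags': ['dev']}
```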
- Is computer science useful for anything?
-
Show HN: Python framework is faster than Golang Fiber
I read all the perftests in the repo. I think they nearly all parse a structure that contains a repetition of the same or similar thing a couple hundred thousand times, and the timing function returns the min and max of 5 attempts. I just picked one example for posting.
Not a Python expert, but could the Pydantic tests possibly be unrealistic and/or misleading because they use kwargs in __init__ [1] to parse the object instead of calling the parse_obj class method [2]? (A sketch of the two call styles follows the links below.) According to some PEPs [3], isn't Python creating a new dictionary for that parameter, which would be included in the timing? It would be unfortunate if that accounted for the difference.
Something else I think about: if a performance test doesn't produce a side effect that is checked, a smart compiler or runtime could optimize the whole benchmark away, or the work could become too easy for the CPU's branch prediction, etc. I recall that happening to me in Java in the past, but it probably hasn't happened here in Python.
[1] https://github.com/ltworf/typedload/blob/37c72837e0a8fd5f350...
[2] https://docs.pydantic.dev/usage/models/#helper-functions
[3] https://peps.python.org/pep-0692/
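For reference, the two call styles the comment contrasts look roughly like this, assuming pydantic v1 (which the linked docs describe); `Item` is a hypothetical model, not one from the benchmark:

```python
from pydantic import BaseModel


class Item(BaseModel):
    name: str
    price: float


data = {'name': 'widget', 'price': 1.5}

# Style used in the benchmark: keyword-argument construction.
# The **data unpacking builds a fresh keyword mapping for the call.
item_a = Item(**data)

# Helper method from the pydantic v1 docs: validates the dict directly.
item_b = Item.parse_obj(data)

assert item_a == item_b
```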
cachew
-
How I collect and use 50 sources of my personal data
Yep! In fact I've tried interoperating with Datasette (e.g. shared here https://news.ycombinator.com/item?id=25090643 )
One secret sauce is the 'automatic' caching of data in sqlite -- this allows both faster access and, as a side benefit, an additional interface to the data (see the sketch below): https://github.com/karlicoss/cachew#readme
Still need to polish this a bit, but ultimately hoping to properly plug into Datasette, I'm impressed by its data exploration capabilities!
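For readers unfamiliar with cachew, a minimal sketch of the pattern described above, based on its README; the `measurements` function and `Measurement` dataclass are hypothetical:

```python
from dataclasses import dataclass
from typing import Iterator

from cachew import cachew


@dataclass
class Measurement:
    timestamp: float
    value: int


@cachew  # transparently caches the yielded objects in a local sqlite database
def measurements() -> Iterator[Measurement]:
    # Imagine an expensive parse of raw personal-data exports here.
    for i in range(3):
        yield Measurement(timestamp=float(i), value=i * 10)


# The first call computes and caches; later calls are served from sqlite,
# which also doubles as a queryable view of the data.
print(list(measurements()))
```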
What are some alternatives?
codon - A high-performance, zero-overhead, extensible Python compiler using LLVM
patina - Python adaptations of Rust's Result, Option, and HashMap types. Ready for Python 3.10 pattern matching!
ustore - Multi-Modal Database replacing MongoDB, Neo4J, and Elastic with 1 faster ACID solution, with NetworkX and Pandas interfaces, and bindings for C 99, C++ 17, Python 3, Java, GoLang 🗄️
dashboard
pydantic-core - Core validation logic for pydantic written in Rust
RightToBeRemembered - A law requiring services to enable auto exports of personal data
peps - Python Enhancement Proposals
requests-cache - Transparent persistent cache for python requests
msgspec - A fast serialization and validation library, with builtin support for JSON, MessagePack, YAML, and TOML
docarray - Represent, send, store and search multimodal data
koda-validate - Typesafe, Composable Validation
socketify.py - High-performance HTTP/HTTPS and WebSocket servers for PyPy3 and Python3