msgspec
Dask
| | msgspec | Dask |
|---|---|---|
| Mentions | 31 | 32 |
| Stars | 1,839 | 11,965 |
| Growth | - | 1.3% |
| Activity | 8.9 | 9.7 |
| Latest commit | 21 days ago | 6 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
msgspec
- Htmx, Rust and Shuttle: A New Rapid Prototyping Stack
- Litestar 2.0
Full support for validation and serialisation of attrs classes and msgspec Structs. Where previously only Pydantic models and types were supported, you can now mix and match any of these three libraries. In addition, adding support for another modelling library has been greatly simplified by the new plugin architecture.
- FastAPI 0.100.0: Release Notes
> Maybe it was very slow before
That is at least partly the case. I maintain msgspec[1], another Python JSON validation library. Pydantic V1 was ~100x slower at encoding/decoding/validating JSON than msgspec, which was more a testament to Pydantic's performance issues than msgspec's speed. Pydantic V2 is definitely faster than V1, but it's still ~10x slower than msgspec, and up to 2x slower than other pure-python implementations like mashumaro.
Recent benchmark here: https://gist.github.com/jcrist/d62f450594164d284fbea957fd48b...
- Pydantic 2.0
While it's definitely much faster than pydantic V1 (which is a huge accomplishment!), it's still not exactly what I'd call "fast".
I maintain msgspec (https://github.com/jcrist/msgspec), a serialization/validation library which provides similar functionality to pydantic. Recent benchmarks of pydantic V2 against msgspec show msgspec is still 15-30x faster at JSON encoding, and 6-15x faster at JSON decoding/validating.
Benchmark (and conversation with Samuel) here: https://gist.github.com/jcrist/d62f450594164d284fbea957fd48b...
This is not to diminish the work of the pydantic team! For many users pydantic will be more than fast enough, and is definitely a more feature-filled tool. It's a good library, and people will be happy using it! But pydantic is not the only tool in this space, and rubbing some rust on it doesn't necessarily make it "fast".
- Need help developing a high performance Redis ORM for Python
https://github.com/jcrist/msgspec so I am using this instead of Pydantic.
- Blog post: Writing Python like it’s Rust
Another thing: why pyserde rather than stuff like msgspec? https://github.com/jcrist/msgspec
- Show HN: Msgspec, a fast serialization/validation library for Python
- [Guide] A Tour Through the Python Framework Galaxy: Discovering the Stars
Try msgspec | Maat | turbo for fast serialization and validation
- Pydantic V2 rewritten in Rust is 5-50x faster than Pydantic V1
Congratulations to the team, Pydantic is an amazing library.
If you find JSON serialization/deserialization a bottleneck, another interesting library (with much less features) for Python is msgspec: https://github.com/jcrist/msgspec
- Starlite updates March '22 | 2.0 is coming
This feature is yet to be released, but it will allow you to seamlessly use data modelled with, for example, Pydantic, SQLAlchemy, msgspec or dataclasses in your route handlers, without the need for an intermediary model; the conversion will be handled by the specific DTO "backend" implementation. This new paradigm also makes it trivial to add support for any such modelling library, by simply implementing an appropriate backend.
Dask
- The Distributed Tensor Algebra Compiler (2022)
- A peek into Location Data Science at Ola
Data scientists work on phenomenally large datasets, and Dask is a handy tool for exploration within the confines of a single cloud VM or their local PCs. Location data visualization is an essential part of deciding further algorithm development and roadmap for projects. This lays the foundation for data engineering and science to work at scale, with petabytes of data.
- File format for large data with many columns
- What is the best way to save a csv file in numbers only? PC hangs when my file is more than 2GB
Dask
- Large Scale Hydrology: Geocomputational tools that you use
We're using a lot of Python. In addition to these, gridMET, Dask, HoloViz, and kerchunk.
- msgspec - a fast & friendly JSON/MessagePack library
I wrote this for speeding up the RPC messaging in dask, but figured it might be useful for others as well. The source is available on github here: https://github.com/jcrist/msgspec.
- What does it mean to scale your python powered pipeline?
Dask: Distributed data frames, machine learning and more
- Data pipelines with Luigi
To do that, we are efficiently using Dask, simply creating on-demand local (or remote) clusters in the task's run() method:
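A minimal sketch of that on-demand-cluster pattern, independent of Luigi: spin up a throwaway local cluster inside the task body, submit work, and tear everything down on exit. The worker counts and `processes=False` (in-process threads, to keep the sketch lightweight) are illustrative choices, not taken from the post.

```python
from dask.distributed import Client, LocalCluster

def run_task(values):
    # Context managers guarantee the cluster is torn down even on error.
    with LocalCluster(n_workers=2, processes=False) as cluster:
        with Client(cluster) as client:
            futures = client.map(lambda x: x * x, values)
            return client.gather(futures)

print(run_task([1, 2, 3]))  # → [1, 4, 9]
```

For remote clusters the shape is the same: swap `LocalCluster` for a deployment-specific cluster object (or a `Client` pointed at an existing scheduler address).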
- Is Numpy always more efficient than Pandas? And how much should we rely on Python anyway?
Look into Dask, see: https://dask.org/
- Ask HN: Is PySPark a Dead-End?
[1] https://dask.org/
What are some alternatives?
- pydantic - Data validation using Python type hints
- Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
- orjson - Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy
- Numba - NumPy aware dynamic Python compiler using LLVM
- fastapi - FastAPI framework, high performance, easy to learn, fast to code, ready for production
- Kedro - Kedro is a toolbox for production-ready data science. It uses software engineering best practices to help you create data engineering and data science pipelines that are reproducible, maintainable, and modular.
- mashumaro - Fast and well tested serialization library
- NetworkX - Network Analysis in Python
- MessagePack - MessagePack serializer implementation for Java / msgpack.org[Java]
- Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
- marshmallow - A lightweight library for converting complex objects to and from simple Python datatypes.
- Interactive Parallel Computing with IPython - IPython Parallel: Interactive Parallel Computing in Python