k8s-openapi
polars
| | k8s-openapi | polars |
|---|---|---|
| Mentions | 7 | 144 |
| Stars | 360 | 26,218 |
| Growth | - | 6.7% |
| Activity | 8.3 | 10.0 |
| Latest commit | 12 days ago | 2 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k8s-openapi
-
WinBtrfs – an open-source btrfs driver for Windows
It's called sans-io in Python land, which is where I heard it first.
https://sans-io.readthedocs.io/
I did it for one of my projects back in 2018 https://github.com/Arnavion/k8s-openapi/commit/9a4fbb718b119...
-
The bane of my existence: Supporting both async and sync code in Rust
Another option is to implement your API in a sans-io form. Since k8s-openapi was mentioned (albeit for a different reason), I'll point out that its API gave you a request value that you could send using whatever sync or async HTTP client you wanted to use. It also gave you a corresponding function to parse the response, which you would call with the response bytes however you got them from your client.
https://github.com/Arnavion/k8s-openapi/blob/v0.19.0/README....
(Past tense because I removed all the API features from k8s-openapi after that release, for unrelated reasons.)
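A minimal sketch of the sans-io pattern being described. The function names (`list_pods_request`, `parse_list_pods_response`) are invented for illustration and are not the actual k8s-openapi API; the point is that the library layer only builds requests and parses responses, while the caller performs the I/O with any client it likes:

```python
import json

def list_pods_request(namespace):
    # Pure function: returns a description of the request, performs no I/O.
    return ("GET", f"/api/v1/namespaces/{namespace}/pods", None)

def parse_list_pods_response(status, body):
    # Pure function: interprets bytes the caller obtained however it wished.
    if status == 200:
        return json.loads(body)
    raise RuntimeError(f"unexpected status {status}")

# Caller side: send with requests, aiohttp, hyper, ... here we just fake it.
method, path, body = list_pods_request("default")
pods = parse_list_pods_response(200, b'{"items": []}')
print(pods["items"])  # []
```

Because neither function touches the network, the same library works unchanged under sync and async callers.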
-
Welcome to Comprehensive Rust
Macro expansion is slow, but only noticeably in the specific situation of a) third-party proc macros, b) a debug build, and c) a few thousand invocations of said proc macros. This is because debug builds compile proc macros in debug mode too, so while the macro itself compiles quickly (because it's a debug build), it ends up running slowly (because it's a debug build).
I know this from observing this on a mostly auto-generated crate that had a couple of thousand types with `#[derive(serde::)]` on each. [1]
This doesn't affect most users, because first-party macros like `#[derive(Debug)]` etc. are not slow; they're part of rustc and are thus optimized regardless of the profile. And even with third-party macros it is unlikely that they have thousands of invocations. Even if it *is* a problem, users can opt in to compiling just the proc macros in release mode. [2]
[1]: https://github.com/Arnavion/k8s-openapi/issues/4
[2]: https://github.com/rust-lang/cargo/issues/5622
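The opt-in referenced in [2] can be expressed as a Cargo profile override. A sketch of the relevant Cargo.toml fragment (the exact behavior depends on your Cargo version):

```toml
# Compile build scripts and proc macros (and their dependencies) with
# optimizations even in debug builds, so heavy derive macros run fast
# while the rest of the crate still gets quick debug compiles.
[profile.dev.build-override]
opt-level = 3
```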
-
OpenAPI Generator allows generation of API client libraries from OpenAPI Specs
>OpenAPI Generator allows generation of API client libraries from OpenAPI Specs
It does, but the generated code can be very shitty for some combinations of spec and output language. I maintain Rust bindings for the Kubernetes API server's API, and I chose to write my own code generator instead. The README at https://github.com/Arnavion/k8s-openapi has more details.
-
Any good toy Rust project for k8s application?
k8s_openapi - https://github.com/Arnavion/k8s-openapi
-
Approaches for Chaining Access to Deeply Nested Optional Structs
For example: I have a routine that checks the value of (from k8s-openapi): Ingress -> IngressStatus -> LoadBalancerStatus -> Vec[0] -> String
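A sketch of that chain using plain Python objects. The attribute names (`status`, `load_balancer`, `ingress`, `hostname`) mirror the Kubernetes types but are assumptions for illustration, not the real k8s-openapi API; each level may be absent, so every hop has to tolerate `None`:

```python
from types import SimpleNamespace as NS

def lb_hostname(ingress):
    # Each getattr defaults to None, so a missing level short-circuits
    # the chain instead of raising AttributeError.
    status = getattr(ingress, "status", None)
    lb = getattr(status, "load_balancer", None)
    entries = getattr(lb, "ingress", None)
    if not entries:
        return None
    return getattr(entries[0], "hostname", None)

full = NS(status=NS(load_balancer=NS(ingress=[NS(hostname="lb.example.com")])))
print(lb_hostname(full))  # lb.example.com
print(lb_hostname(NS()))  # None
```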
-
Writing a Kubernetes CRD Controller in Rust
As the maintainer of the Rust bindings that back the library used in the article (kube), I can confirm that generating a good client from Kubernetes' openapi spec requires a lot of Kubernetes-specific handling that generic openapi generators do not provide.
See https://github.com/Arnavion/k8s-openapi/blob/master/README.m... for a full description.
I can also confirm that I keep it up-to-date with Kubernetes releases and have been doing so for the ~3 years that it's been around. Not just the minor ones every few months, but even the point ones; these days the latter usually only involve updating the test cases rather than code changes, and they're done within a few hours of the upstream release.
polars
-
Why Python's Integer Division Floors (2010)
This is because 0.1 is in actuality the floating-point value 0.1000000000000000055511151231257827021181583404541015625, and thus 1 divided by it is ever so slightly smaller than 10. Nevertheless, fpround(1 / fpround(1 / 10)) = 10 exactly.
I found out about this recently because in Polars I defined a // b for floats to be (a / b).floor(), which does return 10 for this computation. Since Python's correctly-rounded division is rather expensive, I chose to stick to this (more context: https://github.com/pola-rs/polars/issues/14596#issuecomment-...).
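The two definitions can be compared directly in Python, where float `//` floors the exact quotient while the floor-after-divide definition floors the already-rounded one:

```python
import math

# 0.1 is really 0.1000000000000000055511151231257827021181583404541015625,
# so the exact quotient 1 / 0.1 is a hair under 10...
print(1 / 0.1)              # 10.0 -- rounding the quotient lands on 10.0 exactly
# CPython's float floor division floors the *exact* quotient:
print(1 // 0.1)             # 9.0
# The (a / b).floor() definition floors the *rounded* quotient:
print(math.floor(1 / 0.1))  # 10
```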
-
Polars
https://github.com/pola-rs/polars/releases/tag/py-0.19.0
-
Stuff I Learned during Hanukkah of Data 2023
That turned out to be related to pola-rs/polars#11912, and this linked comment provided a deceptively simple solution - use PARSE_DECLTYPES when creating the connection:
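A minimal sketch of that fix with the stdlib `sqlite3` module. With `detect_types=sqlite3.PARSE_DECLTYPES`, the declared column type (here `DATE`) triggers the matching registered converter on read-back; the table and values here are invented for illustration, and note the default date/timestamp converters are deprecated as of Python 3.12:

```python
import sqlite3
import datetime

# PARSE_DECLTYPES makes sqlite3 look at the *declared* column type and
# run the registered converter on values it reads back.
conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute("CREATE TABLE sales (day DATE)")
conn.execute("INSERT INTO sales VALUES (?)", (datetime.date(2023, 12, 10),))
(day,) = conn.execute("SELECT day FROM sales").fetchone()
print(type(day))  # datetime.date instead of str
```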
- Polars 0.20 Released
- Second language
- Polars: Dataframes powered by a multithreaded query engine, written in Rust
- Summing columns in remote Parquet files using DuckDB
- Polars 0.34 is released. (A query engine focusing on DataFrame front ends)
What are some alternatives?
kube - Rust Kubernetes client and controller runtime
vaex - Out-of-Core hybrid Apache Arrow/NumPy DataFrame for Python, ML, visualization and exploration of big tabular data at a billion rows per second 🚀
fusionauth-openapi - FusionAuth OpenAPI client
modin - Modin: Scale your Pandas workflows by changing a single line of code
go - The Go programming language
datafusion - Apache DataFusion SQL Query Engine
spectrum - OpenAPI Spec SDK and Converter for OpenAPI 3.0 and 2.0 Specs to Postman 2.0 Collections. Example RingCentral spec included.
DataFrames.jl - In-memory tabular data in Julia
smithy - Smithy is a protocol-agnostic interface definition language and set of tools for generating clients, servers, and documentation for any programming language.
datatable - A Python package for manipulating 2-dimensional tabular data structures
tokio - A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing