k8s-openapi vs dfdx
| | k8s-openapi | dfdx |
| --- | --- | --- |
| Mentions | 7 | 22 |
| Stars | 360 | 1,607 |
| Growth | - | - |
| Activity | 8.3 | 8.7 |
| Latest commit | 12 days ago | about 2 months ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
k8s-openapi
-
WinBtrfs – an open-source btrfs driver for Windows
It's called sans-io in Python land, which is where I heard it first.
https://sans-io.readthedocs.io/
I did it for one of my projects back in 2018 https://github.com/Arnavion/k8s-openapi/commit/9a4fbb718b119...
-
The bane of my existence: Supporting both async and sync code in Rust
Another option is to implement your API in a sans-io form. Since k8s-openapi was mentioned (albeit for a different reason), I'll point out that its API gave you a request value that you could send using whatever sync or async HTTP client you wanted, and a corresponding function to parse the response, which you would call with the response bytes however you got them from your client.
https://github.com/Arnavion/k8s-openapi/blob/v0.19.0/README....
(Past tense because I removed all the API features from k8s-openapi after that release, for unrelated reasons.)
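To give a flavor of that shape, here's a minimal sketch of a sans-io endpoint (hypothetical names and signatures, not k8s-openapi's actual API): the library builds the request value and parses the response bytes, and all I/O stays with the caller.

```rust
// A minimal sketch of the sans-io shape described above. Names and
// signatures are hypothetical, not k8s-openapi's actual API.
use serde::Deserialize;

#[derive(Deserialize)]
struct PodList {
    items: Vec<serde_json::Value>, // simplified for the sketch
}

enum Error {
    Api(http::StatusCode),
    Json(serde_json::Error),
}

// The library builds a plain request value; the caller sends it with
// whatever sync or async HTTP client it likes.
fn list_pods_request(namespace: &str) -> http::Request<Vec<u8>> {
    http::Request::builder()
        .method("GET")
        .uri(format!("/api/v1/namespaces/{namespace}/pods"))
        .body(Vec::new())
        .expect("statically known to be a valid request")
}

// The library parses the response bytes, however the caller got them.
fn parse_list_pods_response(status: http::StatusCode, body: &[u8]) -> Result<PodList, Error> {
    if !status.is_success() {
        return Err(Error::Api(status));
    }
    serde_json::from_slice(body).map_err(Error::Json)
}
```

Because neither function performs any I/O, the same pair works unchanged with a blocking client or an async one.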
-
Welcome to Comprehensive Rust
Macro expansion is slow, but only noticeably in the specific situation of a) third-party proc macros, b) a debug build, and c) a few thousand invocations of said proc macros. This is because debug builds compile proc macros in debug mode too, so while the macro itself compiles quickly (because it's a debug build), it ends up running slowly (because it's a debug build).
I know this from observing it on a mostly auto-generated crate that had a couple of thousand types with `#[derive(serde::…)]` on each. [1]
This doesn't affect most users: first-party macros like `#[derive(Debug)]` etc. are not slow because they're part of rustc and thus optimized regardless of the profile, and even with third-party macros it's unlikely that they have thousands of invocations. Even if it *is* a problem, users can opt in to compiling just the proc macros in release mode. [2]
[1]: https://github.com/Arnavion/k8s-openapi/issues/4
[2]: https://github.com/rust-lang/cargo/issues/5622
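For concreteness, the opt-in in [2] is a Cargo profile override; a minimal sketch (assuming a standard Cargo project) looks like this:

```toml
# In Cargo.toml: compile build scripts and proc macros (and their
# dependencies) with optimizations even in debug builds, so the
# macros run fast while the rest of the crate still builds quickly.
[profile.dev.build-override]
opt-level = 3
```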
-
OpenAPI Generator allows generation of API client libraries from OpenAPI Specs
>OpenAPI Generator allows generation of API client libraries from OpenAPI Specs
It does, but the generated code can be very shitty for some combinations of spec and output language. I maintain Rust bindings for the Kubernetes API server's API, and I chose to write my own code generator instead. The README at https://github.com/Arnavion/k8s-openapi has more details.
-
Any good toy Rust project for k8s application?
k8s_openapi - https://github.com/Arnavion/k8s-openapi
-
Approaches for Chaining Access to Deeply Nested Optional Structs
For example: I have a routine that checks the value of (from k8s-openapi): Ingress -> IngressStatus -> LoadBalancerStatus -> Vec[0] -> String
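A minimal sketch of that chain (the struct shapes only approximate k8s-openapi's types, so treat the field names as hypothetical): `?` on each Option short-circuits the whole routine to None when any level is missing.

```rust
// Hypothetical struct shapes approximating k8s-openapi's Ingress types.
struct Ingress { status: Option<IngressStatus> }
struct IngressStatus { load_balancer: Option<LoadBalancerStatus> }
struct LoadBalancerStatus { ingress: Option<Vec<LoadBalancerIngress>> }
struct LoadBalancerIngress { hostname: Option<String> }

// Each `?` bails out with None if that level of the chain is absent.
fn first_hostname(i: &Ingress) -> Option<&str> {
    i.status.as_ref()?
        .load_balancer.as_ref()?
        .ingress.as_deref()?      // Option<Vec<_>> -> Option<&[_]>
        .first()?                 // the Vec[0] step
        .hostname.as_deref()      // Option<String> -> Option<&str>
}
```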
-
Writing a Kubernetes CRD Controller in Rust
As the maintainer of the Rust bindings that back the library used in the article (kube), I can confirm that Kubernetes' OpenAPI spec requires a lot of Kubernetes-specific handling to generate a good client, handling that generic OpenAPI generators do not provide.
See https://github.com/Arnavion/k8s-openapi/blob/master/README.m... for a full description.
I can also confirm that I keep it up to date with Kubernetes releases and have been doing so for the ~3 years it's been around. Not just the minor releases every few months, but even the point releases; these days the latter usually only involve updating the test cases rather than code changes, and they're done within a few hours of the upstream release.
dfdx
-
Shape Typing in Python
-
Candle: Torch Replacement in Rust
I keep checking the progress on dfdx for this reason. It does what I (and, I assume from context, you) want: it provides static checking of tensor shapes. Which is fantastic. Not quite as much inference as I'd like, but I love getting compile-time errors that I forgot to transpose before a matmul.
It depends on the generic_const_exprs feature, which is still, to quote, "highly experimental":
https://github.com/rust-lang/rust/issues/76560
Definitely not for production use, but it gives a flavor of where things can head in the medium term, and it's... it's nice. You could imagine future type support allowing even more inference for some intermediate shapes, of course, but even what it has now is really nice. Like this cute little convnet example:
https://github.com/coreylowman/dfdx/blob/main/examples/night...
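To illustrate the kind of compile-time shape checking being described, here's a toy sketch using plain const generics on stable Rust (not dfdx's actual API; dfdx needs generic_const_exprs for the fancier cases):

```rust
// A toy shape-typed matrix: dimensions live in the type.
struct Matrix<const R: usize, const C: usize>([[f32; C]; R]);

// The inner dimension K must match between the two arguments, and the
// output shape R x C is determined, all at compile time.
fn matmul<const R: usize, const K: usize, const C: usize>(
    a: &Matrix<R, K>,
    b: &Matrix<K, C>,
) -> Matrix<R, C> {
    let mut out = Matrix([[0.0; C]; R]);
    for i in 0..R {
        for j in 0..C {
            for k in 0..K {
                out.0[i][j] += a.0[i][k] * b.0[k][j];
            }
        }
    }
    out
}

fn main() {
    let a = Matrix::<2, 3>([[1.0; 3]; 2]);
    let b = Matrix::<3, 4>([[1.0; 4]; 3]);
    let _ab: Matrix<2, 4> = matmul(&a, &b);
    // Forgetting to transpose is a type error, not a runtime panic:
    // matmul(&b, &a) fails to compile with
    // "expected Matrix<4, _>, found Matrix<2, _>".
}
```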
-
Dfdx: Shape Checked Deep Learning in Rust
-
Are there some machine or deep learning crates on Rust?
-
[Discussion] What crates would you like to see?
And for transformers: it's really early days for dfdx, but it's a library that aims to sit basically at the PyTorch level of abstraction; the difference is that it's not just coded in Rust, it also follows the Rust-y/functional-y philosophy of "if it compiles, it runs".
-
rapl: Rank Polymorphic array library for Rust.
Wow, that is super interesting. I actually tried to use GATs at first to be generic over shapes, but I couldn't make it work; I'm sure it will be possible in the future, though. There is a library, dfdx, that does something similar to what you mentioned, but it feels a little clumsy to me.
-
Announcing cudarc and fully GPU accelerated dfdx: ergonomic deep learning ENTIRELY in rust, now with CUDA support and tensors with mixed compile and runtime dimensions!
Awesome, I added an issue here https://github.com/coreylowman/dfdx/issues/597. We can discuss more there! The first step will just be adding the device and implementing tensor creation methods for it.
-
In which circumstances is C++ better than Rust?
The next release of dfdx includes a CUDA device and implements many ops. The same dev created a new crate, cudarc, as a wrapper around the CUDA toolkit.
-
This year I tried solving AoC using Rust, here are my impressions coming from Python!
-
Deep Learning in Rust: Burn 0.4.0 released and plans for 2023
A question I have is: what are the philosophical/design differences from dfdx? As someone who's played around with dfdx and only skimmed Burn's README, it seems like dfdx leans into Rust's type system and type inference to check as much as is possible at compile time. I wonder if you've had a chance to look at dfdx and would like to outline what you think the differences are. Thanks!
What are some alternatives?
kube - Rust Kubernetes client and controller runtime
fusionauth-openapi - FusionAuth OpenAPI client
burn - Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
go - The Go programming language
DiffSharp - DiffSharp: Differentiable Functional Programming
spectrum - OpenAPI Spec SDK and Converter for OpenAPI 3.0 and 2.0 Specs to Postman 2.0 Collections. Example RingCentral spec included.
executorch - On-device AI across mobile, embedded and edge for PyTorch
smithy - Smithy is a protocol-agnostic interface definition language and set of tools for generating clients, servers, and documentation for any programming language.
rust - Empowering everyone to build reliable and efficient software.
tokio - A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...
triton - Development repository for the Triton language and compiler