-
tpr
An anonymous and decentralized routing protocol. The code will be up once it is done, but the paper is already available.
-
burn
(Discontinued) Burn is a comprehensive dynamic Deep Learning Framework built using Rust, with extreme flexibility, compute efficiency, and portability as its primary goals. [Moved to: https://github.com/Tracel-AI/burn] (by burn-rs)
-
simdutf
Unicode routines (UTF8, UTF16, UTF32) and Base64: billions of characters per second using SSE2, AVX2, NEON, AVX-512, RISC-V Vector Extension. Part of Node.js and Bun.
At Meilisearch we are currently trying to add better error handling in heed v0.20, our LMDB key-value store wrapper. Unfortunately, when there are a lot of generics it can become harder to work with…
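A minimal sketch of the kind of typed error handling this describes (hypothetical names, not heed's actual API): a concrete error enum lets callers match on failure modes even when the value type being decoded is generic.

```rust
use std::fmt;

// Hypothetical error type for a generic key-value wrapper (illustrative
// only; heed's real error types differ).
#[derive(Debug)]
enum KvError {
    // Records which target type failed to decode.
    Decoding(String),
    KeyNotFound,
}

impl fmt::Display for KvError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            KvError::Decoding(t) => write!(f, "failed to decode value as {t}"),
            KvError::KeyNotFound => write!(f, "key not found"),
        }
    }
}

impl std::error::Error for KvError {}

// The decode step stays generic over T, but the error type stays
// concrete, so callers don't have to thread a generic error through.
fn decode<T: std::str::FromStr>(raw: &str) -> Result<T, KvError> {
    raw.parse::<T>()
        .map_err(|_| KvError::Decoding(std::any::type_name::<T>().to_string()))
}
```

The point of the design is that the generic parameter is confined to one function, while the error surface callers see is a plain enum.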
Rewriting my hierarchical image album generator in Rust, because I need a better design than my old Albumin had for albums containing thousands of images (many other features are coming as well).
I'm implementing tpr, so I'll definitely take a look at this
Recently I learned about the Burn project (https://github.com/burn-rs/burn) and started contributing. My initial task was to make inference work with no_std so I can build a model in WebAssembly.
First Rust project: a Fountain screenplay markup parser called Farce.
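For readers unfamiliar with Fountain: it is a plain-text screenplay format where, for example, scene headings are lines starting with prefixes like INT. or EXT. A simplified classifier sketch (not Farce's actual code; `Element` and `classify` are illustrative names):

```rust
// Simplified Fountain line classifier: real parsers handle many more
// element types (dialogue, transitions, forced headings, etc.).
#[derive(Debug, PartialEq)]
enum Element {
    SceneHeading(String),
    Action(String),
}

fn classify(line: &str) -> Element {
    let trimmed = line.trim();
    // Scene headings begin with one of these prefixes (case-insensitive here
    // for simplicity).
    let is_heading = ["INT.", "EXT.", "INT./EXT.", "I/E."]
        .iter()
        .any(|p| trimmed.to_uppercase().starts_with(p));
    if is_heading {
        Element::SceneHeading(trimmed.to_string())
    } else {
        Element::Action(trimmed.to_string())
    }
}
```

A full parser would run a pass like this per line and then group the resulting elements into scenes.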
I'm continuing to experiment with parsing performance, unicode and parallelization (https://github.com/garlicbreadcleric/parsing-sandbox). Motivation outline:
The next big thing is making it LSP-compatible. All language servers must implement UTF-16 based character offsets, which is kinda unfortunate considering that files are much more likely to be stored in UTF-8 (I think?). I don't want to do the UTF-8 -> UTF-16 transcoding, so instead I'll use the excellent simdutf library to count how many UTF-16 code units a UTF-8 string would take if it were transcoded — which is much faster than actual transcoding. So this is what I'm going to do this week — rewriting parsers to produce UTF-16 offsets + some final benchmarking. After that is done, I'll consider the "research" part of this project completed and will start writing an actual Markdown parser.
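The scalar version of that length computation is straightforward: each Unicode scalar value takes one UTF-16 code unit inside the Basic Multilingual Plane and two (a surrogate pair) outside it. A sketch in plain Rust using the standard library's `char::len_utf16` (simdutf does the same count, just vectorized):

```rust
// Count how many UTF-16 code units a UTF-8 string would occupy,
// without actually transcoding it. BMP characters take 1 code unit;
// characters above U+FFFF take 2 (a surrogate pair).
fn utf16_len(s: &str) -> usize {
    s.chars().map(|c| c.len_utf16()).sum()
}
```

This is exactly the quantity an LSP server needs to convert byte offsets into UTF-16 column positions; the SIMD version only changes how fast the sum is computed, not what it computes.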
Shameless plugs - GitHub: https://github.com/moali87/jirust Twitch: https://www.twitch.tv/mo_ali141