simdutf vs burn

| | simdutf | burn |
|---|---|---|
| Mentions | 11 | 34 |
| Stars | 960 | 4,845 |
| Growth | 4.8% | - |
| Activity | 9.1 | 8.9 |
| Last Commit | 3 days ago | 5 months ago |
| Language | C++ | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simdutf
- Glibc Buffer Overflow in Iconv
- Vectorizing Unicode conversions on real RISC-V hardware
The project was mostly inspired by simdutf [0], which has been around for a couple of years already, and I don't think iconv has vectorized implementations for other architectures.
[0] https://github.com/simdutf/simdutf
- Cray-1 performance vs. modern CPUs
I'm actually doing something quite similar in my in-progress Unicode conversion routines.
For utf8 validation there is a clever algorithm that uses three 4-bit look-ups to detect utf8 errors: https://github.com/simdutf/simdutf/blob/master/src/icelake/i...
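The nibble-lookup idea can be modeled in scalar code. The sketch below is my own simplified illustration, not simdutf's actual tables: it covers only two error classes (overlong 2-byte forms and encoded surrogates), whereas the real algorithm's tables classify every UTF-8 error. The structure is the point: three 4-bit table lookups, one per nibble of the previous byte plus one on the high nibble of the current byte, are ANDed together, and a nonzero result flags an error. In the vectorized version each lookup is a single byte shuffle (vpshufb on x86, vrgather.vv on RISC-V).

```cpp
#include <cassert>
#include <cstdint>

// Toy scalar emulation of the three 4-bit lookup trick (NOT simdutf's full
// tables). Each bit is an error class; ANDing the three classifications
// yields nonzero exactly when all three nibble conditions for an error hold.
constexpr uint8_t OVERLONG_2 = 1; // C0/C1 lead byte + continuation
constexpr uint8_t SURROGATE  = 2; // ED A0..BF (encoded UTF-16 surrogate)

// Indexed by the high nibble of the previous byte.
constexpr uint8_t prev_hi[16] = {
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    OVERLONG_2, // 0xC_: possible overlong 2-byte lead
    0,
    SURROGATE,  // 0xE_: possible surrogate lead
    0,
};
// Indexed by the low nibble of the previous byte.
constexpr uint8_t prev_lo[16] = {
    OVERLONG_2, OVERLONG_2,            // 0x_0, 0x_1: C0/C1
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    SURROGATE,                         // 0x_D: ED
    0, 0,
};
// Indexed by the high nibble of the current byte.
constexpr uint8_t cur_hi[16] = {
    0, 0, 0, 0, 0, 0, 0, 0,
    OVERLONG_2, OVERLONG_2,            // 0x8_, 0x9_: continuation bytes
    OVERLONG_2 | SURROGATE,            // 0xA_: continuation, surrogate range
    OVERLONG_2 | SURROGATE,            // 0xB_: continuation, surrogate range
    0, 0, 0, 0,
};

// In the SIMD version, each of these lookups is one shuffle instruction.
uint8_t classify(uint8_t prev, uint8_t cur) {
    return prev_hi[prev >> 4] & prev_lo[prev & 0xF] & cur_hi[cur >> 4];
}
```

For example, `classify(0xC0, 0x80)` reports the overlong error, while the valid pair `classify(0xC2, 0x80)` yields zero because the low-nibble lookup rules it out.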
Aside on LMUL, if you haven't encountered it yet: RVV allows you to group vector registers when setting the vector configuration with vsetvl, such that vector instructions operate on multiple vector registers at once. That is, with LMUL=1 you have v0,v1,...,v31. With LMUL=2 you effectively have v0,v2,...,v30, where each vector register is twice as large; with LMUL=4, v0,v4,...,v28; with LMUL=8, v0,v8,...,v24.
In my code, I happen to read the data with LMUL=2. The trivial implementation would just call vrgather.vv with LMUL=2, but since the lookup table only needs 128 bits, a single LMUL=1 register is enough to hold it (the V extension mandates a minimum VLEN of 128 bits).
So I do six LMUL=1 vrgather.vv's instead of three LMUL=2 vrgather.vv's, because no lane crossing is required and this will run faster in hardware (see [0] for a relevant micro benchmark):
# codegen for equivalent of that function
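To see why the split is legal, here is a hypothetical scalar model of the two gather strategies (my own illustration, not the actual codegen): because the 16-byte table fits in one LMUL=1 register, an LMUL=2 gather over 32 indices produces exactly the same result as two independent LMUL=1 gathers over the two halves of the index vector, so no element ever needs data from the other register of the group.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Scalar model of vrgather.vv with a 16-entry byte table, assuming VLEN=128.
using Vec16 = std::array<uint8_t, 16>;  // one VLEN=128-bit register
using Vec32 = std::array<uint8_t, 32>;  // an LMUL=2 register group

// One LMUL=2 gather: 32 results, all read from the same 16-byte table.
Vec32 gather_lmul2(const Vec16& table, const Vec32& idx) {
    Vec32 out{};
    for (int i = 0; i < 32; ++i) out[i] = table[idx[i] & 0xF];
    return out;
}

// One LMUL=1 gather: 16 results from the same table.
Vec16 gather_lmul1(const Vec16& table, const Vec16& idx) {
    Vec16 out{};
    for (int i = 0; i < 16; ++i) out[i] = table[idx[i] & 0xF];
    return out;
}
```

Since every result depends only on the shared table and its own index, splitting the LMUL=2 gather into two LMUL=1 gathers over the index halves is bitwise identical, which is exactly the no-lane-crossing property the hardware can exploit.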
- What C++ library do you wish existed but hasn’t been created yet?
UTF-8 normalization, stemming, case-insensitive comparison. See https://github.com/unicode-rs for a Rust example. What are the options for C++? 1. Translate to UTF-16 (https://github.com/simdutf/simdutf) and use ICU -- slow. 2. Boost.Text (https://github.com/tzlaine/text) -- also slow (because the author doesn't care or couldn't care). We made a lot of patches to make our library faster than Lucene, but this part is still slower than ICU for UTF-16 (and ICU for UTF-16 is itself very slow...).
- [Preprint] Transcoding Unicode Characters with AVX-512 Instructions
You can find the corresponding assembly code in this repository. The main branch only contains implementations based on C++ with intrinsics.
- What's everyone working on this week (10/2023)?
The next big thing is making it LSP-compatible. All language servers must implement UTF-16-based character offsets, which is kind of unfortunate considering that files are much more likely to be stored in UTF-8 (I think?). I don't want to do the UTF-8 -> UTF-16 transcoding, so instead I'll use the excellent simdutf library to count how many UTF-16 code units a UTF-8 string would take if it were transcoded, which is much faster than actually transcoding. So this is what I'm going to do this week: rewriting parsers to produce UTF-16 offsets plus some final benchmarking. After that is done, I'll consider the "research" part of this project completed and will start writing an actual Markdown parser.
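The counting trick is easy to model in scalar code. The sketch below is my own illustration of the idea (simdutf ships a vectorized version as simdutf::utf16_length_from_utf8): every UTF-8 scalar value becomes one UTF-16 code unit, except supplementary-plane characters, encoded as 4-byte UTF-8 sequences, which become a surrogate pair of two units. So it suffices to count non-continuation bytes and count 4-byte lead bytes twice, with no transcoding buffer at all.

```cpp
#include <cstddef>
#include <cassert>
#include <string_view>

// Scalar sketch: UTF-16 length of a valid UTF-8 string without transcoding.
// Each non-continuation byte starts a new scalar (one UTF-16 unit); each
// 4-byte lead (>= 0xF0) adds a second unit for the surrogate pair.
std::size_t utf16_length_from_utf8(std::string_view s) {
    std::size_t units = 0;
    for (unsigned char c : s) {
        if ((c & 0xC0) != 0x80)  // not a continuation byte
            units += 1;
        if (c >= 0xF0)           // 4-byte lead: surrogate pair in UTF-16
            units += 1;
    }
    return units;
}
```

For example, the euro sign (3 UTF-8 bytes) counts as one UTF-16 unit, while U+1D11E MUSICAL SYMBOL G CLEF (4 bytes) counts as two. Because the loop only inspects each byte once and never writes output, a SIMD version reduces to popcounts over byte-compare masks.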
- Why would a language not natively support SIMD?
You can find the assembly code here: https://github.com/simdutf/simdutf/tree/clausecker The corresponding C++ code is in the main branch.
- High speed Unicode routines using SIMD
- text-2.0-rc1 with UTF8 underlying representation is available for testing!
Or via the ultrafast simdutf.
- Simdutf: Unicode validation and transcoding at billions of characters per second
burn
- Burn 0.10.0 Released 🔥 (Deep Learning Framework)
Release Note: https://github.com/burn-rs/burn/releases/tag/v0.10.0
- Deep Learning Framework in Rust: Burn 0.10.0 Released
- Why Rust Is the Optimal Choice for Deep Learning, and How to Start Your Journey with the Burn Deep Learning Framework
The comprehensive, open-source deep learning framework in Rust, Burn, has recently undergone significant advancements in its latest release, highlighted by the addition of The Burn Book 🔥. There has never been a better moment to embark on your deep learning journey with Rust, as this book will guide you through your initial project, providing extensive explanations and links to relevant resources.
- Candle: Torch Replacement in Rust
Burn (a deep learning framework in Rust) already has a wgpu (WebGPU) backend; it was released recently. Check it out: https://github.com/burn-rs/burn.
- Burn – A Flexible and Comprehensive Deep Learning Framework in Rust
- Announcing Burn-Wgpu: New Deep Learning Cross-Platform GPU Backend
For more details about the latest release see the release notes: https://github.com/burn-rs/burn/releases/tag/v0.8.0.
- Are there any ML crates that would compile to WASM?
Tract is the most well known ML crate in Rust, which I believe can compile to WASM - https://github.com/sonos/tract/. Burn may also be useful - https://github.com/burn-rs/burn.
- Any working wgpu compute example that would run in a browser?
We, the burn team, are working on the wgpu backend (WebGPU) for Burn deep learning framework. You can check out the current state: https://github.com/burn-rs/burn/tree/main/burn-wgpu
- I’ve fallen in love with rust so now what?
Here is the project: https://github.com/burn-rs/burn
- Is anyone doing Machine Learning in Rust?
Disclaimer: I'm the main author of Burn, https://burn-rs.github.io.
What are some alternatives?
simdutf8 - SIMD-accelerated UTF-8 validation for Rust.
candle - Minimalist ML framework for Rust
DirectXMath - DirectXMath is an all inline SIMD C++ linear algebra library for use in games and graphics apps
dfdx - Deep learning in Rust, with shape checked tensors and neural networks
simde - Implementations of SIMD instruction sets for systems which don't natively support them.
tch-rs - Rust bindings for the C++ api of PyTorch.
eve - Expressive Vector Engine - SIMD in C++ Goes Brrrr
Graphite - 2D raster & vector editor that melds traditional layers & tools with a modern node-based, non-destructive, procedural workflow.
Vc - SIMD Vector Classes for C++
tract - Tiny, no-nonsense, self-contained, Tensorflow and ONNX inference [Moved to: https://github.com/sonos/tract]
simdjson - Parsing gigabytes of JSON per second : used by Facebook/Meta Velox, the Node.js runtime, ClickHouse, WatermelonDB, Apache Doris, Milvus, StarRocks
L2 - l2 is a fast, Pytorch-style Tensor+Autograd library written in Rust