| | rust-bert | memory64 |
|---|---|---|
| Mentions | 7 | 7 |
| Stars | 2,427 | 179 |
| Growth | - | 2.2% |
| Activity | 6.8 | 8.5 |
| Latest commit | about 2 months ago | 2 days ago |
| Language | Rust | WebAssembly |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rust-bert
-
How to leverage the state-of-the-art NLP models in Rust
```bash
brew install libtorch
brew link libtorch
brew ls --verbose libtorch | grep dylib
export LIBTORCH=$(brew --cellar pytorch)/$(brew info --json pytorch | jq -r '.[0].installed[0].version')
export LD_LIBRARY_PATH=${LIBTORCH}/lib:$LD_LIBRARY_PATH
git clone https://github.com/guillaume-be/rust-bert.git
cd rust-bert
ORT_STRATEGY=system cargo run --example sentence_embeddings
```
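For context, the `sentence_embeddings` example that last command runs boils down to rust-bert's sentence-embeddings pipeline. A minimal sketch, assuming LIBTORCH is set up as above (the model choice and error handling here are illustrative, not copied from the example itself):

```rust
// Minimal sketch of rust-bert's sentence-embeddings pipeline.
// AllMiniLmL12V2 is just one of the pretrained options the library can fetch.
use rust_bert::pipelines::sentence_embeddings::{
    SentenceEmbeddingsBuilder, SentenceEmbeddingsModelType,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Downloads the pretrained weights on first run and executes them via libtorch.
    let model = SentenceEmbeddingsBuilder::remote(SentenceEmbeddingsModelType::AllMiniLmL12V2)
        .create_model()?;

    let sentences = ["This is an example sentence", "Each sentence is converted to a vector"];
    let embeddings = model.encode(&sentences)?; // one Vec<f32> per input sentence
    println!("{} embeddings of length {}", embeddings.len(), embeddings[0].len());
    Ok(())
}
```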
-
Transformers.js
I'd like to use this transformer model in Rust (because it's on the backend, because I can do the data munging there and it will be faster, and for other reasons). It looks like a good model! But it doesn't compile on Apple Silicon due to weird linking issues that aren't apparent - https://github.com/guillaume-be/rust-bert/issues/338. I've spent a large part of today and yesterday trying to find out why. The only other library that I've found for doing this kind of thing programmatically (particularly sentiment analysis) is this (https://github.com/JohnSnowLabs/spark-nlp). Some of the models look a little older, which is OK, but it does mean that I'd have to do this in another language.
Does anyone know of any sentiment analysis software that can be tuned (other than VADER - I'm looking for something more along the lines of a transformer model), like BERT, but is pretrained and can be used in Rust or Python? Otherwise I'll probably end up using spark-nlp and having to spin up another process.
Thanks.
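For reference, rust-bert does ship a pretrained sentiment pipeline of the kind the question asks about. A rough sketch of its documented usage, assuming the linking issue above is resolved (defaults may differ across rust-bert versions):

```rust
// Sketch of rust-bert's pretrained sentiment pipeline (DistilBERT fine-tuned
// on SST-2 by default). Exact defaults may vary across library versions.
use rust_bert::pipelines::sentiment::SentimentModel;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let sentiment_classifier = SentimentModel::new(Default::default())?;

    let input = [
        "The service was excellent and the staff were friendly.",
        "I waited two hours and nobody ever showed up.",
    ];
    // Each prediction carries a polarity (Positive/Negative) and a score.
    for sentiment in sentiment_classifier.predict(&input) {
        println!("{sentiment:?}");
    }
    Ok(())
}
```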
-
Running large language models like ChatGPT on a single GPU
Give this a look: https://github.com/guillaume-be/rust-bert
If you have Pytorch configured correctly, this should "just work" for a lot of the smaller models. It won't be a 1:1 ChatGPT replacement, but you can build some pretty cool stuff with it.
> it's basically Python or bust in this space
More or less, but that doesn't have to be a bad thing. If you're on Apple Silicon, you have plenty of performance headroom to deploy Python code for this. I've gotten this library to work on systems with as little as 2gb of memory, so outside of ultra-low-end use cases, you should be fine.
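As a rough illustration of what "just work" means here, the text-generation pipeline can be driven like the sketch below. The prompt is made up, and the exact return type of `generate` has varied across rust-bert versions:

```rust
// Sketch of the text-generation pipeline with its default (GPT-2) model.
// Larger models such as GPT-Neo are selected via TextGenerationConfig rather
// than Default::default(); memory use scales with the model chosen.
use rust_bert::pipelines::text_generation::TextGenerationModel;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = TextGenerationModel::new(Default::default())?;

    let prompt = "The strangest thing about running a language model locally is";
    // NOTE: depending on the rust-bert version, `generate` returns either
    // Vec<String> or Result<Vec<String>, _>; Debug-printing covers both.
    let output = model.generate(&[prompt], None);
    println!("{output:?}");
    Ok(())
}
```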
-
Self-hosted Whisper-based voice recognition server for open Android phones
I suspect something similar is possible with ChatGPT. Using the GPT-neo-125m model I've been able to get some really convincing (if lackluster) answers on 4 core ARM hardware and less than 2gb of memory. With enough sampling, you can get legible paragraph-length responses out in less than 10 seconds; that's pretty good for an offline program in my book.
I'm using rust-bert to serve it over a Discord bot, similar to one of their examples[0]. It's running on Oracle Cloud vCPUs right now, but with dedicated hardware and ML acceleration I can imagine the field moving really quickly.
[0] https://github.com/guillaume-be/rust-bert/blob/master/exampl...
-
Ask HN: What AI developer tools do you wish you'd discovered sooner?
Maybe a little played-out, but I've been having a blast with the rust-bert library this weekend: https://github.com/guillaume-be/rust-bert
With a little finagling, you can get the GPT-Neo-1.3b model running on those free Oracle ARM VMs you can provision. I'm impressed, especially with the performance of the smallest model, which uses less than a gig of memory.
-
Ask HN: Has anyone made a toy that integrates ChatGPT with voice into a toy?
Nope, but it's probably possible on a smaller, hobbyist scale. I've been playing with a few GPT libraries this week (namely rust-bert[0]) and I've been really impressed with the local generation results on my crappy 2-core netbook. I can get 2 sentences to generate in ~5 seconds, which is pretty good in my book.
Armed with a Pi-style SBC and your AI library of choice, I bet you could get pretty far implementing some stuff. Bonus points if you use Whisper for speech-to-text, and double brownie points if you can get an AI voice to read the generation back.
[0] https://github.com/guillaume-be/rust-bert/tree/master/exampl...
-
[D] Is Rust stable/mature enough to be used for production ML? Is making Rust-based python wrappers a good choice for performance heavy uses and internal ML dependencies in 2021?
If you are using BERT models and some other related stuff, then you should check out the rust-bert and BERT sentence-embeddings repos: https://github.com/guillaume-be/rust-bert
memory64
-
Top 8 Recent V8 Updates
A completed implementation of memory64 for memory-hungry applications.
-
Extism Makes WebAssembly Easy
Indeed, webassembly is moving extremely slowly. I started a project years ago expecting https://github.com/WebAssembly/memory-control/blob/main/prop... and https://github.com/WebAssembly/memory64 to be fixed at some point. Neither are yet, and the project still suffers from it to this day.
I think wasm is still great without these fixes, but I have lost confidence in the idea that wasm will reach its full potential any time soon.
-
How Photoshop solved working with files larger than can fit into memory
It's in the works: https://github.com/WebAssembly/memory64
Starting with 32bit had some performance advantages because 64bit runtimes can use virtual memory shenanigans to implement bounds checking with zero overhead. In wasm64 they'll have to do explicit bounds checking instead.
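A sketch of the difference being described (illustrative only, not any particular engine's implementation):

```rust
// Why wasm32 loads can skip bounds checks on a 64-bit host while wasm64 loads cannot.

// wasm32: the host reserves 4 GiB of virtual address space (plus guard pages)
// for the linear memory, so base + any u32 offset stays inside the reservation.
// An out-of-bounds access lands on an unmapped guard page and traps, with no
// per-access branch.
unsafe fn wasm32_load_u8(base: *const u8, addr: u32) -> u8 {
    *base.add(addr as usize)
}

// wasm64: a u64 offset can reach far beyond any practical reservation, so the
// generated code has to compare against the current memory size on every access.
fn wasm64_load_u8(memory: &[u8], addr: u64) -> Result<u8, &'static str> {
    usize::try_from(addr)
        .ok()
        .and_then(|i| memory.get(i).copied())
        .ok_or("out-of-bounds memory access (trap)")
}
```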
-
Transformers.js
Right - currently, everything runs using WASM (32-bit, with 64-bit coming soon [1,2]), and I plan to add support for WebGPU soon!
(WebGPU, the successor to WebGL, is coming out in April 2023 [3])
[1] https://github.com/WebAssembly/memory64/issues/36#issuecomme...
-
What was the rationale for 32-bit memory addresses in WebAssembly? It seems very short-sighted, considering it only came out pretty recently, in 2017
It shouldn't be a big surprise that a 64-bit pointer extension is out there and being worked on. The great thing about a VM is you can integrate major changes like this when they are needed and with the benefit of experience and hindsight. If the 4GB limit turns out to be restrictive then it can be lifted.
-
Why Am I Excited About WebAssembly?
-
Increasing Smart Contract Canister Memory Proposal is live for review
The goal of this proposal is to increase the amount of memory that canisters can access, [eventually] bound only by the actual capacity of the subnet. Since the Memory64 proposal is not standardized yet and its implementation in Wasmtime is not production-ready yet, this proposal enables the increase by introducing a new stable memory API.
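For context, the stable memory API in question is a set of 64-bit system calls a canister imports from the IC runtime instead of relying on wasm64 linear memory. A hedged sketch of their shape, using the `stable64_*` names from the interface specification (exact signatures should be checked against the current spec or ic-cdk):

```rust
// Hedged sketch of the 64-bit stable-memory imports a canister declares from
// the "ic0" module when compiled to wasm for the Internet Computer.
#[link(wasm_import_module = "ic0")]
extern "C" {
    fn stable64_size() -> u64;                           // current size, in 64 KiB pages
    fn stable64_grow(new_pages: u64) -> i64;             // previous size, or -1 on failure
    fn stable64_write(offset: u64, src: u64, size: u64); // copy from the wasm heap into stable memory
    fn stable64_read(dst: u64, offset: u64, size: u64);  // copy from stable memory into the wasm heap
}
```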
What are some alternatives?
Dlib - A toolkit for making real world machine learning and data analysis applications in C++
interface-types
speak - Talk with your machine in this minimalistic Rust crate!
wasmtime - A fast and secure runtime for WebAssembly
FlexGen - Running large language models like OPT-175B/GPT-3 on a single GPU. Focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen]
botnet - Multiplayer programming game using Rust and WebAssembly
are-we-learning-yet - How ready is Rust for Machine Learning?
temporal-polyfill - A lightweight polyfill for Temporal, successor to the JavaScript Date object
ggml - Tensor library for machine learning
proposal-temporal - Provides standard objects and functions for working with dates and times.
lightseq - LightSeq: A High Performance Library for Sequence Processing and Generation
component-sandbox-demo