blindai vs onnxruntime-rs
| | blindai | onnxruntime-rs |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 489 | 261 |
| Growth | 1.2% | - |
| Activity | 8.0 | 0.0 |
| Latest commit | about 1 month ago | about 2 months ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
blindai
-
[D] Any options for using GPT models using proprietary data?
We are working on an open-source project, BlindAI (https://github.com/mithril-security/blindai) to answer exactly that: privacy when sending data to remote AI models.
-
[P] Secret Whisper: Deploy OpenAI Whisper model with privacy using BlindAI
BlindAI (https://github.com/mithril-security/blindai) is an open-source confidential AI deployment solution. By using secure enclaves (Intel SGX for now, soon AMD SEV and Nvidia Confidential Computing), we provide end-to-end protection for users’ data, even when sending it to the cloud for AI inference.
-
[P] Introducing BlindAI, an Open-source, fast and privacy-friendly AI deployment solution. Benefit from state-of-the-art AI without ever revealing your data!
A good thing with enclaves is that the hardware protection enables us to use regular AES to secure communication with the enclave, which means no ciphertext expansion and a lightweight client side. We do not need a complicated client; we just need a slightly modified TLS client with additional security checks, like remote attestation. You can have a look at our client side, it's light (https://github.com/mithril-security/blindai/tree/master/client).
-
BlindAI: fast and privacy-friendly AI deployment solution in Rust
I am glad to introduce BlindAI, an AI deployment solution leveraging secure enclaves to make remotely hosted AI models privacy-friendly. We use the tract project as our inference engine to serve AI models in ONNX format inside an enclave, and the Rust SGX SDK to write the enclave itself in Rust.
- BlindAI: Open-source, fast and privacy-friendly AI deployment solution in Rust
onnxruntime-rs
-
Deep Learning in Rust on GPU with onnxruntime-rs
I did: https://github.com/nbigaouette/onnxruntime-rs/pull/87 but the maintainer seems to be inactive. I sent an email.
-
Interesting results comparing TF and Rust
I have used https://github.com/nbigaouette/onnxruntime-rs, a Rust wrapper around the ONNX Runtime C++ library, on a PyTorch model, and did not see any difference in GPU compute time between ONNX from Python and ONNX from Rust.
What are some alternatives?
incubator-teaclave-sgx-sdk - Apache Teaclave (incubating) SGX SDK helps developers to write Intel SGX applications in the Rust programming language, and also known as Rust SGX SDK.
tract - Tiny, no-nonsense, self-contained, Tensorflow and ONNX inference
ire - I2P router implementation in Rust
rust-gpu - 🐉 Making Rust a first-class language and ecosystem for GPU shaders 🚧
rsrl - A fast, safe and easy to use reinforcement learning framework in Rust.
deno - A modern runtime for JavaScript and TypeScript.
steelix - Your one stop CLI for ONNX model analysis.
ort - A Rust wrapper for ONNX Runtime
whatlang-rs - Natural language detection library for Rust. Try demo online: https://whatlang.org/
csl-mobile-bridge - React-native bindings for Emurgo's cardano-serialization-lib (Cardano haskell Shelley)
L2 - l2 is a fast, Pytorch-style Tensor+Autograd library written in Rust
tractjs - Run ONNX and TensorFlow inference in the browser.