TransformerEngine vs liberate-fhe
| | TransformerEngine | liberate-fhe |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 1,428 | 93 |
| Growth | 13.1% | - |
| Activity | 9.5 | 8.0 |
| Latest commit | 4 days ago | 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | BSD 3-clause Clear License |
- **Stars** - the number of stars a project has on GitHub. **Growth** - month-over-month growth in stars.
- **Activity** - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
TransformerEngine
- **Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1)**

  The 4090 now has its 8-bit float enabled as well; see the [TransformerEngine issue](https://github.com/NVIDIA/TransformerEngine/issues/15).
- **GPUs for Deep Learning in 2023 – An In-depth Analysis**

  Would be curious to see your benchmarks. By the way, NVIDIA will be providing support for FP8 in a future release of CUDA: https://github.com/NVIDIA/TransformerEngine/issues/15

  I think TMA may not matter as much for consumer cards, given the disproportionate amount of FP32/INT32 compute they have. It would be interesting to see how close to theoretical peak people are able to get once CUDA support comes through.
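The comments above refer to FP8 (8-bit floating point) support on H100 and 4090 hardware. As a rough illustration of what the E4M3 FP8 format used by these tensor cores can represent, here is a pure-Python sketch; this is not TransformerEngine code, just an enumeration of the commonly documented E4M3 encoding (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, no infinities, NaN at exponent=15/mantissa=7):

```python
# Sketch: enumerate all representable E4M3 FP8 values and round to nearest.
# Illustrates the limited precision/range the comments discuss; it is NOT
# how TransformerEngine implements FP8 (which runs on tensor-core hardware).

def e4m3_values():
    vals = set()
    for s in (1.0, -1.0):
        # Subnormals: exponent field 0 encodes (m/8) * 2^-6.
        for m in range(8):
            vals.add(s * (m / 8) * 2.0 ** -6)
        # Normals: exponent fields 1..15 encode (1 + m/8) * 2^(e-7).
        for e in range(1, 16):
            for m in range(8):
                if e == 15 and m == 7:
                    continue  # reserved for NaN; E4M3 has no infinities
                vals.add(s * (1 + m / 8) * 2.0 ** (e - 7))
    return sorted(vals)

_VALS = e4m3_values()

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest representable E4M3 value (saturates at ±448)."""
    return min(_VALS, key=lambda v: abs(v - x))

print(quantize_e4m3(0.1))     # -> 0.1015625 (nearest E4M3 neighbour)
print(quantize_e4m3(1000.0))  # -> 448.0 (the E4M3 maximum; saturation)
```

The takeaway is that E4M3 tops out at 448 and spaces values coarsely, which is why frameworks pair FP8 storage with per-tensor scaling factors rather than using the raw format directly.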
liberate-fhe
What are some alternatives?
- **Whisper** - High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
- **concrete-numpy** - Concrete-Numpy: a library to turn programs into their homomorphic equivalents
- **autocvd** - Tool to automatically set CUDA_VISIBLE_DEVICES based on GPU utilization. Usable from the command line and from code.
- **openfhe-development** - The development repository for the OpenFHE library. The current (stable) version is v1.1.4 (released on March 8, 2024).
- **warp-drive** - Extremely fast end-to-end deep multi-agent reinforcement learning framework on a GPU (JMLR 2022)
- **ivy** - The Unified AI Framework
- **nanoGPT** - The simplest, fastest repository for training/finetuning medium-sized GPTs
- **fastaudio** - 🔊 Audio and fastai v2
- **FastFold** - Optimizing AlphaFold training and inference on GPU clusters
- **PyTorch-Guide** - PyTorch Guide
- **Pytorch** - Tensors and dynamic neural networks in Python with strong GPU acceleration