| | lightseq | intel-extension-for-transformers |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 3,098 | 1,941 |
| Growth | 0.9% | 3.4% |
| Activity | 3.7 | 9.9 |
| Latest commit | 12 months ago | 2 days ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
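The exact formula behind the activity number isn't published here, but a recency-weighted score of the kind described above can be sketched with a simple exponential decay over commit ages (the function name, half-life, and weighting scheme are all assumptions for illustration):

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted activity: each commit contributes a weight that
    halves every `half_life_days`, so recent commits count for more."""
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

now = datetime(2024, 1, 1)
recent = [now - timedelta(days=i) for i in range(10)]       # 10 commits in the last 10 days
stale = [now - timedelta(days=300 + i) for i in range(10)]  # 10 commits ~10 months ago
```

With this weighting, the ten recent commits score close to 10 while the ten stale ones score near zero, which matches the intuition that "12 months ago" projects rank far below "2 days ago" ones.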
lightseq
intel-extension-for-transformers
- Intel Extension for Transformers
- What do you think about LLM inference on CPUs?
- 📢Excited to announce https://github.com/intel/intel-extension-for-transformers v1.1 released. Congrats team! 🔥Supported efficient fine-tuning and inference on Xeon SPR and Habana Gaudi 🎯Enabled 4-bits LLM inference on Xeon (better than llama.cpp); improved lm-eval-harness for multiple frameworks
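The 4-bit LLM inference mentioned in the announcement rests on weight quantization. The release notes don't spell out the scheme, but a minimal sketch of symmetric, group-wise INT4 quantization, one common approach (all names and parameters here are illustrative assumptions, not the library's API), looks like this:

```python
import numpy as np

def quantize_int4(w, group_size=32):
    """Symmetric 4-bit quantization: split weights into groups of
    `group_size`, store one float scale per group, and map each weight
    to an integer in [-8, 7]."""
    w = w.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    """Reconstruct approximate float weights from quantized values."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
```

Storing 4-bit integers plus a per-group scale cuts weight memory roughly 4x versus FP16, which is what makes CPU-resident inference of large models practical; the reconstruction error per weight is bounded by half a quantization step.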
What are some alternatives?
- accelerate-kullback-liebler
diffusion-expert - A software for drawing with stable-diffusion support
rust-bert - Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
Stable-Diffusion-NCNN - Stable Diffusion in NCNN with c++, supported txt2img and img2img
FasterTransformer - Transformer related optimization, including BERT, GPT
athena - an open-source implementation of sequence-to-sequence based speech processing engine
cuhnsw - CUDA implementation of Hierarchical Navigable Small World Graph algorithm
- xbyak_aarch64
cuml - cuML - RAPIDS Machine Learning Library
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
wenet - Production First and Production Ready End-to-End Speech Recognition Toolkit