transformers
llama-cpu
| | transformers | llama-cpu |
|---|---|---|
| Mentions | 175 | 9 |
| Stars | 125,021 | 775 |
| Growth | 3.1% | - |
| Activity | 10.0 | 3.1 |
| Latest commit | 4 days ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformers
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options.
The Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) or dm-haiku (https://github.com/google-deepmind/dm-haiku) were some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
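To make the Flax mention above concrete, here is a minimal sketch of how a model is defined and called in flax.linen. The module name, sizes, and data are illustrative placeholders, not code from any of the linked repos.

```python
# Minimal flax.linen sketch: define a module, initialize parameters, apply it.
import jax
import jax.numpy as jnp
import flax.linen as nn

class TinyMLP(nn.Module):
    hidden: int = 128   # illustrative sizes, not from any linked repo
    out: int = 10

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)
        x = nn.relu(x)
        return nn.Dense(self.out)(x)

model = TinyMLP()
# Parameters live outside the module; init returns them as a pytree.
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 32)))
logits = model.apply(params, jnp.ones((4, 32)))
print(logits.shape)  # (4, 10)
```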
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
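For reference, a minimal sketch of enabling prompt lookup decoding in transformers, assuming a recent version that exposes the `prompt_lookup_num_tokens` argument to `generate()`; the model id and prompt are placeholders.

```python
# Sketch: prompt lookup decoding draws candidate tokens from n-grams already
# present in the prompt, then the model verifies them in parallel.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # any causal LM; a small model used here only as an example
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The quick brown fox jumps over the lazy dog. The quick brown"
inputs = tokenizer(prompt, return_tensors="pt")

out = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```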
-
AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
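As a rough illustration of the kind of fine-tuning loop such tutorials cover, here is a minimal Trainer-based sketch. The model id, dataset, and hyperparameters are placeholders and are not taken from the linked tutorial.

```python
# Minimal causal-LM fine-tuning sketch with the Hugging Face Trainer.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "distilgpt2"  # small placeholder model so the sketch stays cheap
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models have no pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1, logging_steps=50),
    train_dataset=tokenized,
    # mlm=False -> labels are the input ids shifted, i.e. causal LM training
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```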
-
Schedule-Free Learning – A New Way to Train
* Superconvergence + the LR range finder + Fast AI's Ranger21 optimizer was the go-to combination for CNNs and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 actually worked better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
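For context, a minimal sketch of the 1-cycle learning-rate policy mentioned above, using PyTorch's built-in scheduler; the model, `max_lr`, and step counts are placeholders rather than values from the linked issue.

```python
# OneCycleLR ramps the LR up to max_lr and then anneals it back down,
# stepping once per batch.
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, total_steps=1000)

for step in range(1000):
    x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # per-batch stepping, as OneCycleLR expects
```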
-
Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
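Before relying on the RoPE fix, a quick version check along these lines (a small sketch, assuming the `packaging` helper is available, as it usually is alongside transformers) can confirm the installed release is recent enough:

```python
# Check that the installed transformers already contains the RoPE fix (>= 4.38.2).
from packaging import version
import transformers

if version.parse(transformers.__version__) < version.parse("4.38.2"):
    raise RuntimeError(
        f"transformers {transformers.__version__} predates the Gemma RoPE fix; "
        "upgrade with `pip install -U transformers`."
    )
```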
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
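If tinkering with the architecture is the goal, a minimal sketch like the following instantiates the open Mistral implementation from transformers without downloading any weights; the tiny config values are made up purely so it runs quickly.

```python
# Build a randomly initialized, scaled-down Mistral model to inspect the architecture.
from transformers import MistralConfig, MistralForCausalLM

config = MistralConfig(
    vocab_size=1000, hidden_size=256, intermediate_size=512,
    num_hidden_layers=4, num_attention_heads=8, num_key_value_heads=4,
    sliding_window=128,
)
model = MistralForCausalLM(config)
print(model)                                                   # layer-by-layer structure
print(sum(p.numel() for p in model.parameters()), "parameters")
```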
-
Failing to reproduce the same evaluation metric scores during inference.
I am aware that using mixed precision reduces the stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
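One way to narrow down the mixed-precision effect is to run the same evaluation once under autocast and once in full fp32 and compare. The sketch below assumes a CUDA device and uses placeholder model, data, and metric; it is not taken from the original post.

```python
# Compare an accuracy-style metric with and without mixed precision.
import torch

def evaluate(model, loader, use_amp):
    model.eval()
    total, correct = 0, 0
    with torch.no_grad():
        for x, y in loader:
            with torch.autocast("cuda", enabled=use_amp):
                preds = model(x.cuda()).argmax(dim=-1)
            correct += (preds.cpu() == y).sum().item()
            total += y.numel()
    return correct / total

# acc_fp32 = evaluate(model, val_loader, use_amp=False)
# acc_amp  = evaluate(model, val_loader, use_amp=True)
# A large gap between the two points at precision-sensitive operations.
```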
llama-cpu
-
Why is ChatGPT 3.5 API 10x cheaper than GPT3?
You've probably heard, but LLaMA was just released, and its 13B parameter model outperforms GPT-3 on most metrics (because they trained it on a lot more data). Someone's already quantized it to 4 and 3 bits and it performs virtually the same. It also apparently performs well on CPUs (several words per second on a 7900X). Running something equivalent to GPT-3.5 on a phone is not that far out.
- Fork of Facebook’s LLaMa model to run on CPU
- Llama-CPU: Fork of Facebooks LLaMa model to run on CPU
-
[D] Tutorial: Run LLaMA on 8gb vram on windows (thanks to bitsandbytes 8bit quantization)
I tried to port the llama-cpu version to a GPU-accelerated MPS version for Macs. It runs, but the outputs are not as good as expected and it often gives "-1" tokens. Any help and contributions on fixing it are welcome!
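For the 8-bit quantization technique the tutorial above relies on, here is a minimal sketch using transformers with bitsandbytes; the model path is a placeholder for whichever LLaMA checkpoint you have locally, and the exact VRAM savings will depend on the model.

```python
# Load a causal LM in 8-bit via bitsandbytes to cut VRAM roughly in half vs fp16.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "path/to/llama-7b"  # placeholder for a local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # lets accelerate place layers across GPU/CPU as VRAM allows
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```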
-
Facebook LLAMA is being openly distributed via torrents | Hacker News
You can run it with only a CPU and 32 gigs of RAM: https://github.com/markasoftware/llama-cpu
- [D] Is it possible to run Meta's LLaMA 65B model on consumer-grade hardware?
-
Facebook LLAMA is being openly distributed via torrents
I was able to run 7B on a CPU, inferring several words per second: https://github.com/markasoftware/llama-cpu
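This is not the linked fork's script, but as a generic sketch of the same idea, CPU-only inference can also be done through transformers by keeping everything in fp32 on the CPU; the model path is a placeholder.

```python
# Generic CPU-only inference sketch (not the markasoftware/llama-cpu script).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/llama-7b"  # placeholder for a local LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
# With no device_map and no .cuda() call, the model stays on the CPU.

inputs = tokenizer("Once upon a time", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```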
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
llama - Inference code for Llama models
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
wrapyfi-examples_llama - Inference code for facebook LLaMA models with Wrapyfi support
bitsandbytes-win-prebuilt
huggingface_hub - The official Python client for the Huggingface Hub.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.