llama
sentencepiece
| | llama | sentencepiece |
|---|---|---|
| Mentions | 3 | 19 |
| Stars | 35 | 9,480 |
| Growth | - | 4.6% |
| Activity | 1.6 | 8.1 |
| Latest commit | about 1 year ago | 16 days ago |
| Language | - | C++ |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama
-
Alpaca: An Instruct-Tuned LLaMA 7B. Responses on par with text-davinci-003. Demo up
> All the magic of "7B LLaMA running on a potato" seems to involve lowering precision down to f16 and then further quantizing to int4.
LLaMA weights are f16 to start with, so no lowering is necessary to get there.
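For reference, the int4 step mentioned in the quote amounts, in its simplest form, to something like this per-tensor symmetric quantizer. This is only a sketch of the idea, not what any particular implementation does (real schemes typically use per-group scales):

```python
import torch

def quantize_int4(w: torch.Tensor):
    """Per-tensor symmetric quantization into the 4-bit range [-8, 7]."""
    scale = w.abs().max() / 7          # one fp scale for the whole tensor
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale                    # stored as int8; torch has no int4 dtype

def dequantize_int4(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float16) * scale
```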
You can stream weights from RAM to the GPU pretty efficiently. If you have >= 32 GB of RAM and >= 2 GB of VRAM, my code here should work for you: https://github.com/gmorenz/llama/tree/gpu_offload
There's probably a cleaner version of it somewhere else. Really you should only need >= 16 GB of RAM, but the (Meta-provided) code that loads the initial weights unnecessarily makes two copies of the weights in RAM simultaneously.
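A minimal sketch of that RAM-to-VRAM streaming idea (not the actual code in gmorenz/llama; it assumes the model is an `nn.ModuleList` of blocks whose forward takes and returns a single tensor):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def forward_offloaded(layers: nn.ModuleList, x: torch.Tensor) -> torch.Tensor:
    # Keep only one block's weights in VRAM at a time: copy a layer in,
    # run it, then move it back out before loading the next one.
    x = x.to("cuda")
    for layer in layers:
        layer.to("cuda")   # stream this layer's weights RAM -> VRAM
        x = layer(x)
        layer.to("cpu")    # free VRAM for the next layer
    return x.to("cpu")
```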
-
LLaMA-7B in Pure C++ with full Apple Silicon support
My code for this is very much not high quality, but I have a CPU + GPU + SSD combination: https://github.com/gmorenz/llama/tree/ssd
Usage instructions in the commit message: https://github.com/facebookresearch/llama/commit/5be06e56056...
At least with my hardware this runs at [size of model]/[speed of SSD reads] seconds per token, which (up to some possible further memory reduction so you can run larger batches at once on the same GPU) is as good as it gets when you need to read the whole model from disk for each token.
At 125 GB and a 2 GB/s read (largest model, what I get from my SSD) that's about 60 seconds per token (1 day per 1440 words), which isn't exactly practical. That's really the issue here: if you need to stream the model from an SSD because you don't have enough RAM, it is just a fundamentally slow process.
You could probably optimize quite a bit for batch throughput if you're ok with the latency though.
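The arithmetic from the comment above, plus the batching point, as a quick back-of-the-envelope script (the batch size is a hypothetical figure):

```python
model_size_gb = 125       # largest LLaMA checkpoint, per the comment above
read_speed_gb_s = 2       # sequential SSD read speed
batch_size = 8            # hypothetical: sequences decoded in parallel

seconds_per_step = model_size_gb / read_speed_gb_s  # ~62 s to stream the model once
tokens_per_day = 86_400 / seconds_per_step          # ~1,400 tokens/day per sequence
print(f"{seconds_per_step:.0f} s per step; "
      f"{batch_size * tokens_per_day:.0f} tokens/day at batch size {batch_size}")
```

One pass over the weights serves every sequence in the batch, which is why throughput (though not latency) scales roughly with batch size.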
-
Llama-CPU: Fork of Facebook's LLaMA model to run on CPU
I don't know about this fork specifically, but in general yes absolutely.
Even without enough RAM, you can stream model weights from disk and run at [size of model]/[disk read speed] seconds per token.
I'm doing that on a small GPU with this code, but it should be easy to get it working with the CPU as compute instead (and at least with my disk/CPU, I'm not sure it would even run slower; disk read would probably still be the bottleneck):
https://github.com/gmorenz/llama/tree/ssd
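One way to get that disk streaming on CPU is to memory-map the checkpoint and materialize one layer at a time. A sketch with a hypothetical flat file layout (the real checkpoints are pickled torch files, so they would need converting first):

```python
import numpy as np

def load_layer(path: str, offset: int, shape: tuple) -> np.ndarray:
    # Map the file without reading it, then copy out just this layer's
    # slice; the OS reads only the touched pages, so resident memory
    # stays around one layer's worth.
    mm = np.memmap(path, dtype=np.float16, mode="r", offset=offset, shape=shape)
    return np.array(mm)  # forces the actual disk read
```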
sentencepiece
- sentencepiece
-
LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
You need to train the model on ~1 trillion tokens (https://platform.openai.com/tokenizer https://github.com/google/sentencepiece) anyway for it to acquire reasoning capabilities, and it seems very unlikely that your data amounts to that much.
I'm highly skeptical that you have enough data to pretrain if you don't have enough data to fine-tune.
Fine-tuning + vector search + prompting with as much material as you can, on an LLM like PaLM 2 or GPT-4, is what I would do. Otherwise you can of course use Falcon 40B.
maybe I should charge for this ahah
-
[P] TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included.
a) Comparison with the SentencePiece tokenizer under comparable settings (it can also ignore word boundaries and create phrase tokens)
-
LLaMA tokenizer: is a JavaScript implementation available anywhere?
LLaMA uses the sentencepiece tokenizer: https://github.com/google/sentencepiece
-
[P] New tokenization method improves LLM performance & context-length by 25%+
Besides, are you familiar with SentencePiece? What you are doing looks very similar (generate a large vocab, prune the worst tokens until the target vocab size is reached); only the token selection criterion is different. It's also purely data-driven in the sense that there are no assumptions specific to a language (and it can optionally segment across whitespace, as you are doing).
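For comparison, that pruning loop is exactly what SentencePiece's unigram trainer exposes. The corpus name here is a placeholder, but the options are real knobs for the initial vocab size and the per-round pruning fraction:

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",                 # placeholder training corpus
    model_prefix="unigram",
    model_type="unigram",               # start from a large seed vocab, prune down
    vocab_size=32000,                   # stop pruning at this size
    seed_sentencepiece_size=1_000_000,  # size of the initial candidate vocab
    shrinking_factor=0.75,              # fraction of tokens kept per pruning round
)
```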
-
Code runs without definition of function (automatically calls a different function instead)
Hi, I'm studying the implementation of encode and decode functions for Google's SentencePiece tokenizer.
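For anyone landing here with the same question, the Python-level API is small; a minimal round-trip, assuming you already have a trained model file:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="m.model")  # placeholder model file
ids = sp.encode("Hello world", out_type=int)     # text -> token ids
pieces = sp.encode("Hello world", out_type=str)  # text -> subword pieces
print(ids, pieces, sp.decode(ids))               # decode(ids) round-trips the text
```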
-
How to handle multiple languages in a sentence?
I think many LMs nowadays use Unicode tokenizers that are not tied to specific languages. SentencePiece, for example, is the most popular one: https://github.com/google/sentencepiece
- Large language models are having their Stable Diffusion moment
-
LLaMA-7B in Pure C++ with full Apple Silicon support
If you are interested in implementing LLaMA yourself, or in learning, I noticed that the reference code by Facebook is some of the cleaner, easier-to-read ML code I've seen in a while. https://github.com/facebookresearch/llama/blob/main/llama/mo... It's about 200 lines long. You probably do need a bit of background knowledge to understand what you are reading, but I was pleasantly surprised.
In comparison, for example, the StableDiffusion torch code in the diffusers and transformers Python libraries has lots of conditionals, unused experiments, etc. that can make it hard to follow what is going on.
Last weekend I got the "main loop" of the transformer working in pure CPU Rust code, following the reference code. My crappy code is just very, very slow, as I focused on getting it to run, not on making it fast. The tokenizer uses a Google library, https://github.com/google/sentencepiece, but luckily for inference it seems you just need to be able to parse the tokenizer model file, not understand how it was created; I was able to strip the protobuf files out of that repository, add them to the Rust project, and read the tokens.
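To illustrate the point about only needing the model file: the (piece, score) table inside it is all that inference-time tokenization requires, and the official Python bindings expose it directly (the file name here is LLaMA's tokenizer file):

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
for i in range(5):
    # each vocab entry is just a piece string and a unigram log-probability score
    print(i, sp.id_to_piece(i), sp.get_score(i))
```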
I am optimistic that someone will make a high-quality CPU or CPU+GPU+SSD combination implementation that will make it somewhat practical to run even the large LLM models without needing an A100 or two.
- ChatGPT in an iOS Shortcut – World's Smartest HomeKit Voice Assistant
What are some alternatives?
llama.cpp - LLM inference in C/C++
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
CTranslate2 - Fast inference engine for Transformer models
llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2
llama - Inference code for Llama models
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
OpenNMT-Tutorial - Neural Machine Translation (NMT) tutorial. Data preprocessing, model training, evaluation, and deployment.