tokenmonster vs sentencepiece

| | tokenmonster | sentencepiece |
|---|---|---|
| Mentions | 9 | 19 |
| Stars | 495 | 9,480 |
| Growth | - | 1.7% |
| Activity | 8.8 | 8.1 |
| Last commit | 3 months ago | 18 days ago |
| Language | Go | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tokenmonster
-
Tokenizer benchmark comparing 16 language models pre-trained from scratch
The actual analysis: https://github.com/alasdairforsythe/tokenmonster/blob/main/b...
> Summary of Findings:
> - Comparable (50256-strict-nocapcode) TokenMonster vocabularies perform better than both GPT-2 Tokenizer and tiktoken p50k_base on all metrics.
> - Optimal vocabulary size is 32,000.
> - Simpler vocabularies converge faster but do not necessarily produce better results when converged.
> - Higher compression (more chr/tok) does not, by itself, negatively affect model quality.
> - Vocabularies with multiple words per token have a 5% negative impact on SMLQA (Ground Truth) benchmark, but a 13% better chr/tok compression.
> - Capcode takes longer to learn, but once the model has converged, does not appear to affect SMLQA (Ground Truth) or SQuAD (Data Extraction) benchmarks significantly in either direction.
> - Validation loss and F1 score are both meaningless metrics when comparing different tokenizers.
> - Flaws and complications in the tokenizer affect the model's ability to learn facts more than they affect its linguistic capability.
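The chr/tok compression metric quoted in these findings is simply characters divided by token count, and can be reproduced for any tokenizer in a few lines. The sketch below is a minimal illustration using only tiktoken's public encodings and a placeholder sample text, not the benchmark corpus from the analysis:

```python
# Minimal sketch of the chr/tok compression metric discussed above.
# Assumes `pip install tiktoken`; the sample text is a stand-in for the
# benchmark corpus used in the actual analysis.
import tiktoken

sample = (
    "The quick brown fox jumps over the lazy dog. "
    "Tokenizer compression is measured as characters per token. "
) * 100

for name in ("p50k_base", "cl100k_base"):
    enc = tiktoken.get_encoding(name)
    tokens = enc.encode(sample)
    print(f"{name}: {len(sample) / len(tokens):.2f} chr/tok")

# A TokenMonster vocabulary could be scored the same way; based on its README
# (assumed API): vocab = tokenmonster.load("english-32000-balanced-v1")
# then len(sample) / len(vocab.tokenize(sample)).
```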
-
How best to benchmark the accuracy of a model for comparing different tokenizers? [D]
I need to benchmark the performance of my tokenizer against standard tokenizers. It would be best for reproducibility if I benchmark against an existing model on a standard benchmark, swapping out the existing tokenizer for my tokenizer.
-
Benchmark a vocabulary by training a small model -- Any plug & play solutions?
Having just released my ungreedy subword tokenizer (TokenMonster), I keep being asked to provide benchmarks on how it performs when actually used to train a model, vs other tokenizers.
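A minimal sketch of one such plug & play setup, assuming PyTorch and Hugging Face transformers are installed: train a small GPT-2-style model on ids produced by the vocabulary under test. The model size, hyperparameters, and random placeholder ids are illustrative only, not the configuration used in any of the benchmarks above:

```python
# Minimal sketch: benchmark a vocabulary by training a small causal LM on it.
# Assumes `pip install torch transformers`; the random ids stand in for a
# corpus already tokenized with the vocabulary under test.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

VOCAB_SIZE = 32000  # size of the tokenizer vocabulary being benchmarked
config = GPT2Config(vocab_size=VOCAB_SIZE, n_layer=4, n_head=4,
                    n_embd=256, n_positions=128)
model = GPT2LMHeadModel(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Placeholder batch: replace with real ids from the tokenizer under test.
input_ids = torch.randint(0, VOCAB_SIZE, (2, 128))

for step in range(10):
    out = model(input_ids=input_ids, labels=input_ids)  # causal LM loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(step, out.loss.item())
```

As the findings quoted above note, validation loss alone is not comparable across tokenizers, so a model trained this way still needs to be scored on downstream tasks such as SMLQA or SQuAD rather than on loss.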
-
TokenMonster Ungreedy Subword Tokenizer V4: Enables Models to be 4x Smaller Whilst Achieving Higher Chr/Token (With Evidence) [P]
This is all I've been doing 16 hours per day, 7 days per week for the past couple of months. If you like it please ☆ star the GitHub so people will find it. If you have any questions feel free to ask on here or on the GitHub Discussions tab. Thank you.
- Tokenmonster: Determine tokens to optimally represent a dataset
- TokenMonster: Ungreedy tokenizer, outperforming tiktoken by 35%
-
TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included
TokenMonster is an ungreedy tokenizer and vocabulary builder, outperforming tiktoken by 35%. In fact, TokenMonster's smallest 24000 vocabulary consistently uses fewer tokens than tiktoken's largest 100256 vocabulary to tokenize the same text. Save the tokens! See benchmark.
-
[P] TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included.
From the GitHub:
-
[P] New tokenization method improves LLM performance & context-length by 25%+
Code on GitHub.
sentencepiece
- sentencepiece
-
LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
you need to train the model on 1 trillion tokens (https://platform.openai.com/tokenizer https://github.com/google/sentencepiece) anyway for it to gain reasoning capabilities, and it seems very unlikely that your data amounts to that much.
I'm highly skeptical that you have enough data to pretrain if you don't have enough data to fine-tune.
Fine-tuning + vector search + prompting with as much context as you can, on an LLM like PaLM 2 or GPT-4, is what I would do. Otherwise you can use Falcon 40B, of course.
maybe I should charge for this ahah
-
[P] TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included.
a) Comparison with the SentencePiece tokenizer with comparable settings (it can also ignore word boundaries and create phrase tokens)
-
LLaMA tokenizer: is a JavaScript implementation available anywhere?
LLaMA uses the sentencepiece tokenizer: https://github.com/google/sentencepiece
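The question asks about JavaScript, but for reference, loading the same model file with the official Python package looks roughly like the sketch below, assuming `tokenizer.model` is the SentencePiece file distributed alongside the LLaMA weights:

```python
# Minimal sketch: load LLaMA's SentencePiece model and round-trip some text.
# Assumes `pip install sentencepiece` and that `tokenizer.model` is the
# model file shipped with the LLaMA weights.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
ids = sp.encode("Hello, world!", out_type=int)     # token ids
pieces = sp.encode("Hello, world!", out_type=str)  # subword pieces
print(ids, pieces)
print(sp.decode(ids))                              # back to the original text
```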
-
[P] New tokenization method improves LLM performance & context-length by 25%+
Besides, are you familiar with SentencePiece? What you are doing looks very similar (generate a large vocab, prune the worst token until the target vocab size is reached), only the token selection criterion is different. It's also purely data-driven in the sense that there are no assumptions specific to language (and it can optionally segment across whitespace, as you are doing).
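For reference, a minimal sketch of that SentencePiece unigram training procedure (seed a large candidate vocabulary, then prune down to the target size) via the Python trainer. The corpus path and sizes are placeholders, and the keyword arguments mirror the documented CLI training options:

```python
# Minimal sketch of training a SentencePiece unigram model.
# Assumes `pip install sentencepiece`; "corpus.txt" is a placeholder path to
# a real training corpus.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="unigram_32k",
    vocab_size=32000,
    model_type="unigram",               # seed a large candidate vocab, prune it down
    seed_sentencepiece_size=1_000_000,  # size of the initial candidate vocabulary
    split_by_whitespace=False,          # allow tokens that span word boundaries
)
```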
-
Code runs without definition of function (automatically calls a different function instead)
Hi, I'm studying the implementation of encode and decode functions for Google's SentencePiece tokenizer.
-
How to handle multiple languages in a sentence?
I think many LMs nowadays use Unicode tokenizers that are not tied to specific languages. E.g., sentencepiece is the most popular one: https://github.com/google/sentencepiece
- Large language models are having their Stable Diffusion moment
-
LLaMA-7B in Pure C++ with full Apple Silicon support
If you are interested in implementing LLaMA yourself or learning, I noticed that the reference code by Facebook is some of the cleanest, easiest-to-read ML code I've seen in a while. https://github.com/facebookresearch/llama/blob/main/llama/mo... It's about 200 lines long. You probably do need a bit of knowledge to understand what you are reading, but I was pleasantly surprised.
For example, in comparison, the StableDiffusion torch code in the diffusers and transformers Python libraries has lots of conditionals, experiments, etc. that are not being used, which can make it hard to follow what is going on.
Last weekend I got the "main loop" of the transformer working in pure CPU Rust code, following the reference code. My crappy code is just very very slow as I focused on getting it to run, not making it fast. The tokenizer uses some Google thing https://github.com/google/sentencepiece but luckily for inference it seems that you just need to be able to parse the tokenizer model file and not understand how it was created; I was able to strip out the protobuf files from that repository, add them to my Rust project, and read the tokens.
I am optimistic that someone will make a high-quality CPU or some CPU+GPU+SSD combination thingamajig that will make it somewhat practical to run even the large LLM models without needing an A100 or two.
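For anyone following the same route, the "parse the model file and read the tokens" step described in the comment above can be sketched in Python with the compiled protobuf that ships in the sentencepiece pip package (the module path below is an assumption based on how the package is distributed):

```python
# Minimal sketch: read the raw pieces and scores out of a SentencePiece
# model file, as the comment above describes doing for inference.
# Assumes `pip install sentencepiece protobuf`; the pb2 module is the
# compiled sentencepiece_model.proto bundled with the pip package.
from sentencepiece import sentencepiece_model_pb2 as sp_model

m = sp_model.ModelProto()
with open("tokenizer.model", "rb") as f:
    m.ParseFromString(f.read())

# Each piece carries the subword string and its unigram log-probability score.
for i, piece in enumerate(m.pieces[:10]):
    print(i, repr(piece.piece), piece.score)
```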
- ChatGPT in an iOS Shortcut – Worlds Smartest HomeKit Voice Assistant
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
CTranslate2 - Fast inference engine for Transformer models
llama - Inference code for Llama models
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
OpenNMT-Tutorial - Neural Machine Translation (NMT) tutorial. Data preprocessing, model training, evaluation, and deployment.
dalle-mini - DALL·E Mini - Generate images from a text prompt
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
hunspell - The most popular spellchecking library.
langchain - 🦜🔗 Build context-aware reasoning applications
dalle-2-preview
jukebox - Code for the paper "Jukebox: A Generative Model for Music"