transformers
sentencepiece
| | transformers | sentencepiece |
|---|---|---|
| Mentions | 171 | 19 |
| Stars | 123,251 | 9,279 |
| Growth | 2.7% | 4.4% |
| Activity | 10.0 | 8.3 |
| Latest commit | about 4 hours ago | 26 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformers
-
Gemma doesn't suck anymore - 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
-
[D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
In transformers, they tried really hard to have a single function or method to deal with both self and cross attention mechanisms, masking, positional and relative encodings, interpolation etc. While it allows a user to use the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of llama by FAIR with the implementation by HF to get an idea.
-
Self train a super tiny model recommendations
You can train it with the code provided in the transformers repo: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
transformers uses accelerate if you call it with device_map='auto'
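A minimal sketch of what that looks like in practice (the checkpoint name is just a placeholder, and `accelerate` must be installed for `device_map="auto"` to do anything):

```python
# Sketch only: with device_map="auto", transformers hands weight placement to
# accelerate, which spreads layers across available GPUs and falls back to CPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder checkpoint; substitute the model you actually use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # requires `pip install accelerate`
)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```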
-
Show HN: Phind Model beats GPT-4 at coding, with GPT-3.5 speed and 16k context
Too much money is being thrown around on BS in the LLM space; hardly any of it is going to places where it matters.
For example, the researchers working hard on better text sampling techniques, or on better constraint techniques (i.e. like this https://arxiv.org/abs/2306.03081), or on actual negative prompting/CFG in LLMs (i.e. like this https://github.com/huggingface/transformers/issues/24536) are doing far FAR more to advance the state of AI than dozens of VC backed LLM "prompt engineering" companies operating today.
HN and the NLP community have some serious blind spots when it comes to exploiting their own technology. At least someone at Andreessen Horowitz got a clue and gave some funding to Oobabooga - still waiting for Automatic1111 to get any funding.
-
23 issues to grow yourself as an exceptional open-source Python expert
Repo : https://github.com/huggingface/transformers
-
Whisper prompt tuning
From what I know, Whisper already supports prompting (https://github.com/huggingface/transformers/pull/22496). Can I somehow freeze the whole model and tune exclusively the prompt or would I need to write an implementation from scratch?
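For reference, a rough sketch of how that prompting support is used from the transformers side (argument names may vary slightly between versions); note this only conditions decoding on a text prompt, it does not train or tune anything:

```python
# Rough sketch of Whisper prompting in transformers (per the PR above); the
# prompt biases decoding toward the given vocabulary, no weights are updated.
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
input_features = processor(audio, sampling_rate=16000,
                           return_tensors="pt").input_features

prompt_ids = processor.get_prompt_ids("domain-specific vocabulary here",
                                      return_tensors="pt")
predicted_ids = model.generate(input_features, prompt_ids=prompt_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```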
-
A look at Apple's new Transformer-powered predictive text model
https://github.com/huggingface/transformers/blob/0a55d9f7376...
To summarize how they work: you keep some number of previously generated tokens, and once you get logits that you want to sample a new token from, you find the logits for existing tokens and multiply them by a penalty, thus lowering the probability of the corresponding tokens.
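A rough sketch of that idea (not the exact code in the file linked above), assuming a 1-D logits vector and the ids generated so far:

```python
# Illustrative only: scale the logits of tokens that have already been generated
# so they become less likely to be sampled again.
import torch

def apply_repetition_penalty(logits: torch.Tensor,
                             generated_ids: torch.Tensor,
                             penalty: float = 1.2) -> torch.Tensor:
    """logits: (vocab_size,); generated_ids: ids of previously generated tokens."""
    out = logits.clone()
    scores = out[generated_ids]
    # Dividing positive logits and multiplying negative ones both lower the
    # resulting probability of the repeated tokens.
    out[generated_ids] = torch.where(scores > 0, scores / penalty, scores * penalty)
    return out

# usage: next_logits = apply_repetition_penalty(next_logits, torch.tensor(prev_ids))
```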
-
Can LLMs learn from a single example?
Very cool. This came up in a huggingface transformers issue a while ago and we also determined memorization to be the likely reason. It's nice to see someone else reach the same conclusion.
sentencepiece
-
LLM.int8(): 8-Bit Matrix Multiplication for Transformers at Scale
you need to train the model on 1 trillion tokens (https://platform.openai.com/tokenizer, https://github.com/google/sentencepiece) anyway for it to gain reasoning capabilities, and it seems very unlikely that you have that much data.
I'm highly skeptical that you have enough data to pretrain if you don't have enough data to fine tune.
Fine-tuning + vector search + prompting with as much stuff as you can, on an LLM like PaLM 2 or GPT-4, is what I would do. Otherwise you can use Falcon 40B ofc.
maybe I should charge for this ahah
-
[P] TokenMonster Ungreedy ~ 35% faster inference and 35% increased context-length for large language models (compared to tiktoken). Benchmarks included.
a) Comparison with SentencePiece tokenizer with comparable settings (It can also ignore word-boundaries and create phrase tokens)
-
[P] New tokenization method improves LLM performance & context-length by 25%+
Besides, are you familiar with SentencePiece? What you are doing looks very similar (generate a large vocab, prune the worst token until the target vocab size is reached); only the token selection criterion is different. It's also purely data-driven in the sense that there are no assumptions specific to language (and it can optionally segment across whitespace, as you are doing).
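For comparison, a minimal sketch of training a SentencePiece unigram model (file names are placeholders); the unigram trainer likewise seeds a large candidate vocabulary and prunes it down to vocab_size:

```python
# Minimal sketch; corpus.txt and the output prefix are placeholders.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",            # plain text, one sentence per line
    model_prefix="unigram_8k",     # writes unigram_8k.model and unigram_8k.vocab
    vocab_size=8000,
    model_type="unigram",          # seed a large vocab, then prune to vocab_size
    # split_by_whitespace=False,   # uncomment to allow pieces that cross word boundaries
)
```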
-
Code runs without definition of function (automatically calls a different function instead)
Hi, I'm studying the implementation of encode and decode functions for Google's SentencePiece tokenizer.
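For context, a small usage sketch of the Python bindings' encode/decode (the model file name is a placeholder). The Python module is a thin SWIG wrapper over the C++ library, which is why a call can end up resolving to a differently named underlying function:

```python
# Usage sketch only; "unigram_8k.model" is a placeholder for your trained model.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="unigram_8k.model")

ids = sp.encode("Hello world", out_type=int)     # token ids
pieces = sp.encode("Hello world", out_type=str)  # subword pieces, e.g. "▁Hello"
text = sp.decode(ids)                            # back to a string

print(ids, pieces, text)
```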
-
Large language models are having their Stable Diffusion moment
-
LLaMA-7B in Pure C++ with full Apple Silicon support
If you are interested in implementing LLaMA yourself or learning, I noticed that the reference code by Facebook is some of the cleaner, easier-to-read ML code I've seen in a while. https://github.com/facebookresearch/llama/blob/main/llama/mo... It's about 200 lines long. You probably do need a bit of knowledge to understand what you are reading, but I was pleasantly surprised.
For example, in comparison, the Stable Diffusion torch code in the diffusers and transformers Python libraries has lots of conditionals, experiments, etc. that are not being used, which can make it hard to follow what is going on.
Last weekend I got the "main loop" of the transformer working in pure CPU Rust code, following the reference code. My crappy code is just very very slow as I focused on getting it to run, not making it fast. The tokenizer uses some Google thing https://github.com/google/sentencepiece but luckily for inference it seems that you just need to be able to parse the tokenizer model file and not understand how it was created; I was able to strip out the protobuf files from that repository and add it to Rust and read the tokens.
I am optimistic that someone will make a high-quality CPU (or some CPU+GPU+SSD combination) thingamajig that will make it somewhat practical to run even the large LLM models without needing an A100 or two.
-
Dall-E 2
Haven't read the paper, but they are probably using something like sentencepiece with sub-word splitting and then charging by the number of resulting tokens.
-
[D] How do pretrained tokenizers work?
For papers, take a look at references here https://github.com/google/sentencepiece
-
Understanding translation input code
I downloaded it (along with fairseq and sentencepiece, which it says are necessary). The translator is saved in C:\Users\[User]\fairseq\covid-nmt
-
[R] Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs
I don't think this is valid in the context of this article. The input tokens are not one-hot encodings of the input characters, they are learned embeddings on a 32K SentencePiece vocabulary (4.1.1). As "STOP" and "SPOT" are probably fairly common words in their training dataset, I think it's safe to assume that each word would be assigned its own unique vector rather than be represented by the four "subword units" comprising their character decomposition.
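If you want to check that claim for a concrete SentencePiece model (the model file below is a placeholder), it only takes a couple of lines:

```python
# Placeholder model file; common words typically come back as a single piece,
# while rare strings get split into several subword units.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spiece.model")
for word in ["stop", "spot"]:
    print(word, "->", sp.encode(word, out_type=str))
```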
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
llama - Inference code for Llama models
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
huggingface_hub - The official Python client for the Huggingface Hub.
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
Swin-Transformer-Tensorflow - Unofficial implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" (https://arxiv.org/abs/2103.14030)
faiss - A library for efficient similarity search and clustering of dense vectors.
KoboldAI-Client
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
lm-evaluation-harness - A framework for few-shot evaluation of language models.