transformers
promptsource
| | transformers | promptsource |
|---|---|---|
| Mentions | 173 | 11 |
| Stars | 124,557 | 2,476 |
| Growth | 2.7% | 3.6% |
| Activity | 10.0 | 4.6 |
| Latest commit | about 17 hours ago | 6 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformers
-
AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on the Hugging Face course on Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
-
Schedule-Free Learning - A New Way to Train
* Superconvergence + the LR range finder + Fast AI's Ranger21 optimizer was the go-to recipe for CNNs and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 actually worked better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
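For context, the 1-cycle schedule referred to above is available in stock PyTorch. A minimal sketch, assuming a toy model, step count, and the 1e-5 peak rate mentioned in the comment (all placeholders, not recommendations):

```python
import torch
from torch import nn

# Toy model and optimizer; max_lr mirrors the value the comment found to work better.
model = nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

# One-cycle schedule: warm up to max_lr, then anneal back down over total_steps.
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1e-5, total_steps=1000)

for step in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).sum()  # dummy forward/backward pass
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the 1-cycle learning rate
```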
-
Gemma doesn't suck anymore - 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral MoE
-
Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
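"Tinkering with the architecture" can be as simple as instantiating a small, randomly initialized Mistral from its config using that FOSS implementation. A minimal sketch; the shrunken hyperparameters below are made-up placeholders, much smaller than the released 7B model:

```python
from transformers import MistralConfig, MistralForCausalLM

# Tiny, randomly initialized Mistral for experimentation only (not the real 7B config).
config = MistralConfig(
    hidden_size=256,
    intermediate_size=512,
    num_hidden_layers=4,
    num_attention_heads=8,
    num_key_value_heads=2,
)
model = MistralForCausalLM(config)

print(model.config)
print(sum(p.numel() for p in model.parameters()), "parameters")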
-
Fail to reproduce the same evaluation metrics score during inference.
I am aware that using mixed precision reduces the stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
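One way to gauge how much of the gap is really down to mixed precision is to compare the same model's outputs in full and reduced precision on the same batch. A minimal sketch, using a small public checkpoint as a stand-in for the poster's model (which isn't shown):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in checkpoint; the poster's actual model and data are assumptions here.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

batch = tokenizer(
    ["the movie was great", "the movie was terrible"],
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    logits_fp32 = model(**batch).logits
    logits_bf16 = model.to(torch.bfloat16)(**batch).logits.float()

# A small max difference suggests mixed precision alone doesn't explain a large
# metrics gap; a big one points at genuine numerical instability.
print((logits_fp32 - logits_bf16).abs().max())
```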
-
[D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
In transformers, they tried really hard to have a single function or method to deal with both self and cross attention mechanisms, masking, positional and relative encodings, interpolation etc. While it allows a user to use the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of llama by FAIR with the implementation by HF to get an idea.
-
Mixtral-7b-8expert working in Oobabooga (unquantized multi-gpu)
pip install git+https://github.com/huggingface/transformers.git@main
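After installing from main, loading the unquantized model across several GPUs looks roughly like the sketch below. The model id and dtype are assumptions (the post's "Mixtral-7b-8expert" repo may differ), and the full unquantized weights need a lot of VRAM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example model id; the exact repo used in the post may differ.
model_id = "mistralai/Mixtral-8x7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # unquantized weights in half precision
    device_map="auto",           # shard layers across all visible GPUs (needs accelerate)
)

inputs = tokenizer("The eight experts in Mixtral", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```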
promptsource
- How to Prompt Design? Share resources
-
Any tips for hiring prompt engineers?
BigScience PromptSource
- PromptSource: Toolkit for creating, sharing and using natural language prompts
-
Hugging Face Introduces "T0", An Encoder-Decoder Model That Consumes Textual Inputs And Produces Target Responses
Quick 5 Min Read | Paper | GitHub
- 16x smaller than GPT3 but better [video]
-
[R] BigScience's first paper, T0: Multitask Prompted Training Enables Zero-Shot Task Generalization
Code for https://arxiv.org/abs/2110.08207 found: https://github.com/bigscience-workshop/promptsource/
- "P3: Public Pool of Prompts" (BigScience's collaborative collection of >2k prompts for >170 datasets)
- BigScience's guide to using templating languages to develop prompts
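Pulling one of those P3 prompts out of promptsource takes only a few lines. A sketch based on the project's README example; the dataset and template names below are just one combination that ships with the collection:

```python
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

# Load one example from the AG News dataset.
dataset = load_dataset("ag_news", split="train")
example = dataset[1]

# Load the prompt templates promptsource ships for AG News and list them.
ag_news_prompts = DatasetTemplates("ag_news")
print(ag_news_prompts.all_template_names)

# Apply one template to the example to get an (input, target) pair.
prompt = ag_news_prompts["classify_question_first"]
result = prompt.apply(example)
print("INPUT:", result[0])
print("TARGET:", result[1])
```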
-
word2vec chatbot
I'd use a prompted dataset then, as well as explore the T0 model framework.
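Trying T0 out is straightforward with transformers. A minimal sketch using the smaller public checkpoint; the model id and prompt are examples, not part of the reply above:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# bigscience/T0_3B is the smaller public T0 checkpoint; T0pp is the larger one.
model_id = "bigscience/T0_3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Zero-shot prompted inference: the task is stated in plain language.
inputs = tokenizer(
    "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```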
-
First model released by BigScience outperforms GPT-3 while being 16x smaller
We fine-tuned the model on dozens of different NLP datasets and tasks in a prompted style. You can read all the prompts in the appendix or get them all here: https://github.com/bigscience-workshop/promptsource . Most NLP tasks are not particularly freeform, or they are naturally length-limited, like summarization (XSum is very short). As a consequence, the model mostly defaults to short responses. Your "trick" is not that unreasonable though! Many of the training prompts that want long responses ask for them explicitly.
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
eai-prompt-gallery - Library of interesting prompt generations
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
natural-instructions - Expanding natural instructions
llama - Inference code for Llama models
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
datasets - 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
rasa - 💬 Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants
huggingface_hub - The official Python client for the Huggingface Hub.
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch