transformers
OpenNMT-py
| | transformers | OpenNMT-py |
|---|---|---|
| Mentions | 174 | 6 |
| Stars | 124,557 | 6,558 |
| Growth | 2.7% | 1.1% |
| Activity | 10.0 | 8.9 |
| Latest commit | 5 days ago | 15 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformers
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
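For reference, prompt lookup decoding can be switched on directly through `generate()` in recent transformers releases; a minimal sketch, assuming the `prompt_lookup_num_tokens` argument is available in the installed version (the checkpoint name is just a placeholder):

```python
# Minimal sketch of prompt lookup decoding via transformers.generate().
# Assumes a recent transformers version exposing prompt_lookup_num_tokens;
# the checkpoint is only an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder, swap for any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The quick brown fox jumps over the lazy dog. The quick brown"
inputs = tokenizer(prompt, return_tensors="pt")

# prompt_lookup_num_tokens turns on n-gram candidate generation from the
# existing context; the output matches normal decoding, it is just faster
# when the context repeats itself.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    prompt_lookup_num_tokens=10,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```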
- AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check their work and give them a star on GitHub, if you please ❤️
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to setup for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was the best, whilst 1e-5 actually worked better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
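For context, the 1-cycle schedule mentioned above ships with PyTorch; a minimal sketch, with a placeholder model, peak LR and step count:

```python
# Minimal 1-cycle (superconvergence-style) schedule sketch in PyTorch.
# The model, peak LR and step count are placeholders, not recommendations.
import torch

model = torch.nn.Linear(128, 2)  # stand-in for a real network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

total_steps = 1_000  # placeholder
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=1e-3,          # the "peak" LR a range finder might suggest
    total_steps=total_steps,
)

for step in range(total_steps):
    # ... forward / backward would go here ...
    optimizer.step()
    scheduler.step()      # LR warms up to max_lr, then anneals back down
```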
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
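As an illustration of the kind of precision issue a RoPE fix like this targets, here is a hedged sketch that builds the rotary tables in float32 and only casts at the end; the function name and shapes are illustrative, not the transformers internals:

```python
# Illustrative sketch: compute rotary position embedding (RoPE) tables in
# float32 and cast only the final result to the model dtype. Names are
# invented for this example.
import torch

def rope_tables(seq_len: int, head_dim: int, base: float = 10000.0,
                dtype: torch.dtype = torch.bfloat16):
    # Do the frequency / angle math in float32: large position indices
    # multiplied by small frequencies lose precision in bfloat16.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    positions = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(positions, inv_freq)      # (seq_len, head_dim / 2)
    emb = torch.cat((angles, angles), dim=-1)      # (seq_len, head_dim)
    # Cast only the final cos/sin tables to the model dtype.
    return emb.cos().to(dtype), emb.sin().to(dtype)

cos, sin = rope_tables(seq_len=8192, head_dim=256)
```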
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
- Fail to reproduce the same evaluation metrics score during inference.
I am aware that using mixed precision reduces the numerical stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached the graph of evaluation metrics. If someone can give me some insight into this issue, that would be great.
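Not from the original post, but a minimal sketch of how one might pin down mixed-precision drift during evaluation: fix the seeds, request deterministic kernels, and run the eval pass in float32 (the model and dataloader below are stand-ins):

```python
# Sketch of making an evaluation pass reproducible and fully float32,
# so metric differences can be attributed to the weights rather than
# the eval-time math. Model and loader are placeholders.
import torch

torch.manual_seed(0)                                  # fixed seed for any sampling
torch.use_deterministic_algorithms(True, warn_only=True)
torch.backends.cudnn.benchmark = False                # avoid nondeterministic algo selection

model = torch.nn.Linear(16, 4)                        # stand-in for the real model
eval_loader = [torch.randn(8, 16) for _ in range(4)]  # stand-in for the real dataloader

model.eval()
with torch.no_grad():
    # Disable autocast during evaluation even if training used fp16/bf16.
    with torch.autocast(device_type="cpu", enabled=False):
        for batch in eval_loader:
            logits = model(batch.float())
            # ... accumulate metrics here ...
```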
- [D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
In transformers, they tried really hard to have a single function or method to deal with both self and cross attention mechanisms, masking, positional and relative encodings, interpolation etc. While it allows a user to use the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of llama by FAIR with the implementation by HF to get an idea.
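A toy illustration of the parameter bloat being described: one attention function that tries to cover self- and cross-attention plus every masking and positional-encoding variant through optional arguments. The signature is invented for illustration, not copied from transformers:

```python
# Toy "do everything" attention function: each supported feature adds another
# optional argument and another branch. Only a couple of the knobs are handled
# in this tiny body; the bloated signature is the point.
import math
import torch

def attention(q, k=None, v=None, attn_mask=None, key_padding_mask=None,
              rel_pos_bias=None, rope=None, alibi=None, dropout_p=0.0,
              is_causal=False, scale=None):
    # Self-attention when no separate keys/values are passed; cross-attention otherwise.
    k = q if k is None else k
    v = k if v is None else v
    scale = scale or 1.0 / math.sqrt(q.size(-1))
    scores = q @ k.transpose(-2, -1) * scale
    if rel_pos_bias is not None:                      # one branch per feature...
        scores = scores + rel_pos_bias
    if attn_mask is not None:                         # expects a boolean mask here
        scores = scores.masked_fill(attn_mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

out = attention(torch.randn(2, 5, 64))                # self-attention with defaults
```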
OpenNMT-py
- Making a custom Google Translate equivalent / web translation filter for my conlang?
I already tried this with OpenNMT.
- Cutting edge language translation models
fairseq and OpenNMT are very good starting points if you want to train your NMT model from scratch.
- How Telegram Messenger circumvents Google Translate's API
- WEBNLG challenge 2017 on Google Colab error
It looks like this uses the version of OpenNMT implemented in torch, which has been deprecated. You will be much better off using the pytorch implementation of OpenNMT or the transformers library. In fact, I would recommend taking a look at the GEM benchmark, since it also uses the WebNLG dataset. Here is a tutorial to get started, you can change the dataset here to WebNLG instead of CommonGen.
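As a rough illustration of the transformers route for a WebNLG-style data-to-text task, here is a minimal sketch; the t5-small checkpoint, the prompt prefix and the single hand-written triple are placeholders, not the GEM or tutorial setup:

```python
# Minimal data-to-text sketch with transformers: linearised triples in,
# verbalisation out. Checkpoint, prefix and example are placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# One WebNLG-style example (hand-written here for illustration).
source = "translate Graph to English: Alan_Bean | occupation | Test_pilot"
target = "Alan Bean worked as a test pilot."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss   # one training step's loss
loss.backward()

print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0],
                       skip_special_tokens=True))
```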
- Help with Neural Machine Translation
Umm... open-nmt. This is a library that has been maintained since 2016 for NMT.
- Oop concepts for pytorch
However, you do not need to use much OOP when training models with PyTorch. Most of the time it is just inheriting a class and overriding functions. You might need more advanced stuff if you were writing a framework on top of it, something like ONMT.
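A minimal sketch of the "inherit a class, override a couple of methods" pattern the comment describes (the tiny architecture is a throwaway example):

```python
# Typical PyTorch training OOP: subclass nn.Module and override forward().
import torch
from torch import nn

class TinyClassifier(nn.Module):          # inherit from nn.Module...
    def __init__(self, in_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x):                 # ...and override forward()
        return self.net(x)

model = TinyClassifier(in_dim=16, n_classes=3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 16), torch.randint(0, 3, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```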
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
tensor2tensor - Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
llama - Inference code for Llama models
Transformer-Models-from-Scratch - implementing various transformer models for various tasks
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
Opus-MT - Open neural machine translation models and web services
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
OpenNMT - Open Source Neural Machine Translation in Torch (deprecated)
huggingface_hub - The official Python client for the Huggingface Hub.
LibreTranslate - Free and Open Source Machine Translation API. Self-hosted, offline capable and easy to set up.