transformers vs huggingface_hub
| | transformers | huggingface_hub |
|---|---|---|
| Mentions | 175 | 104 |
| Stars | 124,557 | 1,675 |
| Growth | 2.7% | 7.4% |
| Activity | 10.0 | 9.6 |
| Latest Commit | 6 days ago | about 8 hours ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformers
- Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
and dm-haiku
https://github.com/google-deepmind/dm-haiku
are some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
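For a feel of the Flax style mentioned above, here is a minimal sketch of defining and running a model with the linen API (the MLP and its layer sizes are made up purely for illustration):

```python
# A minimal sketch of the Flax linen style; the MLP and its sizes are
# arbitrary illustrative choices.
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int = 128
    out: int = 10

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(self.out)(x)

model = MLP()
x = jnp.ones((1, 32))
params = model.init(jax.random.PRNGKey(0), x)  # build the parameter pytree
logits = model.apply(params, x)                # pure-function forward pass
```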
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
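For reference, recent transformers releases (v4.37+) expose prompt lookup decoding through a single `generate()` argument; a minimal sketch, with the model and token counts chosen arbitrarily:

```python
# A minimal sketch of prompt lookup decoding in transformers (v4.37+).
# gpt2 and the token counts are arbitrary choices for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The quick brown fox jumps over the lazy dog. The quick brown"
inputs = tokenizer(text, return_tensors="pt")

# prompt_lookup_num_tokens turns on ngram lookup against the existing context
out = model.generate(**inputs, prompt_lookup_num_tokens=3, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```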
- AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on the Hugging Face course on Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
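In the spirit of those tutorials, a minimal fine-tuning sketch with the Trainer API (the model, dataset, and hyperparameters are all arbitrary placeholders):

```python
# A minimal sketch of fine-tuning a small causal LM with the Trainer API.
# Model, dataset, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=128),
            batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```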
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to optimizer for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was the best, whilst 1e-5 actually worked better. However, the one-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
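For anyone unfamiliar, the one-cycle schedule referenced above is built into PyTorch; a minimal sketch, with max_lr and step counts as illustrative values only:

```python
# A minimal sketch of PyTorch's built-in one-cycle LR schedule.
# max_lr and total_steps are illustrative values, not recommendations.
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, total_steps=1000  # ramp up to max_lr, then anneal
)

for step in range(1000):
    optimizer.step()   # forward/backward omitted in this sketch
    scheduler.step()   # advance the schedule once per optimizer step
```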
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
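For context on the RoPE fix: rotary embeddings boil down to position-dependent sin/cos angles, and computing those angles in low precision (e.g. bfloat16) loses accuracy as positions grow. A rough sketch of the idea, not the actual transformers code:

```python
# An illustrative sketch of rotary position embedding (RoPE) angles, not
# the actual transformers implementation. The angle math is kept in
# float32, since doing it in bfloat16 loses precision at large positions.
import torch

def rope_angles(seq_len: int, dim: int, base: float = 10000.0):
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    positions = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(positions, inv_freq)  # (seq_len, dim // 2)
    return angles.cos(), angles.sin()          # applied pairwise to q and k

cos, sin = rope_angles(seq_len=8, dim=64)
```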
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral MoE
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture, Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
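For anyone who wants to try the tinkering route, a minimal sketch of loading it through transformers (the 7B weights are large, and `device_map="auto"` assumes the accelerate package is installed):

```python
# A minimal sketch of loading the open Mistral 7B weights via transformers.
# device_map="auto" assumes the accelerate package is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```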
- Failing to reproduce the same evaluation metric scores during inference
I am aware that using mixed precision reduces numerical stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
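One thing worth ruling out first is ordinary run-to-run nondeterminism. A sketch of the usual knobs, assuming a PyTorch-based eval stack (these trade speed for reproducibility):

```python
# A sketch of the usual reproducibility knobs in PyTorch; these trade
# speed for determinism and assume a PyTorch-based eval stack.
import random
import numpy as np
import torch

def seed_everything(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

seed_everything()
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True)  # raises if an op has no deterministic kernel
```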
huggingface_hub
- OpenAI's employees were given two explanations for why Sam Altman was fired
Something to think about:
https://github.com/huggingface/huggingface_hub
- Thoughts on a "Text Generation CivitAI"
- Civitai alternatives.
Yes! We have a well-documented Python library (https://github.com/huggingface/huggingface_hub) and public endpoints (https://huggingface.co/docs/hub/api#endpoints-table) you can use to retrieve information about the models and potentially build UIs with specific use cases in mind.
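As a small illustration of that library, a sketch of querying the Hub for models (the filter and sort values are just examples):

```python
# A small sketch of querying the Hub with huggingface_hub; the filter
# and sort values are illustrative examples.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(filter="text-generation", sort="downloads",
                         direction=-1, limit=5)
for m in models:
    print(m.id, m.downloads)
```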
- Fox Fairy @ Diffusion Forest: Unreal Engine + Stable Diffusion
I think if you search for pixel art here, there are some models worth checking out: https://huggingface.co/
- Ask HN: AI is really exciting but where do I start?
- I trained an AI to generate Éric Duhaime as a clown!
- [Guide] DreamBooth Training with ShivamShrirao's Repo on Windows Locally
I received another error saying OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like ./vae is not the path to a directory containing a file named diffusion_pytorch_model.bin
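One way around that class of error is to fetch the whole repo up front and point the script at the local copy; a sketch using huggingface_hub (the repo ids are examples, not quoted from the guide):

```python
# A sketch of pre-downloading the repos so training never has to reach
# huggingface.co mid-run. Repo ids are examples, not quoted from the guide.
from huggingface_hub import snapshot_download

model_dir = snapshot_download("runwayml/stable-diffusion-v1-5")
vae_dir = snapshot_download("stabilityai/sd-vae-ft-mse")  # a standalone VAE repo

# Pass these local directories to the training script instead of "./vae"
print(model_dir, vae_dir)
```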
- Training a Deep Learning Language Model for Latin text Generation
I plan to release it on https://huggingface.co/, where all this cool AI stuff is available for free for everyone who wishes to try it.
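When it's ready, publishing is a couple of huggingface_hub calls; a sketch with a made-up repo name:

```python
# A minimal sketch of publishing a trained model with huggingface_hub.
# The repo name is hypothetical; run `huggingface-cli login` first.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("your-username/latin-lm", exist_ok=True)  # hypothetical repo id
api.upload_folder(folder_path="./checkpoint",             # local model directory
                  repo_id="your-username/latin-lm")
```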
- Image Upscaling Models Compared (General, Photo and Faces)
For this I mainly used the chaiNNer application with models from here, but I also used the Google Colab AUTOMATIC1111 Stable Diffusion WebUI (for example for Lanczos), as well as Spaces from Hugging Face like this one, and the super-resolution collection on replicate.com.
- 2D Illustration Styles are scarce on Stable Diffusion so I created a DreamBooth model inspired by Hollie Mengert's work
You will now need to create a Hugging Face account (https://huggingface.co/) if you haven't already. When you have, go here and accept the terms: https://huggingface.co/runwayml/stable-diffusion-v1-5. Once you have done both, click on your profile icon and go to Settings. Click "Access Tokens", then "Create token"; name it whatever you want and select "write". When you are finished with all this, you can run the next cell, which is the Hugging Face cell. It will ask for a token; copy and paste the one you just created.
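For what it's worth, the same token also works programmatically, outside the notebook prompt; a sketch with a placeholder token string:

```python
# A sketch of authenticating programmatically; the token string is a
# placeholder for the "write" token created in settings.
from huggingface_hub import login

login(token="hf_xxx")  # placeholder; paste your own token here
```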
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
civitai - A repository of models, textual inversions, and more
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
spaCy - đź’« Industrial-strength Natural Language Processing (NLP) in Python
llama - Inference code for Llama models
mammography_metarepository - Meta-repository of screening mammography classifiers
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
KoboldAI-Client
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
seldon-core - An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models