|  | notebooks | transformers |
|---|---|---|
| Mentions | 17 | 176 |
| Stars | 3,293 | 125,021 |
| Growth | 2.7% | 1.4% |
| Activity | 8.4 | 10.0 |
| Last Commit | 4 days ago | 7 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
notebooks
- Training multiple models like ResNet50 or ViT on the same dataset [P]
- SageMaker model deployment and integration
📓 Open the notebook for an example of how to run a batch transform job for inference.
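For orientation, a batch transform job with the SageMaker Python SDK looks roughly like the sketch below; the S3 URIs, IAM role, and framework versions are placeholder assumptions, not values from the notebook.

```python
# Rough sketch of a batch transform job via the SageMaker Python SDK.
# S3 paths, the IAM role, and library versions below are placeholders.
from sagemaker.huggingface import HuggingFaceModel

model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",  # packaged model artifact (placeholder)
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder role
    transformers_version="4.26",  # illustrative framework versions
    pytorch_version="1.13",
    py_version="py39",
)

# Create a transformer and run offline inference over JSON-lines input on S3.
batch_job = model.transformer(
    instance_count=1,
    instance_type="ml.g4dn.xlarge",
    strategy="SingleRecord",
)
batch_job.transform(
    data="s3://my-bucket/batch-input/",  # one JSON record per line
    content_type="application/json",
    split_type="Line",
)
```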
- Your own Stable Diffusion endpoint with AWS SageMaker
To override it, the package README has some general information, and there is also an example in this Jupyter notebook. We do what is necessary via the files inside sagemaker/code, which contain the inference code following SageMaker requirements, plus a requirements.txt listing the dependencies that will be installed when the endpoint is created.
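As a rough illustration of what sagemaker/code holds, the SageMaker Hugging Face inference toolkit picks up an inference.py that overrides hooks such as model_fn and predict_fn. The sketch below assumes a diffusers pipeline and a base64 PNG response; it is not the notebook's exact code.

```python
# Sketch of sagemaker/code/inference.py for a Stable Diffusion endpoint.
# The pipeline setup and response format are illustrative assumptions.
import base64
from io import BytesIO

import torch
from diffusers import StableDiffusionPipeline

def model_fn(model_dir):
    # Load the pipeline from the unpacked model artifact on the endpoint.
    pipe = StableDiffusionPipeline.from_pretrained(model_dir, torch_dtype=torch.float16)
    return pipe.to("cuda")

def predict_fn(data, pipe):
    # Generate one image for the prompt and return it base64-encoded.
    prompt = data.pop("inputs", "")
    image = pipe(prompt).images[0]
    buf = BytesIO()
    image.save(buf, format="PNG")
    return {"image": base64.b64encode(buf.getvalue()).decode()}
```

The requirements.txt sitting next to it (e.g., pinning diffusers) is installed when the endpoint is created.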
- Is there a huggingface model that does free response QA?
You still haven't explained your use case for the model. You can look up "Open Domain QA" models. There are a lot of them, but they're often restricted in how well they generalize, and they benefit from fine-tuning. E.g., https://github.com/huggingface/notebooks/blob/main/longform-qa/Long_Form_Question_Answering_with_ELI5_and_Wikipedia.ipynb
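For a concrete starting point, extractive QA is a few lines with the transformers pipeline API; the checkpoint below is an illustrative choice, and long-form QA like the linked ELI5 notebook additionally pairs a retriever with a seq2seq generator.

```python
# Minimal extractive QA with transformers; the checkpoint is illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question="What does the transformers library provide?",
    context="The transformers library provides pretrained models for NLP tasks.",
)
print(result["answer"], result["score"])  # answer span plus confidence
```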
- List of Stable Diffusion systems - Part 3
(Updated Aug. 27, 2022) Colab notebook: Stable Diffusion with diffusers, by Hugging Face. GitHub repo and video tutorial available. Official Colab notebook. txt2img; uses the Hugging Face diffusers repo.
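The heart of that notebook's txt2img flow with diffusers is only a few lines; the checkpoint name and prompt below are illustrative, not quoted from the notebook.

```python
# Illustrative txt2img with diffusers; checkpoint and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # needs a CUDA GPU for fp16

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```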
- Anyone having issues with the textual inversion Colab?
- Training textual inversion of Stable Diffusion on your own dataset
Looks like they updated the notebook 15 minutes ago. Hopefully it works now.
- Ask HN: What kind of data do I need to build a language model?
Basically, you can then do similar things using HuggingFace, as indeed many have; you can explore the models in their Hub [2].
[1] https://www.youtube.com/playlist?list=PLtmWHNX-gukKocXQOkQju...
[2] https://github.com/huggingface/notebooks/blob/main/examples/...
- [D] NLP has HuggingFace, what does Computer Vision have?
Image classification: ViT, DeiT, BEiT, Swin Transformer, PoolFormer, ResNet, RegNet, ConvNeXT, Perceiver, ImageGPT, VAN. Check out the official example scripts and example notebooks.
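As a small taste, running one of those checkpoints for image classification takes a few lines via the pipeline API; the model and image path below are illustrative.

```python
# Illustrative image classification with a ViT checkpoint from the Hub.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
preds = classifier("cat.jpg")  # local path or URL to an image
for p in preds:
    print(p["label"], round(p["score"], 3))
```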
- Need help in extracting a binary label from a text corpus
transformers
- AI enthusiasm #9 - A multilingual chatbot 📣🈸
transformers is a package by Hugging Face that helps you interact with models on the HF Hub (GitHub).
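A minimal sketch of that interaction, fitting the multilingual theme; the translation task and checkpoint are illustrative assumptions.

```python
# Pull a model from the HF Hub and run it locally; checkpoint is illustrative.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("Hello, how are you today?")[0]["translation_text"])
```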
- Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options.
The Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) or dm-haiku (https://github.com/google-deepmind/dm-haiku) were some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding, which uses the existing context to build an n-gram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to swap it out for a pretrained n-gram model.
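Turning it on is a single argument to generate(), assuming a transformers release recent enough to ship prompt lookup decoding; the checkpoint and prompt are illustrative.

```python
# Hedged sketch of prompt lookup decoding through generate(); needs a
# transformers version that supports prompt_lookup_num_tokens.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The quick brown fox jumps over the lazy dog. The quick brown"
inputs = tok(text, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    prompt_lookup_num_tokens=3,  # reuse n-grams already present in the context
)
print(tok.decode(out[0], skip_special_tokens=True))
```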
- AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on the Hugging Face course on Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
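In the spirit of that tutorial, a compact causal-LM fine-tuning sketch with the Trainer API; the dataset, checkpoint, and hyperparameters are illustrative assumptions, not the post's exact recipe.

```python
# Compact causal-LM fine-tune with Trainer; all choices here are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Small slice of a public corpus to keep the sketch cheap to run.
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=128),
            batched=True, remove_columns=ds.column_names)

collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal LM labels
args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=4)
Trainer(model=model, args=args, train_dataset=ds, data_collator=collator).train()
```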
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + fast.ai's Ranger21 optimizer was the go-to optimizer for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was the best, whilst 1e-5 was better. However, the one-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
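For reference, the one-cycle policy in plain PyTorch (torch's OneCycleLR rather than fast.ai's implementation) looks like the sketch below; the model, max_lr, and step count are placeholders, not values from the discussion.

```python
# One-cycle LR schedule sketch with torch.optim; values are placeholders.
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-3, total_steps=1000)

for step in range(1000):
    loss = model(torch.randn(8, 10)).sum()  # dummy forward pass
    loss.backward()
    opt.step()
    opt.zero_grad()
    sched.step()  # LR warms up to max_lr, then anneals back down
```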
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
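Tinkering starts with something like the sketch below; it assumes a recent transformers release with Mistral support, the accelerate package for device_map="auto", and enough GPU memory for the 7B weights.

```python
# Load Mistral's open weights through transformers; requires accelerate
# for device_map="auto" and a GPU with enough memory for 7B parameters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok("The capital of France is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
```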
What are some alternatives?
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Transformers-Tutorials - This repository contains demos I made with the Transformers library by HuggingFace.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
stable-diffusion - k_diffusion wrapper included for k_lms sampling; fixed for notebook use.
llama - Inference code for Llama models
easydiffusion - Easy Diffusion is an advanced Stable Diffusion Notebook with a feature rich image processing suite.
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
stable-diffusion-colab - Adapted for Google Colab
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
HidamariDiffusionColab - Colab for Stable Diffusion
huggingface_hub - The official Python client for the Hugging Face Hub.