Transformers-Tutorials
notebooks
| | Transformers-Tutorials | notebooks |
|---|---|---|
| Mentions | 7 | 17 |
| Stars | 7,510 | 3,277 |
| Growth | - | 4.3% |
| Activity | 8.4 | 8.4 |
| Latest commit | 16 days ago | 17 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Transformers-Tutorials
- AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course on Transformers and on Niels Rogge's Transformers-Tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
- FLaNK Stack Weekly for 07August2023
- How to annotate compound words to build NER models?
- [discussion] Anybody Working with ViTMAE?
I'm pretraining on 850K grayscale spectrograms of birdsongs. I'm on epoch 400 out of 800 and the loss has declined from about 1.2 to 0.7. I don't really have a sense of what is "good enough" and I guess the only way I can judge is by looking at the reconstruction. I'm doing that using this notebook as a guide and right now it's doing quite badly.
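Rather than eyeballing reconstructions, one rough way to track progress is to compute the mean squared error over the masked patches only, since that is the quantity the MAE objective optimizes (up to per-patch normalization in the real model). A minimal sketch in plain Python, with hypothetical toy patches standing in for real model output:

```python
# Sketch: average per-patch MSE over masked patches only.
# `original` and `reconstructed` are lists of flattened patches
# (hypothetical data here); mask[i] == 1 means patch i was hidden
# from the encoder and had to be reconstructed.

def masked_patch_mse(original, reconstructed, mask):
    total, count = 0.0, 0
    for orig, recon, m in zip(original, reconstructed, mask):
        if m:  # only masked patches contribute to the MAE loss
            total += sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)
            count += 1
    return total / count

# Toy example: two 2-value patches; the second is masked and
# reconstructed imperfectly.
original      = [[0.0, 1.0], [1.0, 1.0]]
reconstructed = [[0.0, 1.0], [0.5, 1.0]]
mask          = [0, 1]
print(masked_patch_mse(original, reconstructed, mask))  # 0.125
```

Tracking this number on a fixed held-out batch across epochs gives a less subjective signal than inspecting individual reconstructions.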
- [D] NLP has HuggingFace, what does Computer Vision have?
More tutorials can be found at https://github.com/NielsRogge/Transformers-Tutorials.
- [Discussion] Information Extraction with LayoutLMv2
I've been looking for an off-the-shelf encoder-decoder document understanding model for key information extraction. I found a great Hugging Face implementation with concise notebook examples. However, the token classification model outputs a list of token labels and corresponding bounding boxes for each token, but not the text contained within the labeled bounding boxes themselves. Am I missing something? LayoutLMv2 describes itself as capable of information extraction, but without extracting the text I feel it falls short of that ambition.
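One point that often causes this confusion: LayoutLM-style pipelines take the OCR'd words (plus boxes) as *input*, so the text is already on the caller's side; the model only assigns a label per word. Recovering entity text then just means zipping the predicted labels back onto the words you fed in. A sketch, assuming hypothetical BIO-style labels:

```python
# Sketch: pair predicted token labels back with the OCR words that
# were fed to the model, and collect consecutive B-X / I-X labels
# into (entity_type, text) pairs. Words and labels are hypothetical.

def group_entities(words, labels):
    entities, current_type, current_words = [], None, []
    for word, label in zip(words, labels):
        if label.startswith("B-"):
            if current_words:
                entities.append((current_type, " ".join(current_words)))
            current_type, current_words = label[2:], [word]
        elif label.startswith("I-") and current_type == label[2:]:
            current_words.append(word)
        else:  # "O" or a mismatched I- tag ends the current entity
            if current_words:
                entities.append((current_type, " ".join(current_words)))
            current_type, current_words = None, []
    if current_words:
        entities.append((current_type, " ".join(current_words)))
    return entities

words  = ["Invoice", "No:", "12345", "Date:", "2022-01-01"]
labels = ["O", "O", "B-INVOICE_NO", "O", "B-DATE"]
print(group_entities(words, labels))
# [('INVOICE_NO', '12345'), ('DATE', '2022-01-01')]
```

The same pairing works whether the words come from Tesseract, a commercial OCR engine, or a PDF text layer.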
- [Project] DeepMind's Perceiver IO available through Hugging Face
Example Notebooks
notebooks
- Training multiple models like ResNet50 or ViT on the same dataset [P]
- SageMaker Model Deployment and Integration
📓 Open the notebook for an example of how to run a batch transform job for inference.
- Your own Stable Diffusion endpoint with AWS SageMaker
In order to overwrite it, the package README has some general information, and there is also an example in this Jupyter notebook. We do what is necessary via the files inside sagemaker/code: the inference code follows SageMaker requirements, and a requirements.txt lists the dependencies that will be installed when the endpoint is created.
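The "SageMaker requirements" referred to here are the handler functions the SageMaker inference toolkit looks for in the code directory. A minimal sketch of that shape, with placeholder bodies rather than the actual Stable Diffusion logic:

```python
# Sketch of the handler conventions the SageMaker inference toolkit
# discovers in the code directory (model_fn / input_fn / predict_fn /
# output_fn). Loading and prediction bodies are placeholders, not the
# real Stable Diffusion code.
import json

def model_fn(model_dir):
    # Load model artifacts from model_dir; placeholder object here.
    return {"model_dir": model_dir}

def input_fn(request_body, content_type="application/json"):
    # Deserialize the incoming request payload.
    if content_type != "application/json":
        raise ValueError(f"Unsupported content type: {content_type}")
    return json.loads(request_body)

def predict_fn(data, model):
    # Run inference; this placeholder just echoes the prompt back.
    return {"prompt": data.get("prompt"), "result": None}

def output_fn(prediction, accept="application/json"):
    # Serialize the prediction for the response.
    return json.dumps(prediction)
```

Anything listed in requirements.txt next to this file is pip-installed into the container when the endpoint is created, which is how extra dependencies get onto the instance.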
- Is there a huggingface model that does free response QA?
You still haven't explained your use-case for the model. You can look up "Open Domain QA" models. There are a lot of them, but they're often restricted in how well they generalize, and they benefit from fine-tuning. E.g., https://github.com/huggingface/notebooks/blob/main/longform-qa/Long_Form_Question_Answering_with_ELI5_and_Wikipedia.ipynb
- List of Stable Diffusion systems - Part 3
(Updated Aug. 27, 2022) Colab notebook Stable Diffusion with diffusers by Hugging Face. GitHub repo. Video tutorial. Official Colab notebook. txt2img. Uses the Hugging Face diffusers repo.
- Anyone having issues with the textual inversion colab?
- Training textual inversion of Stable Diffusion on your own dataset
Looks like they updated the notebook 15 minutes ago. Hopefully it works now.
- Ask HN: What kind of data do I need to build a language model?
Basically, you can then do similar things using HuggingFace, as indeed many have (you can explore the models in their hub [2]).
[1] https://www.youtube.com/playlist?list=PLtmWHNX-gukKocXQOkQju...
[2] https://github.com/huggingface/notebooks/blob/main/examples/...
- [D] NLP has HuggingFace, what does Computer Vision have?
Image classification: ViT, DeiT, BEiT, Swin Transformer, PoolFormer, ResNet, RegNet, ConvNeXT, Perceiver, ImageGPT, VAN. Check out the official example scripts and example notebooks.
- Need help in extracting a binary label from a text corpus
What are some alternatives?
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
gorilla-cli - LLMs for your CLI
stable-diffusion - k_diffusion wrapper included for k_lms sampling. fixed for notebook.
easydiffusion - Easy Diffusion is an advanced Stable Diffusion Notebook with a feature rich image processing suite.
adaptnlp - An easy to use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving up state-of-the-art NLP models.
stable-diffusion-colab - Adapted for Google Colab
OpenBuddy - Open Multilingual Chatbot for Everyone
HidamariDiffusionColab - Colab for Stable Diffusion
ToolBench - [ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch