examples
Transformers-Tutorials
| | examples | Transformers-Tutorials |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 376 | 7,510 |
| Stars growth | 10.9% | - |
| Activity | 6.8 | 8.4 |
| Latest commit | 3 months ago | 15 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
examples
- FLaNK Stack Weekly for 07August2023
-
Vector database built for scalable similarity search
As another commenter noted, Milvus is overkill and a "bit much" if you're learning/playing.
A good intro to the field, with a progression towards a full Milvus implementation, could be to start with towhee[0] (which is also supported by Milvus).
towhee has an example to do exactly what you want with CLIP[1].
[0] - https://towhee.io/
[1] - https://github.com/towhee-io/examples/tree/main/image/text_i...
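The CLIP approach linked above boils down to embedding both text and images into a shared vector space and ranking images by cosine similarity to the text query. Here is a minimal numpy sketch of that ranking step; the toy vectors are made-up stand-ins for real CLIP embeddings, and towhee's own API is not shown:

```python
import numpy as np

def cosine_rank(query_vec, item_vecs):
    """Rank items by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    scores = items @ q
    order = np.argsort(-scores)          # best match first
    return order, scores[order]

# Toy stand-ins for CLIP outputs: one text embedding and three image
# embeddings living in the same (here 4-dimensional) space.
text_emb = np.array([1.0, 0.0, 0.5, 0.0])
image_embs = np.array([
    [0.9, 0.1, 0.4, 0.0],   # image 0: close to the text query
    [0.0, 1.0, 0.0, 0.2],   # image 1: unrelated
    [0.5, 0.5, 0.5, 0.5],   # image 2: partial match
])

order, scores = cosine_rank(text_emb, image_embs)
print(order[0])  # → 0 (image 0 ranks first)
```

A real pipeline swaps the toy vectors for the outputs of CLIP's text and image encoders and the brute-force ranking for an index in a vector database.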
-
Ask HN: Any good self-hosted image recognition software?
Usually this is done in three steps: first, using a neural network to create a bounding box around the object; then generating vector embeddings of the object; and finally running a similarity search on those embeddings.
The first step is accomplished by training a detection model to generate the bounding box around your object; this can usually be done by finetuning an already-trained detection model. For this step, the data you need is all the images of the object you have, each annotated with a bounding box; the version of the object doesn't matter here.
The second step involves a generalized image classification model that's been pretrained on generalized data (VGG, etc.) and a vector search engine/vector database. You would start by using the image classification model to generate vector embeddings (https://frankzliu.com/blog/understanding-neural-network-embe...) of all the different versions of the object. The more ground-truth images you have, the better, but it doesn't require as many as training a classifier would. Once you have your versions of the object as embeddings, you would store them in a vector database (for example Milvus: https://github.com/milvus-io/milvus).
Now, whenever you want to detect the object in an image, you run the image through the detection model to find the object, then run the cropped-out image of the object through the embedding model. With this vector embedding you can then perform a search in the vector database, and the closest results will most likely be the version of the object.
Hopefully this gives a general rundown of what it would look like. Here is an example using Milvus and Towhee: https://github.com/towhee-io/examples/tree/3a2207d67b10a246f....
Disclaimer: I am a part of those two open source projects.
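To make those three steps concrete, here is a minimal, self-contained numpy sketch. The function names, the stubbed detector, and the toy quadrant-mean "embedding" are illustrative, not from either project; the brute-force cosine search at the end stands in for what a vector database like Milvus does at scale:

```python
import numpy as np

# --- Step 1: detection (stub). A real system would run a finetuned detector
# and return the box around the object; here we pretend the object fills
# the whole frame.
def detect_object(image):
    h, w = image.shape[:2]
    return (0, 0, h, w)  # (y0, x0, y1, x1)

# --- Step 2: embedding (stub). A real system would run the crop through a
# pretrained backbone (VGG, ResNet, ...); this toy version just takes the
# mean intensity of the crop's four quadrants as a 4-d "embedding".
def embed(crop):
    h, w = crop.shape[:2]
    return np.array([
        crop[:h // 2, :w // 2].mean(), crop[:h // 2, w // 2:].mean(),
        crop[h // 2:, :w // 2].mean(), crop[h // 2:, w // 2:].mean(),
    ])

# --- Step 3: similarity search. Brute-force cosine search over the stored
# embeddings; this is the part the vector database handles at scale.
def search(query_emb, db_embs):
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    return int(np.argmax(db @ q))

# Build a tiny "gallery": three versions of an object as 8x8 grayscale images.
versions = [np.zeros((8, 8)) for _ in range(3)]
versions[0][:4, :4] = 1.0   # version 0: bright top-left quadrant
versions[1][:4, 4:] = 1.0   # version 1: bright top-right quadrant
versions[2][:] = 0.5        # version 2: uniform grey
db_embs = np.stack([embed(v) for v in versions])

# Query with a noisy copy of version 0: detect, crop, embed, search.
rng = np.random.default_rng(0)
query_img = versions[0] + rng.normal(0.0, 0.05, (8, 8))
y0, x0, y1, x1 = detect_object(query_img)
crop = query_img[y0:y1, x0:x1]
best = search(embed(crop), db_embs)
print(best)  # → 0 (the noisy query matches version 0)
```

Swapping the stubs for a real detector and backbone, and the brute-force search for a Milvus collection, gives the pipeline described above.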
-
Deep Dive into Real-World Image Search Engine with Python
The previous tutorial showed how to build an image search engine in minutes. Here is another one on how to optimize the algorithm, feed it large-scale image datasets, and deploy it as a microservice.
-
Build an Image Search Engine in Minutes
The full tutorial is at https://github.com/towhee-io/examples/blob/main/image/reverse_image_search/build_image_search_engine.ipynb
Transformers-Tutorials
-
AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
- FLaNK Stack Weekly for 07August2023
- How to annotate compound words to build NER models?
-
[discussion] Anybody Working with VITMAE?
I'm pretraining on 850K grayscale spectrograms of birdsongs. I'm on epoch 400 out of 800 and the loss has declined from about 1.2 to 0.7. I don't really have a sense of what is "good enough" and I guess the only way I can judge is by looking at the reconstruction. I'm doing that using this notebook as a guide and right now it's doing quite badly.
-
[D] NLP has HuggingFace, what does Computer Vision have?
More tutorials can be found at https://github.com/NielsRogge/Transformers-Tutorials.
-
[Discussion] Information Extraction with LayoutLMv2
I've been looking for an off-the-shelf encoder-decoder document-understanding model for key information extraction. I found a great Hugging Face implementation with concise notebook examples. However, the token classification model outputs a list of token labels and the corresponding bounding boxes for each token, but not the text contained within the labeled bounding boxes themselves. Am I missing something? LayoutLMv2 describes itself as being capable of information extraction, but without extracting the text I feel it falls short of that ambition.
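For what it's worth, in the usual LayoutLM workflow the text itself comes from the OCR step run before the model: the token classifier only assigns a label to each word/box, so entity text is recovered by pairing the predicted labels back with the OCR'd words. A sketch of that grouping step, with hypothetical OCR output and BIO labels (not output from any real model run):

```python
def extract_entities(words, labels):
    """Group BIO token labels back onto the OCR words to recover entity text."""
    entities, current, tag = [], [], None
    for word, label in zip(words, labels):
        if label.startswith("B-"):                 # start of a new entity span
            if current:
                entities.append((tag, " ".join(current)))
            current, tag = [word], label[2:]
        elif label.startswith("I-") and current:   # continuation of the span
            current.append(word)
        else:                                      # "O" (or stray I-) ends it
            if current:
                entities.append((tag, " ".join(current)))
            current, tag = [], None
    if current:                                    # flush a trailing span
        entities.append((tag, " ".join(current)))
    return entities

# Hypothetical OCR words and per-word predictions from the token classifier.
words  = ["Invoice", "no.", "INV-001", "dated", "1", "July", "2021"]
labels = ["O", "O", "B-INV_NUM", "O", "B-DATE", "I-DATE", "I-DATE"]
print(extract_entities(words, labels))
# → [('INV_NUM', 'INV-001'), ('DATE', '1 July 2021')]
```

In practice the processor splits words into subword tokens, so per-token predictions are first aggregated back to word level (e.g. taking the label of each word's first token) before a grouping step like this.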
-
[Project] Deepmind's Perceiver IO available through Hugging Face
Example Notebooks
What are some alternatives?
towhee - Towhee is a framework that is dedicated to making neural data processing pipelines simple and fast.
nn - 🧑‍🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
milvus-lite - A lightweight version of Milvus wrapped with Python.
gorilla-cli - LLMs for your CLI
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
anomalib - An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
notebooks - Notebooks using the Hugging Face libraries 🤗
EverythingApacheNiFi - EverythingApacheNiFi
adaptnlp - An easy to use Natural Language Processing library and framework for predicting, training, fine-tuning, and serving up state-of-the-art NLP models.
harlequin - The SQL IDE for Your Terminal.
OpenBuddy - Open Multilingual Chatbot for Everyone