docarray vs vision_transformer

| | docarray | vision_transformer |
|---|---|---|
| Mentions | 32 | 7 |
| Stars | 2,730 | 9,180 |
| Growth | 2.1% | 4.0% |
| Activity | 9.2 | 5.5 |
| Latest Commit | 8 days ago | about 1 month ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
docarray
- DocArray – Represent, send, and store multimodal data for ML
- Some questions about multimodal data
I've heard of DocArray, a library for multimodal data in transit, and of PyTorch Lightning, which is also a tool for multimodal data. The two sound like a promising solution, but I'm not sure how to use them with databases or cloud storage. Do I need to install any additional packages or dependencies?
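To the dependency question: in DocArray v1, most document-store backends are enabled via pip extras (for example `pip install "docarray[qdrant]"`), while an SQLite-backed store works out of the box. A minimal sketch, assuming DocArray v1's storage-backend API:

```python
from docarray import Document, DocumentArray

# SQLite-backed DocumentArray: documents persist on disk instead of in memory.
# Other backends (weaviate, qdrant, redis, ...) follow the same pattern but
# need their pip extra installed and a running server.
da = DocumentArray(
    storage='sqlite',
    config={'connection': 'example.db', 'table_name': 'docs'},
)
da.append(Document(text='hello multimodal world'))
print(len(da))  # 1
```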
- Trying to create an AI recommender system for an ad-free video-streaming service
I'm considering these tools for a recommender system that analyzes text data like user reviews: DocArray and the EZ-MMLA Toolkit. Can anyone share their experience with DocArray and the EZ-MMLA Toolkit? I'd love to hear about others' experiences before making a final decision.
- Do you know any systems that can handle multimodal data fusion and representation learning?
I have been thinking about trying out DocArray and the EZ-MMLA Toolkit. Has anyone had experience with these two projects? Let me know what you think!
- I plan to build my own AI-powered search engine for my portfolio. Do you know of any that are open-source?
For some alternatives, I know there's DocArray, which can handle text, image, and audio data; it's basically a toolbox for multimodal data. Then there's Haystack, which also lets you build search systems and works with Transformers and LLMs.
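As a rough illustration of how DocArray-based search works, here is a minimal nearest-neighbour sketch using DocArray v1's match API; the random vectors stand in for embeddings from a real text encoder:

```python
import numpy as np
from docarray import Document, DocumentArray

# Toy corpus; in practice the embeddings would come from a trained text encoder
corpus = DocumentArray(Document(text=t) for t in ('red shoes', 'blue jeans', 'green hat'))
corpus.embeddings = np.random.random((len(corpus), 8)).astype('float32')

query = Document(text='red sneakers')
query.embedding = np.random.random(8).astype('float32')

# Rank the corpus against the query by cosine distance
query.match(corpus, metric='cosine', limit=2)
for m in query.matches:
    print(m.text, m.scores['cosine'].value)
```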
- A Guide to Using OpenTelemetry in Jina for Monitoring and Tracing Applications
DocArray is used to manipulate data and to interact with the storage backend through its document store.
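The guide covers Jina's built-in instrumentation; as a baseline, the vanilla OpenTelemetry Python SDK looks like this (a generic sketch, not the Jina-specific setup):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Export spans to stdout; a real deployment would use an OTLP exporter instead
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span('process-documents') as span:
    span.set_attribute('docs.count', 3)
    # ... manipulate a DocumentArray here ...
```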
- This week(s) in DocArray
It's already been two weeks since the last alpha release of DocArray v2, and a lot has happened since then: we've merged features we're really proud of, and we've cried tears of joy and misery trying to coerce Python into doing what we want. If you want to learn about interesting Python edge cases, or to follow the development of DocArray v2, this blog post is the right place for you!
- Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
The German Fashion12k dataset is available for free use by the Jina AI community. After logging into Jina AI Cloud, you can download it directly in DocArray format:
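Downloading in DocArray format presumably goes through DocArray v1's cloud push/pull mechanism; a sketch with a hypothetical artifact name (the real dataset id is shown in Jina AI Cloud after login):

```python
from docarray import DocumentArray

# 'fashion12k-german' is a placeholder name; use the id listed in Jina AI Cloud.
# Requires prior authentication, e.g. via `jina auth login`.
da = DocumentArray.pull('fashion12k-german')
da.summary()
```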
- Want to Search Inside Videos Like a Pro? CLIP-as-service Can Help
Jina AI's DocArray library
- Looking for open source projects in Machine Learning and Data Science
You could try spaCy. It's the brains of the operation: an open-source library for advanced NLP in Python. Another is DocArray; it's built on top of NumPy and Dask, and is good for preprocessing, modeling, and analysis of text data.
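For a feel of the spaCy side, a standard preprocessing snippet (assumes the small English pipeline is installed via `python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('DocArray and spaCy make a handy preprocessing pair.')

# Drop stop words; keep token text, part-of-speech tag, and lemma
tokens = [(t.text, t.pos_, t.lemma_) for t in doc if not t.is_stop]
print(tokens)
```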
vision_transformer
- Can I use CLIP to tag my picture collection?
And one last thing: should I even be thinking of using CLIP for these tasks when Google has released a better model here: https://github.com/google-research/vision_transformer/blob/main/model_cards/lit.md
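CLIP can be used for tagging by scoring each image against a list of candidate tags, i.e. zero-shot classification. A minimal sketch with the Hugging Face CLIP checkpoint ('holiday.jpg' and the tag list are placeholders):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')
processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')

image = Image.open('holiday.jpg')  # placeholder: any photo from the collection
tags = ['a dog', 'a beach', 'a birthday party', 'a mountain']

inputs = processor(text=tags, images=image, return_tensors='pt', padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, num_tags)
for tag, p in zip(tags, logits.softmax(dim=-1)[0].tolist()):
    print(f'{tag}: {p:.2f}')
```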
- When the client's management is happy but their dev team is a pain
Google's vision transformers are type hinted.
- Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
We're going to look at a model trained on a broad multilingual dataset: the xlm-roberta-base-ViT-B-32 CLIP model, which uses the ViT-B/32 image encoder and the XLM-RoBERTa multilingual language model. Both of these are pre-trained:
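That model is distributed through OpenCLIP; loading it looks roughly like this (the pretrained weight tag is an assumption; check `open_clip.list_pretrained()` for the exact name):

```python
import torch
import open_clip

# 'laion5b_s13b_b90k' is the assumed weight tag for this architecture
model, _, preprocess = open_clip.create_model_and_transforms(
    'xlm-roberta-base-ViT-B-32', pretrained='laion5b_s13b_b90k'
)
tokenizer = open_clip.get_tokenizer('xlm-roberta-base-ViT-B-32')

# The same text encoder handles queries in different languages
texts = tokenizer(['ein rotes Kleid', 'a red dress'])
with torch.no_grad():
    feats = model.encode_text(texts)
    feats /= feats.norm(dim=-1, keepdim=True)
print(feats.shape)
```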
- [R] How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
JAX code: https://github.com/google-research/vision_transformer
- [D] (Paper Overview) MLP-Mixer: An all-MLP Architecture for Vision
- [P] Animesion: a framework for anime (and related) character recognition. It uses Vision Transformers trained on a subset of Danbooru2018, which we rebranded as DAF:re, and can classify a given image into one of more than 3,000 characters! Source code and checkpoints included.
For this project I used the pretrained models released by Google in JAX, via a particular PyTorch custom implementation. These were pretrained on ImageNet-21k, with 14M images across 21K classes. Then, yes, I fine-tune on two datasets: one with 15K images and 170 characters, and one with almost 500K images and 3K characters.
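A comparable fine-tuning setup would start from timm's ImageNet-21k ViT weights and swap the classification head; a sketch (the timm model id is an assumption; older releases used 'vit_base_patch16_224_in21k'):

```python
import timm
import torch

# ImageNet-21k pretrained ViT with a fresh head for ~3000 character classes
model = timm.create_model(
    'vit_base_patch16_224.augreg_in21k',  # assumed id; check timm.list_models('vit_*')
    pretrained=True,
    num_classes=3000,
)

x = torch.randn(1, 3, 224, 224)
print(model(x).shape)  # torch.Size([1, 3000])
```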
- Short-term memory solutions for video tasks?
What are some alternatives?
Milvus - A cloud-native vector database, storage for next generation AI applications
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
nerfstudio - A collaboration-friendly studio for NeRFs
bootcamp - Dealing with all unstructured data, such as reverse image search, audio search, molecular search, video analysis, question and answer systems, NLP, etc.
ImageNet21K - Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper
kaggle-environments
Fashion12K_german_queries
imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
fashion-200k - Fashion 200K dataset used in paper "Automatic Spatially-aware Fashion Concept Discovery."
discoart - 🪩 Create Disco Diffusion artworks in one line
typeshed - Collection of library stubs for Python, with static types