docarray vs transformers
| | docarray | transformers |
|---|---|---|
| Mentions | 32 | 173 |
| Stars | 2,730 | 124,557 |
| Growth | 2.1% | 2.7% |
| Activity | 9.2 | 10.0 |
| Latest commit | 7 days ago | about 12 hours ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
docarray
- DocArray – Represent, send, and store multimodal data for ML
- Some questions about multimodal data.
I've heard of DocArray, a library for multimodal data in transit, and PyTorch Lightning, which is also a tool for multimodal data. These two sound like a promising solution, but I'm not sure how to use them with databases or cloud storage. Do I need to install any additional packages or dependencies?
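As a pointer for the storage question above: in DocArray v1, switching a DocumentArray from in-memory to a persistent backend is a constructor argument. A minimal sketch, assuming the SQLite backend (which ships with the core package; server-backed stores such as Weaviate or Qdrant generally need extras, e.g. `pip install "docarray[qdrant]"`):

```python
from docarray import Document, DocumentArray

# Persist documents in SQLite instead of keeping them in memory.
# 'my_docs.db' and 'docs' are placeholder names for this sketch.
da = DocumentArray(
    storage='sqlite',
    config={'connection': 'my_docs.db', 'table_name': 'docs'},
)
da.append(Document(text='hello multimodal world'))
print(len(da))  # the documents survive process restarts via the .db file
```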
- Trying to create an AI recommender system for an ad-free video streaming service.
I'm considering using these tools for a recommender system for analyzing text data like user reviews: DocArray and the EZ-MMLA Toolkit. Can anyone share their experience with DocArray and the EZ-MMLA Toolkit? I would love to hear how they worked out for others before making a final decision.
- Do you know any systems that can handle multimodal data fusion and representation learning?
I have been thinking about trying out DocArray and the EZ-MMLA Toolkit. Has anyone had experience with these two projects? Let me know what you think!
- I plan to build my own AI-powered search engine for my portfolio. Do you know any that are open-source?
For some alternatives, I know there's DocArray, which can handle text, image, and audio data and is basically a toolbox for multimodal data. Then there's Haystack, which also lets you build search systems and works with Transformers and LLMs.
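To make the DocArray option above concrete, here is a minimal semantic-search sketch using DocArray v1's `.match` API; the sentence-transformers model name is just an example choice, not something DocArray mandates:

```python
from docarray import Document, DocumentArray
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')  # example embedding model

# Index: embed a tiny text corpus.
corpus = DocumentArray(Document(text=t) for t in
                       ['cats purr', 'dogs bark', 'birds sing'])
corpus.embeddings = model.encode([d.text for d in corpus])

# Query: embed the question and take the top-2 cosine matches.
queries = DocumentArray([Document(text='what sound do cats make?')])
queries.embeddings = model.encode([q.text for q in queries])
queries.match(corpus, limit=2)

for m in queries[0].matches:
    print(m.text, m.scores['cosine'].value)
```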
- A Guide to Using OpenTelemetry in Jina for Monitoring and Tracing Applications
DocArray is used to manipulate data and interact with the storage backend via its document store.
- This week(s) in DocArray
It's already been two weeks since the last alpha release of DocArray v2, and since then a lot has happened: we've merged features we're really proud of, and we've cried tears of joy and misery trying to coerce Python into doing what we want. If you want to learn about interesting Python edge cases or follow the progress of DocArray v2 development, then this blog post is the right place for you!
- Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
The German Fashion12k dataset is available for free use by the Jina AI community. After logging into Jina AI Cloud, you can download it directly in DocArray format:
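A sketch of what that download looks like with DocArray v1's artifact API; the artifact name below is a placeholder for whatever Jina AI Cloud shows after login:

```python
from docarray import DocumentArray

# Assumes you have authenticated first, e.g. with `jina auth login`.
da = DocumentArray.pull('<dataset-artifact-name>', show_progress=True)
print(len(da), 'documents downloaded')
```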
- Want to Search Inside Videos Like a Pro? CLIP-as-service Can Help
Jina AI’s DocArray library
- Looking for open source projects in Machine Learning and Data Science
You could try spaCy. This is the brains of the operation: an open-source library for advanced NLP in Python. Another is DocArray: it's built on top of NumPy and Dask, and it's good for preprocessing, modeling, and analysis of text data.
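For the spaCy half of that suggestion, a minimal preprocessing sketch (assumes the small English model has been installed with `python -m spacy download en_core_web_sm`):

```python
import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp('DocArray and spaCy make a handy preprocessing pipeline.')

# Tokenization, lemmatization, and stop-word filtering in one pass.
tokens = [t.lemma_.lower() for t in doc if t.is_alpha and not t.is_stop]
print(tokens)
```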
transformers
- AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on the Hugging Face course on Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
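For readers who want the shape of such a fine-tune without opening the tutorials, a bare-bones Trainer sketch; the GPT-2 model and WikiText slice are stand-in choices, not the ones from the linked material:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained('gpt2')

# Tiny slice of a public dataset, tokenized for causal LM training.
ds = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train[:1%]')
ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, max_length=128),
            batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='out', num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```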
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to setup for CNNs and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was the best, whilst 1e-5 actually worked better. However, the 1-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
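The 1-cycle schedule the comment refers to is available out of the box in PyTorch; a small sketch, with max_lr=1e-3 mirroring the range-finder suggestion above:

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in model for the sketch
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, total_steps=1000
)

for step in range(1000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).sum()  # dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # LR warms up to max_lr, then anneals over the cycle
```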
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
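Given the version-specific fixes mentioned above, a quick sanity check that the installed release is recent enough (4.38.2 per the linked PR):

```python
import transformers
from packaging import version  # ships as a transformers dependency

assert version.parse(transformers.__version__) >= version.parse('4.38.2'), \
    'upgrade with `pip install -U transformers` to pick up the Gemma fixes'
print('transformers', transformers.__version__, 'includes the RoPE fix')
```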
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
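One way to tinker with the architecture despite the compute gap: instantiate a shrunken, randomly initialized Mistral from its transformers config. A sketch; the sizes below are arbitrary small values, not the released 7B configuration:

```python
from transformers import MistralConfig, MistralForCausalLM

config = MistralConfig(
    hidden_size=256, intermediate_size=512,
    num_hidden_layers=4, num_attention_heads=8,
    num_key_value_heads=2,  # grouped-query attention, as in the real model
)
model = MistralForCausalLM(config)  # random weights, fine for poking around
print(sum(p.numel() for p in model.parameters()), 'parameters')
```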
- Failing to reproduce the same evaluation metric scores during inference.
I am aware that using mixed precision reduces the stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached the graph of evaluation metrics. If someone can give me some insight into this issue, that would be great.
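One way to quantify that drift is to compare full-precision and autocast logits on the same input; a sketch with a stand-in classifier (the poster's actual model and metrics are unknown):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = 'distilbert-base-uncased-finetuned-sst-2-english'  # example model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

inputs = tok('mixed precision can nudge the logits', return_tensors='pt')
with torch.no_grad():
    full = model(**inputs).logits  # float32 reference
    with torch.autocast(device_type='cpu', dtype=torch.bfloat16):
        half = model(**inputs).logits  # reduced-precision pass

print('max logit drift:', (full - half.float()).abs().max().item())
```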
- [D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
In transformers, they tried really hard to have a single function or method deal with both self- and cross-attention mechanisms, masking, positional and relative encodings, interpolation, etc. While this lets a user call the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of Llama by FAIR with the implementation by HF to get an idea.
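A toy illustration of the bloat being described: one forward that serves both self- and cross-attention ends up carrying a pile of optional arguments, most of which are irrelevant on any given call. This is a deliberately stripped-down sketch, not transformers' actual code:

```python
from typing import Optional
import torch
import torch.nn.functional as F

def attention_forward(
    hidden_states: torch.Tensor,
    attention_mask: Optional[torch.Tensor] = None,
    encoder_hidden_states: Optional[torch.Tensor] = None,   # cross-attn only
    encoder_attention_mask: Optional[torch.Tensor] = None,  # cross-attn only
    past_key_value: Optional[tuple] = None,                 # decoding only
    output_attentions: bool = False,                        # debugging only
    use_cache: bool = False,                                # decoding only
) -> torch.Tensor:
    # Self-attention unless encoder states are supplied (cross-attention).
    kv = (encoder_hidden_states if encoder_hidden_states is not None
          else hidden_states)
    mask = (encoder_attention_mask if encoder_hidden_states is not None
            else attention_mask)
    return F.scaled_dot_product_attention(hidden_states, kv, kv, attn_mask=mask)

x = torch.randn(2, 5, 64)
print(attention_forward(x).shape)  # self-attention path
print(attention_forward(x, encoder_hidden_states=torch.randn(2, 7, 64)).shape)
```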
- Mixtral-7b-8expert working in Oobabooga (unquantized multi-gpu)
pip install git+https://github.com/huggingface/transformers.git@main
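After installing from main with the command above, Mixtral loads like any other causal LM. A sketch; `device_map='auto'` needs accelerate installed, and the unquantized weights need substantial GPU memory:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = 'mistralai/Mixtral-8x7B-v0.1'
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, device_map='auto', torch_dtype='auto'  # spread experts across GPUs
)

out = model.generate(**tok('Hello', return_tensors='pt').to(model.device),
                     max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```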
What are some alternatives?
Milvus - A cloud-native vector database, storage for next generation AI applications
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
bootcamp - Dealing with all unstructured data, such as reverse image search, audio search, molecular search, video analysis, question and answer systems, NLP, etc.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
kaggle-environments
llama - Inference code for Llama models
imodels - Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
discoart - 🪩 Create Disco Diffusion artworks in one line
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
habitat-sim - A flexible, high-performance 3D simulator for Embodied AI research.
huggingface_hub - The official Python client for the Huggingface Hub.