bootcamp vs transformers

| | bootcamp | transformers |
|---|---|---|
| Mentions | 24 | 178 |
| Stars | 1,634 | 125,369 |
| Growth | 2.8% | 1.7% |
| Activity | 9.1 | 10.0 |
| Last commit | 1 day ago | 6 days ago |
| Language | HTML | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bootcamp
- FLaNK AI - 01 April 2024
- FLaNK Stack Weekly 22 January 2024
- Milvus Adventures Jan 5, 2023
Metadata Filtering with Zilliz Cloud Pipelines - This tutorial discusses scalar (metadata) filtering and how you can perform it in Zilliz Cloud. It continues the previous blog, Getting started with RAG in just 5 minutes; you can find its code in this notebook, scrolling down to Cell #27.
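For a sense of what that filtering looks like in code, here is a minimal sketch using the pymilvus client; the collection, field names, and filter expression are hypothetical, and it assumes a running Milvus instance with a pre-built collection:

```python
from pymilvus import connections, Collection

# Hypothetical setup: a Milvus server with an "articles" collection
# that has an "embedding" vector field plus scalar fields.
connections.connect(host="localhost", port="19530")
articles = Collection("articles")

query_embedding = [0.0] * 768  # stand-in for a real query vector

results = articles.search(
    data=[query_embedding],
    anns_field="embedding",
    param={"metric_type": "L2", "params": {"nprobe": 10}},
    limit=5,
    # The scalar/metadata filter: only entities matching this boolean
    # expression are considered during the vector search.
    expr='year >= 2023 and category == "tutorial"',
    output_fields=["title", "year"],
)
```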
- Build a search engine, not a vector DB
Partially agree.
Vector DBs are critical components of retrieval systems. What most applications need is a retrieval system, rather than the building blocks of one. That doesn't mean the building blocks are unimportant.
As someone working on a vector DB, I see many users struggling to build their own retrieval systems out of building blocks such as an embedding service (OpenAI, Cohere), a logic orchestration framework (LangChain/LlamaIndex), and a vector database, some even with reranker models. Putting them together is not as easy as it looks: it is a fairly challenging piece of systems work, let alone the quality tuning and devops.
The struggle is no surprise to me, as the tech companies who are experts at this (Google, Meta) all have dedicated teams working on retrieval systems alone, making tons of optimizations and developing a whole feedback loop for evaluating and improving quality. Most developers don't get access to such resources.
No one size fits all. I think there should exist a service that democratizes AI-powered retrieval: in simple words, the know-how of using embeddings + a vector DB and a bunch of tricks to achieve SOTA retrieval quality.
With this idea I built a Retrieval-as-a-Service solution, and here is its demo:
https://github.com/milvus-io/bootcamp/blob/master/bootcamp/R...
Curious to hear your thoughts.
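To make the "putting the blocks together" point concrete, here is a hedged sketch of the minimal glue between two of those building blocks, an embedding service and a vector DB; it uses the OpenAI and pymilvus clients, the collection and field names are made up, and everything that makes it a real retrieval system (chunking, reranking, evaluation) is elided:

```python
from openai import OpenAI
from pymilvus import connections, Collection

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> list[float]:
    # One call to the embedding service -- the first building block.
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

connections.connect(host="localhost", port="19530")
docs = Collection("docs")  # hypothetical pre-built collection

def retrieve(query: str, k: int = 5) -> list[str]:
    # The vector DB is the second building block; everything between
    # these two calls is where the real systems work hides.
    hits = docs.search(
        data=[embed(query)],
        anns_field="embedding",
        param={"metric_type": "IP", "params": {"nprobe": 10}},
        limit=k,
        output_fields=["text"],
    )
    return [h.entity.get("text") for h in hits[0]]
```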
- Vector Database in a Jupyter Notebook
Although it's common to use vector databases in conjunction with LLMs, I like to talk about vector databases in the context of unstructured data, i.e. any data that you can vectorize with (or without) an ML model. Yes, this includes text, but it also includes things like visual data, molecular structures, and geospatial data.
For folks who want to learn a bit more, there are examples of vector database use cases beyond semantic text search in our bootcamp: https://github.com/milvus-io/bootcamp
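As an illustration of vectorizing non-text data, here is a rough sketch that embeds an image with CLIP via transformers; the checkpoint and file name are placeholder choices, and the resulting vector is what you would insert into a vector database:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# CLIP embeds images and text into one shared space, so the same
# vector index can serve image-to-image or text-to-image search.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # any local image file
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    embedding = model.get_image_features(**inputs)[0]  # 512-dim vector

vector = embedding.tolist()  # this is what goes into the vector database
```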
- Beginner-ish resources for choosing a vector database?
Easy to get started: here are some tutorials for Milvus in a Jupyter Notebook that I wrote: reverse image search and semantic text search.
- Semantic Similarity Search
I think you can just store your vector embeddings in the vector store somewhere and then query with your second document. I created a short tutorial on this that shows how to retrieve the top 2 vector embeddings for a text query.
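The "top 2" step is just a nearest-neighbour lookup over stored vectors. A dependency-light sketch with cosine similarity, using random stand-ins for real embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
stored = rng.normal(size=(100, 384))  # stand-in for embeddings already in the store
query = rng.normal(size=384)          # stand-in for the second document's embedding

# Cosine similarity against every stored vector, then take the
# two highest-scoring rows.
sims = stored @ query / (np.linalg.norm(stored, axis=1) * np.linalg.norm(query))
top2 = np.argsort(sims)[-2:][::-1]
print(top2, sims[top2])
```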
- [D] Looking for open source projects to contribute
For more beginner tasks associated with the Milvus vector database, you can contribute to the Bootcamp project (https://github.com/milvus-io/bootcamp), where we build a lot of data-driven solutions using ML and the Milvus vector database, including reverse image search, recommender systems, etc.
- I built an image similarity search system... Suggestions needed: what are some fun image datasets or scenarios I can use with this? :)
Source code here: https://github.com/milvus-io/bootcamp/tree/master/solutions/reverse_image_search
- Faiss: Facebook's open source vector search library
transformers
- XLSTM: Extended Long Short-Term Memory
Fascinating work, very promising.
Can you summarise how the model in your paper differs from this one?
https://github.com/huggingface/transformers/issues/27011
- AI enthusiasm #9 - A multilingual chatbot📣🈸
transformers is a package by Hugging Face that helps you interact with models on the HF Hub (GitHub).
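For context, the usual entry point is the pipeline API, which pulls a model from the Hub on first use and then runs it locally; the task and checkpoint below are just illustrative choices:

```python
from transformers import pipeline

# First call downloads the checkpoint from the HF Hub, then runs locally.
translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("The chatbot answers in several languages."))
```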
- Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) and dm-haiku (https://github.com/google-deepmind/dm-haiku) have been among the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
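For reference, in recent transformers versions (4.37+, if I recall correctly) prompt lookup decoding is exposed as a generate() argument; the model and prompt below are arbitrary stand-ins:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small model, purely illustrative
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ("The quick brown fox jumps over the lazy dog. "
          "Once again: the quick brown fox")
inputs = tokenizer(prompt, return_tensors="pt")

# Candidate tokens are copied from n-grams already present in the
# context, then verified by the model in a single forward pass.
out = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```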
- AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
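In the same spirit as that course, a minimal fine-tuning skeleton with the Trainer API might look like this; the model, dataset, and hyperparameters are illustrative, not the tutorial's:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative choices: a small encoder fine-tuned on a 1% slice of
# IMDB for binary sentiment classification.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

ds = load_dataset("imdb", split="train[:1%]")
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True,
                                padding="max_length", max_length=128),
            batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=ds,  # the "label" column is picked up automatically
)
trainer.train()
```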
- Schedule-Free Learning – A New Way to Train
* Superconvergence + the LR range finder + fastai's Ranger21 optimizer was the go-to recipe for CNNs, and worked fabulously well, but on transformers the LR range finder said 1e-3 was best whilst 1e-5 actually worked better. The 1-cycle learning rate schedule stuck, however (sketched below). https://github.com/huggingface/transformers/issues/16013
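For reference, the 1-cycle schedule mentioned above is available directly in PyTorch; a minimal sketch with arbitrary hyperparameters:

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for a real network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# LR ramps from a low value up to max_lr, then anneals back down
# over the course of training ("superconvergence"-style schedule).
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, total_steps=1000)

for step in range(1000):
    # loss.backward() would precede this in real training
    optimizer.step()
    scheduler.step()
```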
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral MoE
What are some alternatives?
Milvus - A cloud-native vector database, storage for next generation AI applications
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
google-research - Google Research
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
docarray - Represent, send, store and search multimodal data
llama - Inference code for Llama models
es-clip-image-search - Sample implementation of natural language image search with OpenAI's CLIP and Elasticsearch or Opensearch.
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
habitat-sim - A flexible, high-performance 3D simulator for Embodied AI research.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk
huggingface_hub - The official Python client for the Huggingface Hub.