transformers
sentence-transformers
| | transformers | sentence-transformers |
|---|---|---|
| Mentions | 173 | 45 |
| Stars | 124,557 | 13,661 |
| Growth | 2.7% | 3.6% |
| Activity | 10.0 | 9.1 |
| Latest commit | about 23 hours ago | 6 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
transformers
- AI enthusiasm #6 - Finetune any LLM you want💡
Most of this tutorial is based on the Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check out their work and give them a star on GitHub, if you please ❤️
- Schedule-Free Learning – A New Way to Train
* Superconvergence + the LR range finder + Fast AI's Ranger21 optimizer was the go-to recipe for CNNs and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 actually worked better. However, the one-cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
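For readers unfamiliar with it, the one-cycle policy referenced above is built into PyTorch. A minimal sketch of wiring it up (the tiny model, step budget, and the 1e-4 peak LR are placeholder values, not recommendations):

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import OneCycleLR

# Placeholder model and step budget, just to show the schedule wiring.
model = nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

total_steps = 300
scheduler = OneCycleLR(
    optimizer,
    max_lr=1e-4,          # peak of the cycle: warm up to this, then anneal down
    total_steps=total_steps,
)

for step in range(total_steps):
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()      # one scheduler step per optimizer step
```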
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers and pytorch-gemma, and collaborating with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
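As a hedged illustration of the "tinker with the architecture" route above: transformers exposes the Mistral model classes directly, so you can instantiate a scaled-down copy with random weights and inspect how the blocks are wired. The config values below are deliberately shrunk from the real 7B settings so it fits in memory:

```python
from transformers import MistralConfig, MistralForCausalLM

config = MistralConfig(
    hidden_size=512,          # 4096 in Mistral-7B
    intermediate_size=1024,   # 14336 in Mistral-7B
    num_hidden_layers=4,      # 32 in Mistral-7B
    num_attention_heads=8,    # 32 in Mistral-7B
    num_key_value_heads=2,    # 8 in Mistral-7B (grouped-query attention)
    sliding_window=4096,      # Mistral's sliding-window attention
)
model = MistralForCausalLM(config)  # random weights, architecture only
print(model)                        # inspect the module tree
```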
- Fail to reproduce the same evaluation metrics score during inference.
I am aware that using mixed precision reduces the numerical stability of the weights and that there will be some inconsistency, but I didn't expect it to be this much. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
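One sanity check that can help isolate this: run the same batch through the model in full precision and under autocast, and measure the logit drift directly. A sketch, with a placeholder checkpoint and input:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint; substitute the model you are actually evaluating.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

inputs = tok("The movie was surprisingly good.", return_tensors="pt")

with torch.no_grad():
    fp32_logits = model(**inputs).logits
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        amp_logits = model(**inputs).logits

# A large gap here points at precision, not the eval code, as the culprit.
print((fp32_logits - amp_logits.float()).abs().max())
```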
- [D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
In transformers, they tried really hard to have a single function or method deal with self- and cross-attention mechanisms, masking, positional and relative encodings, interpolation, etc. While this lets a user call the same function/method for any model, it has led to severe parameter bloat. Just compare the original implementation of Llama by FAIR with the implementation by HF to get an idea.
- Mixtral-7b-8expert working in Oobabooga (unquantized multi-gpu)
pip install git+https://github.com/huggingface/transformers.git@main
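Installing from main was needed before Mixtral support landed in a transformers release. Once installed, an unquantized multi-GPU load looks roughly like the sketch below (the repo id is the checkpoint Mistral later published under mistralai, which may differ from the early "Mixtral-7b-8expert" conversion, and device_map="auto" requires accelerate):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # assumption: adjust to your checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # unquantized half precision
    device_map="auto",          # shard the experts/layers across available GPUs
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```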
sentence-transformers
- External vectorization
txtai is an open-source-first system. Given its own open-source roots, like-minded projects such as sentence-transformers are prioritized during development. But that doesn't mean txtai can't work with Embeddings API services.
- [D] Looking for a better multilingual embedding model
Ok great. My use case is not very specific, but rather general. I am looking for a model that can perform asymmetric semantic search for the languages I mentioned earlier (Urdu, Persian, Arabic, etc.). I have also looked into the sentence-transformers training documentation. Do you think it would be a good idea to use the XNLI dataset for fine-tuning, or can you suggest a better dataset? Furthermore, I am not sure fine-tuning is even suitable for my task: because my use case is general, I may be able to use an already-trained model as-is.
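If NLI-style fine-tuning does turn out to be the right call, the classic recipe from the sentence-transformers training docs trains with SoftmaxLoss over premise/hypothesis pairs. A hedged sketch (the base model and the two hand-written pairs are placeholders, and the integer labels must match whatever mapping your copy of XNLI uses):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Toy NLI pairs; in practice, build these from the XNLI split you care about.
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating."], label=0),     # e.g. entailment
    InputExample(texts=["A man is eating food.", "The man is sleeping."], label=2), # e.g. contradiction
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,  # entailment / neutral / contradiction
)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```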
- Best pathway for Domain Adaptation with Sentence Transformers?
- Syntactic and Semantic surprisal using an LLM
The task you are looking for is semantic textual similarity. There are a few models and datasets out there that can do this. I'd probably start with the SemEval-2017 Task 1 task description and competition entries, and then work outward from there (using something like Semantic Scholar or Papers With Code to find newer state-of-the-art works that cite these models if needed). For what it's worth, you might find that Sentence-BERT (SBERT) gives good vectors for cosine-similarity comparison out of the box for this task.
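A minimal sketch of the out-of-the-box SBERT approach mentioned at the end (the model name is just a common default, not a specific recommendation):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(
    ["A cat sits on the mat.", "A feline is resting on a rug."],
    convert_to_tensor=True,
)
print(util.cos_sim(emb[0], emb[1]))  # higher = more semantically similar
```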
- Mean pooling in BERT
Check out the sentence-transformers implementation. If I'm not missing anything, they don't exclude the CLS token when the pooling strategy is set to 'mean'.
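For reference, mask-aware mean pooling over all tokens (CLS included) takes only a few lines with plain transformers; this sketch mirrors what the sentence-transformers Pooling module does for the 'mean' strategy, give or take implementation details:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tok(["Mean pooling in BERT"], return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (1, seq_len, 768)

# Zero out padding positions only; CLS and SEP still count toward the mean.
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(sentence_embedding.shape)  # (1, 768)
```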
- I Built an AI Search Engine that can find exact timestamps for anything on Youtube using OpenAI Whisper
Break the transcript up into shorter segments and convert each segment into a 768-dimensional vector through a process known as embedding, using our second ML model: UKP Lab's Sentence-BERT (sentence-transformers) model.
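A hedged sketch of that pipeline with sentence-transformers (the model below is one common 768-dimensional SBERT model, not necessarily the one the post used, and the transcript lines are made up):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")  # produces 768-dim embeddings

segments = [
    "so today we're going to talk about attention mechanisms",
    "first let's install the dependencies",
    "and that's how positional encodings work",
]
seg_emb = model.encode(segments, convert_to_tensor=True)

query_emb = model.encode("how do positional encodings work", convert_to_tensor=True)
hits = util.semantic_search(query_emb, seg_emb, top_k=1)
print(segments[hits[0][0]["corpus_id"]])  # best-matching segment
```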
- Seeking advice on improving NLP search results
Not sure what kind of texts you have, but these models have a max sequence limit of 512 tokens (approx. 350 words or so). If your texts are longer than that, consider splitting them up into chunks, or creating a summary and taking an embedding of that. Some clustering algorithm may be the way to go here. Here's a bunch of examples; I use agglomerative clustering for my use case.
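A small sketch of the chunk-then-cluster suggestion (the model choice and the distance threshold are placeholders to tune, not recommendations; sklearn >= 1.2 spells the parameter metric=, older versions call it affinity=):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = [
    "Shipping times for international orders.",
    "How long does delivery take overseas?",
    "Resetting a forgotten account password.",
]
emb = model.encode(chunks, normalize_embeddings=True)

clustering = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=1.0,  # placeholder; tune on your data
    metric="cosine",
    linkage="average",
).fit(emb)
print(clustering.labels_)  # the two shipping chunks should share a label
```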
- Dev Diary #12 - Finetune model
https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/data_augmentation (Augmented Encoding)
- [R] Customize size of Bio-BERT pre-trained embeddings
For a vector representation you can take the mean and then PCA down to the size that you want, but if you have time, use sentence-transformers to train a vector representation instead.
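A quick sketch of the mean-then-PCA route (the 768-to-128 sizes and the random stand-in data are placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for mean-pooled Bio-BERT embeddings, shape (n_docs, 768).
doc_embeddings = rng.normal(size=(500, 768))

pca = PCA(n_components=128)  # target embedding size
reduced = pca.fit_transform(doc_embeddings)
print(reduced.shape)                        # (500, 128)
print(pca.explained_variance_ratio_.sum())  # variance retained after reduction
```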
- SentenceTransformer producing different sentence embedding results in Docker
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
onnx - Open standard for machine learning interoperability
llama - Inference code for Llama models
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
Top2Vec - Top2Vec learns jointly embedded topic, document and word vectors.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
huggingface_hub - The official Python client for the Huggingface Hub.
datasets - 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
OpenNMT-py - Open Source Neural Machine Translation and (Large) Language Models in PyTorch
faiss - A library for efficient similarity search and clustering of dense vectors.