sentence-transformers
hummingbird
| | sentence-transformers | hummingbird |
|---|---|---|
| Mentions | 45 | 9 |
| Stars | 13,793 | 3,302 |
| Growth | 4.5% | 0.7% |
| Activity | 9.2 | 7.1 |
| Latest commit | 2 days ago | 10 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sentence-transformers
- External vectorization
txtai is an open-source first system. Given its own open-source roots, like-minded projects such as sentence-transformers are prioritized during development. But that doesn't mean txtai can't work with embeddings API services.
- [D] Looking for a better multilingual embedding model
Ok great. My use case is not very specific, but rather general. I am looking for a model that can perform asymmetric semantic search for the languages I mentioned earlier (Urdu, Persian, Arabic, etc.). I have also looked into the sentence-transformers training documentation. Do you think it would be a good idea to use the XNLI dataset for fine-tuning? Or maybe you can suggest a much better dataset. Furthermore, I am not sure fine-tuning is suitable for my task: because my use case is general, I could use an already-trained model.
- Best pathway for Domain Adaptation with Sentence Transformers?
- Syntactic and Semantic surprisal using an LLM
The task you are looking for is semantic textual similarity. There are a few models and datasets out there that can do this. I'd probably start with the SemEval2017 Task 1 task description and competition entries here, and then work outward from there (using something like SemanticScholar or Papers With Code to find newer state-of-the-art works that cite these models if needed). For what it's worth, you might find that Sentence BERT (SBERT) gives good vectors for cosine similarity comparison out of the box for this task.
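For what that looks like in practice, here is a minimal sketch of the out-of-the-box SBERT route the comment describes (the model name is one common choice, not something the comment specifies):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = ["A man is eating food.", "A man is eating a piece of bread."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence vectors.
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"similarity: {score.item():.3f}")
```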
- Mean pooling in BERT
Check out the sentence-transformers implementation. Unless I'm missing something, they don't exclude the CLS token when the pooling strategy is set to 'mean'.
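A quick sketch of that 'mean' strategy using plain transformers shows why CLS ends up included: the average runs over every position where the attention mask is 1, and the mask is 1 for [CLS] too.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer(["Mean pooling example"], return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq, hidden)

# attention_mask is 1 for every real token, including [CLS] and [SEP],
# so those positions are part of the average.
mask = inputs["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
```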
- I Built an AI Search Engine that can find exact timestamps for anything on Youtube using OpenAI Whisper
Break the transcript up into shorter segments and convert each segment to a 768-dimensional vector through a process known as embedding, using our second ML model: UKP Lab's BERT sentence-transformer model.
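A minimal sketch of that embedding step; the chunking rule and model name are assumptions (all-mpnet-base-v2 is one sentence-transformers model that outputs 768-dimensional vectors):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")  # outputs 768-dim vectors

transcript = "..."  # full Whisper transcript text
# Naive fixed-width chunking; a real pipeline would split on
# Whisper's segment timestamps instead.
segments = [transcript[i:i + 500] for i in range(0, len(transcript), 500)]

vectors = model.encode(segments)  # shape: (num_segments, 768)
```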
- Seeking advice on improving NLP search results
Not sure what kind of texts you have, but these models have a max sequence limit of 512 tokens (roughly 350 words). If your texts are longer than that, consider splitting them up into chunks, or creating a summary and taking an embedding of that. Some clustering algorithm may be the way to go here. Here's a bunch of examples. I use agglomerative clustering for my use case.
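A sketch of that chunk-then-cluster idea; the model name and distance threshold are assumptions to tune, not values from the comment:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = ["first chunk of a long document", "second chunk", "unrelated text"]

embeddings = model.encode(chunks)

# distance_threshold lets the data decide the number of clusters;
# the 1.5 value here is just a starting point to tune.
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=1.5)
labels = clustering.fit_predict(embeddings)
print(labels)
```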
- Dev Diary #12 - Finetune model
https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/data_augmentation (Augmented Encoding)
- [R] Customize size of Bio-BERT pre-trained embeddings
For a vector representation you can take the mean and then apply PCA to get the size that you want, but if you have time, use sentence-transformers to train a vector representation instead.
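A sketch of the mean-then-PCA route on stand-in data; the 128-dimension target and array shapes are arbitrary examples:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for per-document token embeddings from Bio-BERT's
# last hidden state: one (num_tokens, 768) array per document.
docs = [np.random.rand(40, 768) for _ in range(300)]

# Mean over tokens gives one 768-dim vector per document.
doc_vectors = np.stack([d.mean(axis=0) for d in docs])  # (300, 768)

# Project down to the size you want (128 here is arbitrary).
pca = PCA(n_components=128)
reduced = pca.fit_transform(doc_vectors)  # (300, 128)
```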
- SentenceTransformer producing different sentence embedding results in Docker
hummingbird
- Treebomination: Convert a scikit-learn decision tree into a Keras model
- [D] GPU-enabled scikit-learn
If you are interested in just predictions, you can try Hummingbird. It is part of the PyTorch ecosystem. We take already-trained scikit-learn models and translate them into PyTorch models. From there you can run your model on any hardware supported by PyTorch, export it to TVM, ONNX, etc. Performance with hardware acceleration is quite good (orders of magnitude better than scikit-learn in some cases).
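A minimal sketch of that workflow, following the convert/to/predict API from the Hummingbird README:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

X, y = make_classification(n_samples=1000, n_features=20)
skl_model = RandomForestClassifier(n_estimators=100).fit(X, y)

# Translate the trained scikit-learn model into a PyTorch model.
torch_model = convert(skl_model, "pytorch")
torch_model.to("cuda")  # or any other device PyTorch supports

preds = torch_model.predict(X)
```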
- Machine Learning with PyTorch and Scikit-Learn – The *New* Python ML Book
I think Rapids AI's cuML tried to go in this direction (essentially scikit-learn on the GPU): https://docs.rapids.ai/api/cuml/stable/api.html#logistic-reg.... For some reason it never really took off, though.
Btw., going on a tangent, you might like Hummingbird (https://github.com/microsoft/hummingbird). It allows you to convert trained scikit-learn tree-based models to PyTorch. I watched the SciPy talk last year, and it's a super smart & elegant idea.
- Export and run models with ONNX
ONNX opens an avenue for direct inference using a number of languages and platforms. For example, a model could be run directly on Android to limit the data sent to a third-party service. ONNX is an exciting development with a lot of promise. Microsoft has also released Hummingbird, which enables exporting traditional models (sklearn models such as decision trees and logistic regression) to ONNX.
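A sketch of the Hummingbird-to-ONNX route; the ONNX backend traces the model, so it needs sample input, and onnxruntime must be installed:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from hummingbird.ml import convert

X, y = make_classification(n_samples=500, n_features=10)
skl_model = LogisticRegression().fit(X, y)

# Sample input fixes the tensor shapes during conversion;
# float32 keeps onnxruntime happy.
onnx_model = convert(skl_model, "onnx", X.astype(np.float32))
preds = onnx_model.predict(X.astype(np.float32))
```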
- Supreme Court, in a 6–2 ruling in Google v. Oracle, concludes that Google’s use of Java API was a fair use of that material
And Python.
- [D] Here are 3 ways to Speed Up Scikit-Learn - Any suggestions?
For inference, you can convert your models to other formats that support GPU acceleration. See Hummingbird https://github.com/microsoft/hummingbird
- [D] Microsoft library, Hummingbird, compiles trained ML models into tensor computation for faster inference.
The surprising thing is that Hummingbird can be faster than the GPU implementations of LightGBM (and XGBoost) if you use tensor compilers such as TVM. [The paper](https://www.usenix.org/conference/osdi20/presentation/nakandala) describes our findings. We have also open-sourced the [benchmark code](https://github.com/microsoft/hummingbird/tree/main/benchmarks) so you can try it yourself!
- I learned about Microsoft's Hummingbird library today. 1000x performance??
I took their sample code from GitHub and tweaked it to print the time for each model's prediction, as well as increasing the number of rows to 5 million. I used Google Colab and selected GPU as my hardware accelerator. This gives the option to run code on the GPU, though not all computations will happen on the GPU.
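A rough sketch of that measurement setup; the model size, row count, and random data are placeholders, and absolute numbers will vary by hardware:

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

X = np.random.rand(5_000_000, 28).astype(np.float32)
y = np.random.randint(2, size=X.shape[0])

# Train on a subset so the fit itself stays quick.
skl_model = RandomForestClassifier(n_estimators=10, max_depth=8)
skl_model.fit(X[:100_000], y[:100_000])

start = time.time()
skl_model.predict(X)
print(f"scikit-learn CPU: {time.time() - start:.2f}s")

gpu_model = convert(skl_model, "pytorch")
gpu_model.to("cuda")

start = time.time()
gpu_model.predict(X)
print(f"Hummingbird GPU: {time.time() - start:.2f}s")
```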
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
onnx - Open standard for machine learning interoperability
swift - The Swift Programming Language
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
cuml - cuML - RAPIDS Machine Learning Library
Top2Vec - Top2Vec learns jointly embedded topic, document and word vectors.
docker - Docker - the open-source application container engine
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
chemprop - Message Passing Neural Networks for Molecule Property Prediction
datasets - 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
tune-sklearn - A drop-in replacement for Scikit-Learn’s GridSearchCV / RandomizedSearchCV -- but with cutting edge hyperparameter tuning techniques.