Fast_Sentence_Embeddings Alternatives
Similar projects and alternatives to Fast_Sentence_Embeddings
-
cso-classifier
Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO).
-
RecSys_Course_AT_PoliMi
This is the official repository for the Recommender Systems course at Politecnico di Milano.
Fast_Sentence_Embeddings reviews and mentions
-
The Illustrated Word2Vec
This is a great guide.
Also - despite the fact that language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.
With just a bit of tuning, they're generally as good at many sentence embedding tasks [2], and with good libraries [3] you're getting something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.
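To make the classic approach concrete: a sentence embedding in this family is often just the average of pretrained word vectors. The tiny 3-dimensional vectors below are invented purely for the sketch; in practice you would load real pretrained vectors (e.g. GloVe or word2vec):

```python
# Sketch: classic "average word vectors" sentence embedding.
# TOY_VECTORS is a made-up stand-in for a real pretrained vocabulary.
TOY_VECTORS = {
    "the": [0.1, 0.0, 0.2],
    "cat": [0.9, 0.3, 0.1],
    "sat": [0.2, 0.8, 0.5],
}

def sentence_embedding(sentence, vectors=TOY_VECTORS):
    """Average the word vectors of all in-vocabulary tokens."""
    tokens = [t for t in sentence.lower().split() if t in vectors]
    if not tokens:
        return None  # no known words, no embedding
    dim = len(next(iter(vectors.values())))
    summed = [0.0] * dim
    for t in tokens:
        for i, v in enumerate(vectors[t]):
            summed[i] += v
    return [s / len(tokens) for s in summed]
```

Libraries like fse add weighting schemes (e.g. SIF) and heavy optimization on top of this basic idea, which is where the very high sentences/sec figures come from.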
When you should use language model embeddings:
- Multilingual tasks. While some embedding models are multilingually aligned (e.g. MUSE [4]), you still need to route each sentence to the correct embedding model file (you need something like langdetect). It's also cumbersome, with one ~400 MB file per language.
For LM embedding models, many are multilingually aligned out of the box.
- Tasks that are very context specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, the embedding space works best when it separates seemingly related medical terms more strongly.
This calls for models with more embedding dimensions, and heavily favors LM models over classic embedding models.
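The per-language routing overhead mentioned for multilingual classic models can be sketched as a lookup from detected language to model file. The file names are hypothetical stand-ins for per-language MUSE vector files, and `detect` stands in for a real detector such as `langdetect.detect`:

```python
# Hypothetical per-language vector files, one ~400 MB file each.
MODEL_FILES = {
    "en": "wiki.multi.en.vec",
    "de": "wiki.multi.de.vec",
    "fr": "wiki.multi.fr.vec",
}

def route_to_model(sentence, detect):
    """Pick the embedding file for a sentence, given a language
    detector callable (in practice something like langdetect.detect)."""
    lang = detect(sentence)
    try:
        return MODEL_FILES[lang]
    except KeyError:
        raise ValueError(f"no embedding model for language {lang!r}")
```

A multilingually aligned LM embedding model avoids this step entirely, since one model handles all the languages it was trained on.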
1. sbert.net
2. https://collaborate.princeton.edu/en/publications/a-simple-b...
3. https://github.com/oborchers/Fast_Sentence_Embeddings
4. https://github.com/facebookresearch/MUSE
-
You probably shouldn't use OpenAI's embeddings
You can find some comparisons and evaluation datasets/tasks here: https://www.sbert.net/docs/pretrained_models.html
Generally MiniLM is a good baseline. For faster models you want this library:
https://github.com/oborchers/Fast_Sentence_Embeddings
For higher-quality ones, just take the bigger/slower models in the SentenceTransformers library.
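Whichever model produces the vectors, the comparisons on those benchmark pages mostly come down to cosine similarity between sentence embeddings. A minimal pure-Python version (the example vectors in the test are made up):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors,
    returning 0.0 when either vector has zero norm."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

With real models you would feed the encoder's output vectors straight into a function like this (or a vectorized equivalent) to rank candidate sentences.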
-
[D] Unsupervised document similarity state of the art
Links:
fse: https://github.com/oborchers/Fast_Sentence_Embeddings
Sentence-transformers: https://github.com/oborchers/sentence-transformers
Stats
oborchers/Fast_Sentence_Embeddings is an open source project licensed under the GNU General Public License v3.0 only, which is an OSI-approved license.
The primary programming language of Fast_Sentence_Embeddings is Jupyter Notebook.
Popular Comparisons
- Fast_Sentence_Embeddings VS gensim
- Fast_Sentence_Embeddings VS smaller-labse
- Fast_Sentence_Embeddings VS cso-classifier
- Fast_Sentence_Embeddings VS kgtk
- Fast_Sentence_Embeddings VS RecSys_Course_AT_PoliMi
- Fast_Sentence_Embeddings VS sentence-transformers
- Fast_Sentence_Embeddings VS wembedder
- Fast_Sentence_Embeddings VS gpt4-pdf-chatbot-langchain