Fast_Sentence_Embeddings VS jiant

Compare Fast_Sentence_Embeddings vs jiant and see what their differences are.

                  Fast_Sentence_Embeddings              jiant
Mentions          3                                     2
Stars             603                                   1,605
Growth            -                                     1.0%
Activity          0.0                                   0.0
Latest commit     about 1 year ago                      10 months ago
Language          Jupyter Notebook                      Python
License           GNU General Public License v3.0 only  MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Fast_Sentence_Embeddings

Posts with mentions or reviews of Fast_Sentence_Embeddings. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-19.
  • The Illustrated Word2Vec
    3 projects | news.ycombinator.com | 19 Apr 2024
    This is a great guide.

    Also - despite the fact that language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.

    With just a bit of tuning, they're generally as good at many sentence embedding tasks [2], and with good libraries [3] you're getting something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.

    When you should use language model embeddings:

    - Multilingual tasks. While some embedding models are multilingually aligned (e.g. MUSE [4]), you still need to route each sentence to the correct embedding model file (you need something like langdetect). It's also cumbersome, with one ~400 MB file per language.

    Many LM embedding models, by contrast, are multilingually aligned out of the box.

    - Tasks that are very context-specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, the embedding space works best when it puts more distance between seemingly related medical terms.

    This means models with more embedding dimensions, which heavily favors LM models over classic embedding models.

    1. sbert.net

    2. https://collaborate.princeton.edu/en/publications/a-simple-b...

    3. https://github.com/oborchers/Fast_Sentence_Embeddings

    4. https://github.com/facebookresearch/MUSE
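To make the routing point in the post above concrete, here is a minimal sketch of per-language model selection with langdetect. The model file names and the route() helper are hypothetical, purely for illustration:

    # Hypothetical routing of sentences to per-language embedding model files,
    # as described in the post above. Requires: pip install langdetect
    from langdetect import detect

    # Hypothetical file names: one word-vector file (~400 MB) per language.
    MODEL_FILES = {"en": "wiki.en.vec", "de": "wiki.de.vec", "fr": "wiki.fr.vec"}

    def route(sentence: str) -> str:
        """Pick the embedding model file matching the sentence's language."""
        lang = detect(sentence)                      # e.g. "en", "de", "fr", ...
        return MODEL_FILES.get(lang, "wiki.en.vec")  # fall back to English

    print(route("Das ist ein Beispielsatz."))        # -> wiki.de.vec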

  • You probably shouldn't use OpenAI's embeddings
    5 projects | news.ycombinator.com | 30 Mar 2023
    You can find some comparisons and evaluation datasets/tasks here: https://www.sbert.net/docs/pretrained_models.html

    Generally MiniLM is a good baseline. For faster models you want this library:

    https://github.com/oborchers/Fast_Sentence_Embeddings

    For higher-quality ones, just take the bigger/slower models from the SentenceTransformers library.
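The SentenceTransformers workflow recommended above is short; a minimal sketch follows. The checkpoint names (all-MiniLM-L6-v2, all-mpnet-base-v2) are the commonly published ones and may change over time:

    # Minimal SentenceTransformers usage: MiniLM as the fast baseline,
    # a bigger model from the same library when quality matters more than speed.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")     # small, fast baseline
    # model = SentenceTransformer("all-mpnet-base-v2")  # bigger/slower, higher quality

    embeddings = model.encode([
        "How do I reset my password?",
        "I forgot my login credentials.",
    ])
    print(util.cos_sim(embeddings[0], embeddings[1]))   # semantic similarity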

  • [D] Unsupervised document similarity state of the art
    2 projects | /r/MachineLearning | 9 Apr 2021
    Links:
    fse: https://github.com/oborchers/Fast_Sentence_Embeddings
    Sentence-transformers: https://github.com/oborchers/sentence-transformers
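For reference, here is a minimal sketch of fse itself, assuming the pre-1.0 API (Average over gensim word vectors, trained via IndexedList); newer releases expose a slightly different interface, so treat the class names as illustrative:

    # Rough sketch of averaged sentence embeddings with fse
    # (https://github.com/oborchers/Fast_Sentence_Embeddings).
    from gensim.models import FastText
    from fse import IndexedList
    from fse.models import Average

    sentences = [["cat", "say", "meow"], ["dog", "say", "woof"]]

    # Any gensim word-embedding model works; a tiny FastText model keeps this self-contained.
    ft = FastText(sentences, min_count=1, vector_size=10)  # older gensim versions use size=10

    model = Average(ft)                     # plain averaging; SIF/uSIF are the weighted variants
    model.train(IndexedList(sentences))     # fse consumes (words, index) pairs via IndexedList
    print(model.sv[0])                      # vector for sentence 0
    print(model.sv.similarity(0, 1))        # cosine similarity between the two sentences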

jiant

Posts with mentions or reviews of jiant. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-11.

What are some alternatives?

When comparing Fast_Sentence_Embeddings and jiant you can also consider the following projects:

gensim - Topic Modelling for Humans

kiri - Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.

smaller-labse - Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE

SGDepth - [ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance

cso-classifier - Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO).

allennlp - An open-source NLP research library, built on PyTorch.

kgtk - Knowledge Graph Toolkit

bertviz - BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)

RecSys_Course_AT_PoliMi - This is the official repository for the Recommender Systems course at Politecnico di Milano.

haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.

sentence-transformers - Sentence Embeddings with BERT & XLNet

PaddleNLP - πŸ‘‘ Easy-to-use and powerful NLP and LLM library with πŸ€— Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including πŸ—‚Text Classification, πŸ” Neural Search, ❓ Question Answering, ℹ️ Information Extraction, πŸ“„ Document Intelligence, πŸ’Œ Sentiment Analysis etc.