Fast_Sentence_Embeddings VS marqo

Compare Fast_Sentence_Embeddings vs marqo and see what their differences are.

                   Fast_Sentence_Embeddings               marqo
Mentions           3                                      114
Stars              603                                    4,152
Growth             -                                      2.3%
Activity           0.0                                    9.3
Latest commit      about 1 year ago                       about 17 hours ago
Language           Jupyter Notebook                       Python
License            GNU General Public License v3.0 only   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Fast_Sentence_Embeddings

Posts with mentions or reviews of Fast_Sentence_Embeddings. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-19.
  • The Illustrated Word2Vec
    3 projects | news.ycombinator.com | 19 Apr 2024
    This is a great guide.

    Also, despite the fact that language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.

    With just a bit of tuning, they're generally as good at many sentence embedding tasks [2], and with good libraries [3] you're getting something like 400k sentences/sec on a laptop CPU versus ~4k-15k sentences/sec on a V100 for LM embeddings.

    When you should use language model embeddings:

    - Multilingual tasks. While some embedding models are aligned across languages (e.g. MUSE [4]), you still need to route each sentence to the correct embedding model file (you need something like langdetect). It's also cumbersome, with one ~400 MB file per language.

    Many LM embedding models, by contrast, are multilingually aligned out of the box.

    - Tasks that are very context-specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, the embedding space works best when it puts more distance between seemingly related medical terms.

    This calls for models with more embedding dimensions, which heavily favors LM models over classic embedding models.

    1. sbert.net

    2. https://collaborate.princeton.edu/en/publications/a-simple-b...

    3. https://github.com/oborchers/Fast_Sentence_Embeddings

    4. https://github.com/facebookresearch/MUSE
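
A minimal sketch of the classic-embedding workflow this comment describes, using the fse library from [3]. The Vectors/Average/IndexedList API and the pretrained vector name follow the library's README and may differ between fse versions, so treat this as an illustration rather than a verified recipe:

    # Averaging-based sentence embeddings with fse (oborchers/Fast_Sentence_Embeddings).
    # API names follow the fse README and may vary between releases.
    from fse import Average, IndexedList, Vectors

    # Pretrained word vectors; the model name assumes fse exposes gensim-data style names.
    vecs = Vectors.from_pretrained("glove-wiki-gigaword-100")

    sentences = [
        ["the", "cat", "sat", "on", "the", "mat"],
        ["a", "dog", "rested", "on", "the", "rug"],
    ]

    model = Average(vecs)                # plain averaging; SIF/uSIF add frequency weighting
    model.train(IndexedList(sentences))  # builds one vector per sentence

    # Sentence vectors are addressed by index; similarity is cosine similarity.
    print(model.sv.similarity(0, 1))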

  • You probably shouldn't use OpenAI's embeddings
    5 projects | news.ycombinator.com | 30 Mar 2023
    You can find some comparisons and evaluation datasets/tasks here: https://www.sbert.net/docs/pretrained_models.html

    Generally MiniLM is a good baseline. For faster models you want this library:

    https://github.com/oborchers/Fast_Sentence_Embeddings

    For higher-quality ones, just take the bigger/slower models from the SentenceTransformers library.
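
As a concrete illustration of the MiniLM baseline mentioned above (all-MiniLM-L6-v2 is one of the pretrained SentenceTransformers checkpoints; swap in a larger model from the same library for higher quality):

    # Minimal SentenceTransformers baseline: encode two sentences and compare them.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small, fast baseline model
    embeddings = model.encode(
        ["How do I reset my password?", "Steps to recover account access"],
        normalize_embeddings=True,
    )

    # Cosine similarity between the two sentence embeddings.
    print(util.cos_sim(embeddings[0], embeddings[1]))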

  • [D] Unsupervised document similarity state of the art
    2 projects | /r/MachineLearning | 9 Apr 2021
    Links: fse: https://github.com/oborchers/Fast_Sentence_Embeddings Sentence-transformers: https://github.com/oborchers/sentence-transformers

marqo

Posts with mentions or reviews of marqo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-25.
  • Are we at peak vector database?
    8 projects | news.ycombinator.com | 25 Jan 2024
    We (Marqo) are doing a lot on 1 and 2. There is a huge amount to be done on the ML side of vector search and we are investing heavily in it. I think it has not quite sunk in that vector search systems are ML systems and everything that comes with that. I would love to chat about 1 and 2 so feel free to email me (email is in my profile). What we have done so far is here -> https://github.com/marqo-ai/marqo
  • Qdrant, the Vector Search Database, raised $28M in a Series A round
    8 projects | news.ycombinator.com | 23 Jan 2024
    Marqo.ai (https://github.com/marqo-ai/marqo) is doing some interesting stuff and is oss. We handle embedding generation as well as retrieval (full disclosure, I work for Marqo.ai)
  • Ask HN: Is there any good semantic search GUI for images or documents?
    2 projects | news.ycombinator.com | 17 Jan 2024
    Take a look here https://github.com/marqo-ai/local-image-search-demo. It is based on https://github.com/marqo-ai/marqo. We do a lot of image search applications. Feel free to reach out if you have other questions (email in profile).
  • 90x Faster Than Pgvector – Lantern's HNSW Index Creation Time
    7 projects | news.ycombinator.com | 2 Jan 2024
    That sounds much longer than it should. I am not sure on your exact use-case but I would encourage you to check out Marqo (https://github.com/marqo-ai/marqo - disclaimer, I am a co-founder). All inference and orchestration is included (no api calls) and many open-source or fine-tuned models can be used.
  • Embeddings: What they are and why they matter
    9 projects | news.ycombinator.com | 24 Oct 2023
    Try this https://github.com/marqo-ai/marqo which handles all the chunking for you (and is configurable). Also handles chunking of images in an analogous way. This enables highlighting in longer docs and also for images in a single retrieval step.
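
A rough sketch of the chunking and highlighting flow described in the post above, using Marqo's Python client against a local instance. The index name and document fields are made up for illustration, and the tensor_fields argument and _highlights key follow recent Marqo client versions, so check the docs for the release you run:

    # Index a longer document with Marqo and get back the matching chunk as a highlight.
    # Assumes a local Marqo server on port 8882; names and fields are illustrative only.
    import marqo

    mq = marqo.Client(url="http://localhost:8882")
    mq.create_index("docs")

    long_text = (
        "Marqo splits longer fields into chunks before embedding them. "
        "At search time the best-matching chunk is returned as a highlight, "
        "so you can point users at the relevant part of a long document."
    )

    mq.index("docs").add_documents(
        [{"title": "chunking notes", "body": long_text}],
        tensor_fields=["body"],          # which fields get embedded
    )

    results = mq.index("docs").search("how does highlighting work?")
    print(results["hits"][0]["_highlights"])  # the chunk that matched the query
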
  • Choosing vector database: a side-by-side comparison
    3 projects | news.ycombinator.com | 4 Oct 2023
    As others have correctly pointed out, to make a vector search or recommendation application requires a lot more than similarity alone. We have seen the HNSW become commoditised and the real value lies elsewhere. Just because a database has vector functionality doesn’t mean it will actually service anything beyond “hello world” type semantic search applications. IMHO these have questionable value, much like the simple Q and A RAG applications that have proliferated.

    The elephant in the room with these systems is that if you are relying on machine learning models to produce the vectors you are going to need to invest heavily in the ML components of the system. Domain specific models are a must if you want to be a serious contender to an existing search system and all the usual considerations still apply regarding frequent retraining and monitoring of the models. Currently this is left as an exercise to the reader - and a very large one at that.

    We (https://github.com/marqo-ai/marqo, I am a co-founder) are investing heavily into making the ML production worthy and continuous learning from feedback of the models as part of the system. Lots of other things to think about in how you represent documents with multiple vectors, multimodality, late interactions, the interplay between embedding quality and HNSW graph quality (i.e. recall) and much more.
  • Show HN: Marqo – Vectorless Vector Search
    1 project | news.ycombinator.com | 16 Aug 2023
  • AI for AWS Documentation
    6 projects | news.ycombinator.com | 6 Jul 2023
    Marqo provides automatic, configurable chunking (for example with overlap) and can allow you to bring your own model or choose from a wide range of opensource models. I think e5-large would be a good one to try. https://github.com/marqo-ai/marqo
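
For illustration, index settings along these lines select a specific open-source model and enable chunking with overlap. The settings_dict layout shown (index_defaults, text_preprocessing, split_overlap) and the "hf/e5-large" model name follow older Marqo examples and may have been renamed in newer releases, so verify them against the current docs:

    # Create a Marqo index that uses an e5 model and overlapping sentence chunks.
    # Key names follow older Marqo examples and may differ in current releases.
    import marqo

    mq = marqo.Client(url="http://localhost:8882")

    mq.create_index(
        "aws-docs",
        settings_dict={
            "index_defaults": {
                "model": "hf/e5-large",      # one of Marqo's registered open-source models
                "text_preprocessing": {
                    "split_length": 2,       # sentences per chunk
                    "split_overlap": 1,      # overlapping sentences between chunks
                    "split_method": "sentence",
                },
            }
        },
    )
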
  • [N] Open-source search engine Meilisearch launches vector search
    2 projects | /r/MachineLearning | 6 Jul 2023
    Marqo has a similar API to Meilisearch's standard API but uses vector search in the background: https://github.com/marqo-ai/marqo
  • Ask HN: Which Vector Database do you recommend for LLM applications?
    1 project | news.ycombinator.com | 29 Jun 2023
    Have you tried Marqo? check the repo : https://github.com/marqo-ai/marqo

What are some alternatives?

When comparing Fast_Sentence_Embeddings and marqo you can also consider the following projects:

gensim - Topic Modelling for Humans

Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering, with the fault tolerance and scalability of a cloud-native database.

smaller-labse - Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE

gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs

cso-classifier - Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO).

Milvus - A cloud-native vector database, storage for next generation AI applications

kgtk - Knowledge Graph Toolkit

qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

RecSys_Course_AT_PoliMi - This is the official repository for the Recommender Systems course at Politecnico di Milano.

vault-ai - OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc) using a simple React frontend.

sentence-transformers - Sentence Embeddings with BERT & XLNet

marqo - Tensor search for humans. [Moved to: https://github.com/marqo-ai/marqo]