vectara-answer VS ann-benchmarks

Compare vectara-answer vs ann-benchmarks and see what their differences are.

vectara-answer

LLM-powered Conversational AI experience using Vectara (by vectara)

ann-benchmarks

Benchmarks of approximate nearest neighbor libraries in Python (by erikbern)
                  vectara-answer        ann-benchmarks
Mentions          13                    51
Stars             217                   4,619
Stars growth      1.8%                  -
Activity          8.9                   7.7
Last commit       7 days ago            9 days ago
Language          TypeScript            Python
License           Apache License 2.0    MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

vectara-answer

Posts with mentions or reviews of vectara-answer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-13.
  • Show HN: Quepid now works with vector search
    1 project | news.ycombinator.com | 16 Oct 2023
    Hi HN!

    I lead product for Vectara (https://vectara.com), and we recently worked with OpenSource Connections both to evaluate our new home-grown embedding model (Boomerang) and to help users start evaluating these systems more quantitatively on their own data and with their own queries.

    OSC maintains a fantastic open-source tool, Quepid, and we worked with them to integrate Vectara (and to use it to quantitatively evaluate Boomerang). We're hoping this allows more vector/hybrid search players to be more transparent about the quality of their systems and the models they use, instead of everyone relying on, and gaming, a benchmark like BEIR.

    More details on OSC's eval can be found at https://opensourceconnections.com/blog/2023/10/11/learning-t...
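
Editorial note: the post above is about quantitatively evaluating retrieval quality with Quepid, which scores ranked results against human relevance judgments using metrics such as nDCG@k. The snippet below is a minimal, hypothetical sketch of that metric (linear-gain form), not Quepid's own code.

```python
import math

def ndcg_at_k(relevances, k=10):
    """nDCG@k for one query, given graded judgments in ranked (best-first) order.

    Note: a full implementation computes the ideal DCG from all judged documents
    for the query, not just the retrieved list used here.
    """
    def dcg(rels):
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Example: graded judgments (0-3) for the top 5 results of a single query
print(ndcg_at_k([3, 2, 3, 0, 1], k=5))  # ~0.97
```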

  • A Comprehensive Guide for Building Rag-Based LLM Applications
    6 projects | news.ycombinator.com | 13 Sep 2023
    RAG is a very useful flow, but I agree the complexity is often overwhelming, especially as you move from a toy example to a real production deployment. It's not just choosing a vector DB (last time I checked there were about 50), managing it, and deciding how to chunk data; you also need to ensure your retrieval pipeline is accurate and fast, keep data secure and private, and manage the whole thing as it scales. That's one of the main benefits of using Vectara (https://vectara.com; FD: I work there) - it's a GenAI platform that abstracts all this complexity away so you can focus on building your application.
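
Editorial note: as a concrete picture of the moving parts listed in the comment above, here is a minimal retrieve-then-generate sketch. The embed, vector_store, and generate callables are hypothetical stand-ins for whatever embedding model, vector database, and LLM a deployment actually uses; chunking strategy, security, and scaling are exactly the parts this toy version glosses over.

```python
def chunk(text, size=500, overlap=50):
    """Naive fixed-size chunking; production pipelines usually split on sentences or headings."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def answer(question, documents, embed, vector_store, generate, k=5):
    # 1. Index: chunk each document and store chunk embeddings
    for doc in documents:
        for piece in chunk(doc):
            vector_store.add(embed(piece), piece)
    # 2. Retrieve: nearest chunks to the question embedding
    context = vector_store.search(embed(question), k=k)
    # 3. Generate: ground the LLM on the retrieved chunks
    prompt = "Answer using only this context:\n" + "\n".join(context) + "\n\nQ: " + question
    return generate(prompt)
```
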
  • Do we think about vector dbs wrong?
    7 projects | news.ycombinator.com | 5 Sep 2023
    I agree. My experience is that hybrid search does provide better results in many cases, and it is honestly not as easy to implement as it may seem at first. In general, getting search right is complicated today, and the common thinking of "hey, I'm going to put up a vector DB and use that" is simplistic.

    Disclaimer: I'm with Vectara (https://vectara.com), we provide an end-to-end platform for building GenAI products.

  • What is a GenAI Platform?
    1 project | /r/ChatGPT | 11 Aug 2023
    In this article I discuss my long-held belief that it's time we shifted the discussion from "which vector database to use" for GenAI and instead think about "how do we make this whole architecture simpler to use", a focus of GenAI platforms like https://vectara.com
  • Comparison of Vector Databases
    7 projects | news.ycombinator.com | 31 Jul 2023
    With Vectara (full disclosure: I work there; https://vectara.com) we provide a simple API to implement applications with Grounded Generation (aka retrieval-augmented generation). The embeddings model, the vector store, the retrieval engine, and all the other functionality are implemented by the Vectara platform, so you don't have to choose which vector DB or which embeddings model to use, and so on. It makes life easy and simple, and you can focus on developing your application.
  • Vectara, a good alternative for data ingestion by LLMs
    1 project | /r/langchainfr | 7 Jul 2023
  • Train a model based on text from pdfs
    2 projects | /r/LargeLanguageModels | 7 Jul 2023
    You can also use Vectara to implement this. Just upload the docs via the indexing API and then run queries via the search API. It tends to be less complicated with Vectara since we take care of many things internally (vector DB, embeddings, etc.). Let me know if I can help further with that.
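
Editorial note: the comment above describes a two-call pattern - index documents, then query them. The sketch below only illustrates that pattern with requests; the base URL, headers, and JSON fields are placeholders, not Vectara's documented API, which should be taken from its own reference.

```python
import requests

API_BASE = "https://api.example.com"        # placeholder, not a real endpoint
HEADERS = {"x-api-key": "YOUR_API_KEY"}     # placeholder auth header

# 1. Upload (index) a document - hypothetical payload shape
requests.post(f"{API_BASE}/index", headers=HEADERS,
              json={"document": {"id": "doc-1", "text": "extracted text of the PDF..."}})

# 2. Run a query against the indexed corpus - hypothetical payload shape
resp = requests.post(f"{API_BASE}/query", headers=HEADERS,
                     json={"query": "What does the report conclude?", "top_k": 5})
print(resp.json())
```
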
  • ChatGPT-like interface for product search
    1 project | /r/ChatGPT | 15 Jun 2023
    I found vectara.com but all examples seem to be about feeding text. I'm not super technical so I may be missing something. Please let me know if I need to elaborate further.
  • Vectara-Answer
    1 project | news.ycombinator.com | 9 Jun 2023
  • ChatGPT made everyone realize that we don't want to search, we want answers.
    1 project | /r/ChatGPT | 8 Jun 2023
    Yes, agreed that if ChatGPT becomes monetized the same way as Google, the fun will be over. We'll have to wait and see. I think, though, that this innovation is not just applicable to web search or consumer search, and with products like vectara.com providing this type of user experience in the enterprise, there is a significant net gain here overall.

ann-benchmarks

Posts with mentions or reviews of ann-benchmarks. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-30.
  • Using Your Vector Database as a JSON (Or Relational) Datastore
    1 project | news.ycombinator.com | 23 Apr 2024
    Off the top of my head, pgvector only supports two index types, and those run in memory only. It doesn't support GPU indexing or disk-based indexing, and it doesn't separate queries from insertions.

    Also, the different people I've talked to struggle with scale past 100K-1M vectors.

    You can also have a look yourself from a performance perspective: https://ann-benchmarks.com/
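
Editorial note: the two index types pgvector currently ships are IVFFlat and HNSW (the latter since pgvector 0.5.0). A minimal sketch of creating them from Python via psycopg2 - the connection string is a placeholder, and in practice you would pick one index type and build it after loading data:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder connection string
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(1536))")

# IVFFlat: clusters vectors into lists, probes a subset of lists at query time
cur.execute("CREATE INDEX ON items USING ivfflat (embedding vector_ip_ops) WITH (lists = 100)")

# HNSW: graph-based index, generally better recall/latency at higher build cost
cur.execute("CREATE INDEX ON items USING hnsw (embedding vector_ip_ops) WITH (m = 16, ef_construction = 64)")

conn.commit()
```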

  • ANN Benchmarks
    1 project | news.ycombinator.com | 25 Jan 2024
  • Approximate Nearest Neighbors Oh Yeah
    5 projects | news.ycombinator.com | 30 Oct 2023
    https://ann-benchmarks.com/ is a good resource covering those libraries and much more.
  • pgvector vs Pinecone: cost and performance
    1 project | dev.to | 23 Oct 2023
    We utilized the ANN Benchmarks methodology, a standard for benchmarking vector databases. Our tests used the dbpedia dataset of 1,000,000 OpenAI embeddings (1536 dimensions) and the inner-product distance metric for both Pinecone and pgvector.
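
Editorial note: the core of the ANN Benchmarks methodology mentioned above is to compute exact nearest neighbors as ground truth and then measure how many of them each system returns (recall) at a given throughput. A rough NumPy sketch of that recall calculation, using inner product as in the test described:

```python
import numpy as np

def exact_topk(queries, corpus, k=10):
    """Ground truth: exact top-k neighbors by inner product."""
    scores = queries @ corpus.T                      # (n_queries, n_corpus)
    return np.argsort(-scores, axis=1)[:, :k]

def recall_at_k(approx_ids, exact_ids):
    """Average fraction of true neighbors recovered by the approximate index."""
    hits = [len(set(a) & set(e)) / len(e) for a, e in zip(approx_ids, exact_ids)]
    return float(np.mean(hits))

# approx_ids would come from the system under test (pgvector, Pinecone, ...)
```
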
  • Vector database is not a separate database category
    3 projects | news.ycombinator.com | 2 Oct 2023
    Data warehouses are columnar stores. They are very different from row-oriented databases like Postgres and MySQL: operations on columns, e.g. aggregations (the mean of a column), are very efficient.

    Most vector databases use one of a few vector indexing libraries - FAISS, hnswlib, and ScaNN (Google only) are popular. The newer vector DBs, like Weaviate, have introduced their own indexes, but I haven't seen any performance difference.

    Reference: https://ann-benchmarks.com/
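
Editorial note: for readers who haven't used the libraries named above, here is a minimal sketch of indexing and querying the same data with FAISS and hnswlib (random vectors, inner-product space); ScaNN and the database-native indexes follow the same add-then-query shape.

```python
import numpy as np
import faiss      # pip install faiss-cpu
import hnswlib    # pip install hnswlib

dim, n = 128, 10_000
data = np.random.rand(n, dim).astype("float32")
query = np.random.rand(1, dim).astype("float32")

# FAISS: exact inner-product index (swap in IndexHNSWFlat or IndexIVFFlat for ANN)
flat = faiss.IndexFlatIP(dim)
flat.add(data)
scores, ids = flat.search(query, 10)

# hnswlib: graph-based approximate index
hnsw = hnswlib.Index(space="ip", dim=dim)
hnsw.init_index(max_elements=n, ef_construction=200, M=16)
hnsw.add_items(data)
labels, distances = hnsw.knn_query(query, k=10)
```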

  • How We Made PostgreSQL a Better Vector Database
    2 projects | news.ycombinator.com | 25 Sep 2023
    (Blog author here). Thanks for the question. In this case the index for both DiskANN and pgvector HNSW is small enough to fit in memory on the machine (8GB RAM), so there's no need to touch the SSD. We plan to test on a config where the index size is larger than memory (we couldn't this time due to limitations in ANN benchmarks [0], the tool we use).

    To your question about RAM usage, we provide a graph of index size. With PQ enabled, our new index is 10x smaller than pgvector HNSW. We don't have numbers for HNSWPQ in FAISS yet.

    [0]: https://github.com/erikbern/ann-benchmarks/
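
Editorial note: the 10x figure above refers to the blog's pgvector-based index, but the effect of product quantization itself is easy to see in FAISS: instead of keeping every 1536-dim float32 vector (6,144 bytes), each vector is encoded as a handful of small codes. A hedged sketch with FAISS's IndexIVFPQ, with parameters chosen only for illustration:

```python
import numpy as np
import faiss

dim, n = 1536, 20_000
data = np.random.rand(n, dim).astype("float32")

# Full-precision graph index: stores the raw vectors (dim * 4 bytes each)
hnsw = faiss.IndexHNSWFlat(dim, 32)
hnsw.add(data)

# IVF + product quantization: each vector becomes 96 one-byte codes (~64x smaller storage)
quantizer = faiss.IndexFlatL2(dim)
ivfpq = faiss.IndexIVFPQ(quantizer, dim, 256, 96, 8)  # nlist=256, m=96 subquantizers, 8 bits each
ivfpq.train(data)
ivfpq.add(data)
```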

  • Do we think about vector dbs wrong?
    7 projects | news.ycombinator.com | 5 Sep 2023
  • Vector Search with OpenAI Embeddings: Lucene Is All You Need
    2 projects | news.ycombinator.com | 3 Sep 2023
    In terms of "All You Need" for Vector Search, ANN Benchmarks (https://ann-benchmarks.com/) is a good site to review when deciding what you need. As with anything complex, there often isn't a universal solution.

    txtai (https://github.com/neuml/txtai) can build indexes with Faiss, Hnswlib and Annoy. All 3 libraries have been around at least 4 years and are mature. txtai also supports storing metadata in SQLite, DuckDB and the next release will support any JSON-capable database supported by SQLAlchemy (Postgres, MariaDB/MySQL, etc).
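
Editorial note: a minimal txtai sketch to make the description above concrete. The keyword-argument style follows recent txtai releases (older versions take a config dict), and content=True is what stores the text in SQLite so the SQL-style query at the end works.

```python
from txtai import Embeddings

embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)

embeddings.index(["HNSW trades memory for recall",
                  "Lucene added HNSW-based vector search",
                  "Annoy builds a forest of random projection trees"])

# Plain semantic search
print(embeddings.search("which library uses trees?", 1))

# SQL over the SQLite-backed content store, with similar() injecting the vector search
print(embeddings.search("select id, text, score from txtai where similar('vector search') limit 2"))
```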

  • Vector databases: analyzing the trade-offs
    5 projects | news.ycombinator.com | 20 Aug 2023
    pgvector doesn't perform well compared to other methods, at least according to ANN-Benchmarks (https://ann-benchmarks.com/).

    txtai is more than just a vector database. It also has a built-in graph component for topic modeling that uses the vector index to autogenerate relationships. It can store metadata in SQLite/DuckDB, with support for other databases coming. It supports running LLM prompts right alongside the data, similar to a stored procedure, through workflows. And it has built-in support for turning data into vectors.

    For vector databases that simply store vectors, I agree that it's nothing more than just a different index type.

  • Vector Dataset benchmark with 1536/768 dim data
    3 projects | news.ycombinator.com | 14 Aug 2023
    The reason https://ann-benchmarks.com is so good is that we can see a plot of recall vs. latency. I can see you have some latency numbers in the leaderboard at the bottom, but it's very difficult to make a decision from them.

    As a practitioner who works with vector databases every day, I find latency alone meaningless, because I need to know whether it's fast AND accurate, and what the tradeoff is - you can't have it both ways. So it would be helpful to show plots of this tradeoff, similar to ann-benchmarks.
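
Editorial note: the tradeoff the commenter asks for is exactly what ann-benchmarks plots - recall on one axis, queries per second on the other, one point per search-time parameter setting. A rough sketch in that spirit, assuming an hnswlib-style index and precomputed exact neighbors (both assumptions, not part of the original post):

```python
import time
import matplotlib.pyplot as plt

def sweep(index, queries, exact_ids, ef_values, k=10):
    """Record (recall, QPS) for each search-time setting of an hnswlib-style index."""
    points = []
    for ef in ef_values:
        index.set_ef(ef)                          # search-time accuracy/speed knob
        start = time.perf_counter()
        labels, _ = index.knn_query(queries, k=k)
        qps = len(queries) / (time.perf_counter() - start)
        recall = sum(len(set(l) & set(e)) for l, e in zip(labels, exact_ids)) / (len(queries) * k)
        points.append((recall, qps))
    return points

def plot(points):
    recalls, qps = zip(*points)
    plt.plot(recalls, qps, marker="o")
    plt.xlabel("Recall@10")
    plt.ylabel("Queries per second (log scale)")
    plt.yscale("log")
    plt.show()
```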

What are some alternatives?

When comparing vectara-answer and ann-benchmarks you can also consider the following projects:

llama-hub - A library of data loaders for LLMs made by the community -- to be used with LlamaIndex and/or LangChain

pgvector - Open-source vector similarity search for Postgres

llm-applications - A comprehensive guide to building RAG-based LLM applications for production.

faiss - A library for efficient similarity search and clustering of dense vectors.

VectorDBBench - A Benchmark Tool for VectorDB

Milvus - A cloud-native vector database, storage for next generation AI applications

txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows

tlsh

motorhead - 🧠 Motorhead is a memory and information retrieval server for LLMs.

vald - Vald. A Highly Scalable Distributed Vector Search Engine

pyod - A Comprehensive and Scalable Python Library for Outlier Detection (Anomaly Detection)

pgANN - Fast Approximate Nearest Neighbor (ANN) searches with a PostgreSQL database.