google-research VS txtai

Compare google-research vs txtai and see how they differ.

                      google-research       txtai
Mentions              98                    356
Stars                 32,915                7,033
Stars growth (MoM)    1.1%                  3.2%
Activity              9.6                   9.3
Latest commit         4 days ago            9 days ago
Language              Jupyter Notebook      Python
License               Apache License 2.0    Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

google-research

Posts with mentions or reviews of google-research. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-10.
  • Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
    11 projects | news.ycombinator.com | 10 Apr 2024
    People on here will be happy to hear that I do a similar thing; however, my sequence length is dynamic because I also use a second data structure - I'll use pretentious academic speak: I use a simple bigram LM (2-gram) for single next-word likelihood, and separately a trie that models all words and phrases (so, n-gram). I'm not sure how many total nodes there are because sentence lengths vary in the training data, but there are about 200,000 entry points (keys), so probably about 2-10 million total nodes in the default setup.

    "Constructing 7-gram LM": They likely started with bigrams (what I use) which only tells you the next word based on 1 word given, and thought to increase accuracy by modeling out more words in a sequence, and eventually let the user (developer) pass in any amount they want to model (https://github.com/google-research/google-research/blob/5c87...). I thought of this too at first, but I actually got more accuracy (and speed) out of just keeping them as bigrams and making a totally separate structure that models out an n-gram of all phrases (e.g. could be a 24-token long sequence or 100+ tokens etc. I model it all) and if that phrase is found, then I just get the bigram assumption of the last token of the phrase. This works better when the training data is more diverse (for a very generic model), but theirs would probably outperform mine on accuracy when the training data has a lot of nearly identical sentences that only change wildly toward the end - I don't find this pattern in typical data though, maybe for certain coding and other tasks there are those patterns though. But because it's not dynamic and they make you provide that number, even a low number (any phrase longer than 2 words) - theirs will always have to do more lookup work than with simple bigrams and they're also limited by that fixed number as far as accuracy. I wonder how scalable that is - if I need to train on occasional ~100-word long sentences but also (and mostly) just ~3-word long sentences, I guess I set this to 100 and have a mostly "undefined" trie.

    I also thought of the name "LMJS" (theirs is "jslm" :)), but I went with simply "next-token-prediction" because that's what it ultimately does as a library. I don't know what theirs is really designed for other than proving a concept; most of their code files are actually comments and hypothetical scenarios.

    I recently added a browser example showing simple autocomplete using my library: https://github.com/bennyschmidt/next-token-prediction/tree/m... (video)

    And next I'm implementing 8-dimensional embeddings that are converted to normalized vectors between 0 and 1, to see if doing math on them does anything useful beyond similarity. Right now they look like this:

      [nextFrequency, prevalence, specificity, length, firstLetter, lastLetter, firstVowel, lastVowel]
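
    For readers who want the shape of that setup, here is a minimal, hypothetical Python sketch of a bigram table plus a phrase trie as described above. It illustrates the idea only - it is not code from next-token-prediction or jslm, and every name in it is made up.

      from collections import defaultdict

      class PhrasePredictor:
          """Bigram LM for next-word prediction plus a trie over full phrases.

          Hypothetical sketch of the commenter's description: the trie only
          records whether a whole phrase was seen in training; the prediction
          itself is always the bigram assumption on the phrase's last token.
          """

          def __init__(self):
              self.bigrams = defaultdict(lambda: defaultdict(int))  # word -> {next word: count}
              self.trie = {}  # nested dict: token -> subtrie

          def train(self, sentences):
              for sentence in sentences:
                  tokens = sentence.lower().split()
                  for a, b in zip(tokens, tokens[1:]):
                      self.bigrams[a][b] += 1
                  node = self.trie
                  for token in tokens:  # phrases of any length, so depth is dynamic
                      node = node.setdefault(token, {})

          def next_word(self, word):
              candidates = self.bigrams.get(word)
              return max(candidates, key=candidates.get) if candidates else None

          def predict(self, phrase):
              tokens = phrase.lower().split()
              node, known_phrase = self.trie, True
              for token in tokens:
                  if token not in node:
                      known_phrase = False
                      break
                  node = node[token]
              # "If that phrase is found, I just get the bigram assumption of
              # the last token" - known_phrase is that found/not-found signal.
              return self.next_word(tokens[-1]), known_phrase
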
  • Google Research website is down
    1 project | news.ycombinator.com | 5 Apr 2024
  • Jpegli: A New JPEG Coding Library
    9 projects | news.ycombinator.com | 3 Apr 2024
    The change was literally just made: https://github.com/google-research/google-research/commit/4a...

    It appears this was in response to Hacker News comments.

  • Multi-bitrate JPEG compression perceptual evaluation dataset 2023
    1 project | news.ycombinator.com | 31 Jan 2024
  • Vector Databases: A Technical Primer [pdf]
    7 projects | news.ycombinator.com | 12 Jan 2024
    There are options such as Google's ScaNN that may let you go farther before needing to consider specialized databases.

    https://github.com/google-research/google-research/blob/mast...
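
    For orientation, this is roughly how ScaNN is used from Python, following the example in its README; the random dataset and the parameter values below are illustrative assumptions, not tuning recommendations.

      import numpy as np
      import scann  # pip install scann

      # Build an approximate nearest-neighbor searcher over normalized embeddings.
      dataset = np.random.rand(10000, 128).astype(np.float32)
      dataset /= np.linalg.norm(dataset, axis=1, keepdims=True)

      searcher = (
          scann.scann_ops_pybind.builder(dataset, 10, "dot_product")
          .tree(num_leaves=100, num_leaves_to_search=10, training_sample_size=10000)
          .score_ah(2, anisotropic_quantization_threshold=0.2)  # asymmetric hashing
          .reorder(100)  # exact re-scoring of the top candidates
          .build()
      )

      neighbors, distances = searcher.search(dataset[0], final_num_neighbors=10)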

  • Labs.Google
    1 project | news.ycombinator.com | 22 Dec 2023
    I feel it was unnecessary to create this because https://research.google/ already exists. It just seems like they wanted to take another URL with a "pure" domain name instead of subdirectories, etc.
  • Smerf: Streamable Memory Efficient Radiance Fields
    3 projects | news.ycombinator.com | 13 Dec 2023
    https://github.com/google-research/google-research/blob/mast...
  • Shisa 7B: a new JA/EN bilingual model based on Mistral 7B
    2 projects | /r/LocalLLaMA | 7 Dec 2023
    You could also try some dedicated translation models like https://huggingface.co/facebook/nllb-moe-54b (or https://github.com/google-research/google-research/tree/master/madlad_400 for something smaller) and see how they do.
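    As a hedged sketch of what running one of the smaller MADLAD-400 checkpoints looks like with Hugging Face transformers (the model id and the "<2xx>" target-language prefix are as I understand the model card; verify before relying on them):

      from transformers import T5ForConditionalGeneration, T5Tokenizer

      name = "google/madlad400-3b-mt"  # assumed checkpoint id
      tokenizer = T5Tokenizer.from_pretrained(name)
      model = T5ForConditionalGeneration.from_pretrained(name)

      # MADLAD-400 expects a "<2xx>" prefix naming the target language (ja = Japanese).
      inputs = tokenizer("<2ja> You could also try a dedicated translation model.",
                         return_tensors="pt").input_ids
      outputs = model.generate(inputs, max_new_tokens=64)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
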
  • Translate to and from 400+ languages locally with MADLAD-400
    1 project | /r/LocalLLaMA | 10 Nov 2023
    Google released T5X checkpoints for MADLAD-400 a couple of months ago, but nobody could figure out how to run them. Turns out the vocabulary was wrong, but they uploaded the correct one last week.
  • Mastering ROUGE Matrix: Your Guide to Large Language Model Evaluation for Summarization with Examples
    2 projects | dev.to | 8 Oct 2023

txtai

Posts with mentions or reviews of txtai. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-01.
  • Show HN: FileKitty – Combine and label text files for LLM prompt contexts
    5 projects | news.ycombinator.com | 1 May 2024
  • What contributing to Open-source is, and what it isn't
    1 project | news.ycombinator.com | 27 Apr 2024
    I tend to agree with this sentiment. Many junior devs and/or those in college want to contribute. Then they feel entitled to have a PR merged that they worked hard on, often without guidance. I'm all for working with people, but projects have standards and not all ideas make sense. In many cases, especially with commercial open source, the project is the base of a company's identity. So it's not just there for drive-by ideas to pad a resume or finish a school project.

    For those who do want to do this, I'd recommend writing an issue and/or reaching out to the developers to engage in a dialogue. This takes work but it will increase the likelihood of a PR being merged.

    Disclaimer: I'm the primary developer of txtai (https://github.com/neuml/txtai), an open-source vector database + RAG framework

  • Build knowledge graphs with LLM-driven entity extraction
    1 project | dev.to | 21 Feb 2024
    txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.
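    For a flavor of the API, here is a minimal semantic-search sketch with txtai; the embedding model shown is an assumption (any sentence-transformers model should work), and the result format follows the txtai docs as I recall them:

      from txtai import Embeddings  # pip install txtai

      # content=True stores the original text so search results include it
      embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)
      embeddings.index([
          "US tops 5 million confirmed virus cases",
          "Canada's last fully intact ice shelf has suddenly collapsed",
      ])

      # Returns the closest match by embedding similarity, not keyword overlap
      print(embeddings.search("public health story", 1))
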
  • Bootstrap or VC?
    1 project | news.ycombinator.com | 5 Feb 2024
    Bootstrapping only works if you have the runway to do it and you don't feel the need to grow fast.

    With NeuML (https://neuml.com), I've gone the bootstrapping route. I've been able to build a fairly successful open source project (txtai, 6K stars: https://github.com/neuml/txtai) and a revenue-positive company. It's a "live within your means" strategy.

    VC funding can have a snowball effect where you need more and more. Then you're in the loop of needing funding rounds to survive. The hope is someday you're acquired or start turning a profit.

    I would say both have their pros and cons. Not all ideas have the luxury of time.

  • txtai: An embeddings database for semantic search, graph networks and RAG
    1 project | news.ycombinator.com | 3 Feb 2024
  • Ask HN: What happened to startups, why is everything so polished?
    2 projects | news.ycombinator.com | 27 Jan 2024
    I agree that in many cases people are puffing up their feathers to try to be something they're not (at least not yet). Some believe in the "fake it until you make it" mentality.

    With NeuML (https://neuml.com), the website is a simple HTML page. On social media, I'm honest about what NeuML is and that I'm in my 40s with a family, not striving to be the next Steve Jobs. I've been able to build a fairly successful open source project (txtai, 6K stars: https://github.com/neuml/txtai) and a revenue-positive company. For me, authenticity and being genuine are most important, and being genuine has been far more of an asset than a liability.

  • Are we at peak vector database?
    8 projects | news.ycombinator.com | 25 Jan 2024
    I'll add txtai (https://github.com/neuml/txtai) to the list.

    There is still plenty of room for innovation in this space. Just need to focus on the right projects that are innovating and not the ones (re)working on problems solved in 2020/2021.

  • Txtai: An all-in-one embeddings database for semantic search and LLM workflows
    1 project | news.ycombinator.com | 24 Jan 2024
  • Generate knowledge with Semantic Graphs and RAG
    1 project | dev.to | 23 Jan 2024
    txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.
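    A rough sketch of the RAG side, composed from txtai primitives (the model names are placeholders, and the LLM class and prompt format are assumptions to verify against the txtai docs):

      from txtai import Embeddings, LLM

      embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)
      embeddings.index([
          "txtai is an all-in-one embeddings database",
          "Semantic graphs connect related records in the index",
      ])

      llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")  # placeholder model id

      question = "What is txtai?"
      # Retrieve context first, then ground the LLM's answer in it
      context = "\n".join(x["text"] for x in embeddings.search(question, 3))
      print(llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
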
  • Show HN: Open-source Rule-based PDF parser for RAG
    9 projects | news.ycombinator.com | 23 Jan 2024
    Nice project! I've long used Tika for document parsing given its maturity and the wide range of formats it supports. The XHTML output helps with chunking documents for RAG.

    Here are a couple of examples:

    - https://neuml.hashnode.dev/build-rag-pipelines-with-txtai

    - https://neuml.hashnode.dev/extract-text-from-documents

    Disclaimer: I'm the primary author of txtai (https://github.com/neuml/txtai).
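
    To make the Tika suggestion concrete, here is a small hypothetical sketch with tika-python (it requires a Java runtime; the XHTML output mentioned above comes from xmlContent=True):

      from tika import parser  # pip install tika

      parsed = parser.from_file("report.pdf", xmlContent=True)
      print(parsed["metadata"].get("Content-Type"))

      # parsed["content"] is an XHTML string; its <p>/<h1> structure is what
      # makes chunking documents for RAG easier than with plain-text output.
      print(parsed["content"][:500])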

What are some alternatives?

When comparing google-research and txtai you can also consider the following projects:

qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

sentence-transformers - Multilingual Sentence & Image Embeddings with BERT

fast-soft-sort - Fast Differentiable Sorting and Ranking

tika-python - Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.

faiss - A library for efficient similarity search and clustering of dense vectors.

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Milvus - A cloud-native vector database, storage for next generation AI applications

CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image

struct2depth - Models and examples built with TensorFlow

paperai - 📄 🤖 Semantic search and workflows for medical/scientific papers