bootcamp VS google-research

Compare bootcamp vs google-research and see what their differences are.

bootcamp

Dealing with all unstructured data, such as reverse image search, audio search, molecular search, video analysis, question and answer systems, NLP, etc. (by milvus-io)
             bootcamp             google-research
Mentions     24                   98
Stars        1,619                32,804
Growth       3.6%                 1.5%
Activity     9.1                  9.6
Last commit  6 days ago           5 days ago
Language     HTML                 Jupyter Notebook
License      Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

bootcamp

Posts with mentions or reviews of bootcamp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-01.
  • FLaNK AI - 01 April 2024
    31 projects | dev.to | 1 Apr 2024
  • FLaNK Stack Weekly 22 January 2024
    37 projects | dev.to | 22 Jan 2024
  • Milvus Adventures Jan 5, 2023
    1 project | dev.to | 5 Jan 2024
    Metadata Filtering with Zilliz Cloud Pipelines: This tutorial discusses scalar (metadata) filtering and how you can perform it in Zilliz Cloud. It continues the previous post, Getting started with RAG in just 5 minutes. You can find its code in this notebook; scroll down to Cell #27.
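
    A minimal sketch of that kind of metadata filter, using the open-source pymilvus client rather than the Zilliz Cloud Pipelines API the tutorial covers; the collection name and scalar fields here are made up for illustration:

      # Hypothetical example: vector search restricted by scalar metadata.
      from pymilvus import Collection, connections

      connections.connect(host="localhost", port="19530")
      docs = Collection("blog_chunks")          # assumed pre-built collection

      query_embedding = [0.0] * 768             # stand-in for a real query vector
      results = docs.search(
          data=[query_embedding],
          anns_field="embedding",
          param={"metric_type": "L2", "params": {"nprobe": 10}},
          limit=5,
          expr='publish_year >= 2023 and source == "blog"',  # the metadata filter
          output_fields=["title"],
      )
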
  • Build a search engine, not a vector DB
    3 projects | news.ycombinator.com | 20 Dec 2023
    Partially agree.

    Vector DBs are critical components in retrieval systems. What most applications need are retrieval systems, rather than building blocks of retrieval systems. That doesn't mean the building blocks are not important.

    As someone working on a vector DB, I see many users struggling to build their own retrieval systems from building blocks such as an embedding service (OpenAI, Cohere), a logic orchestration framework (LangChain/LlamaIndex), and a vector database, some even with reranker models. Putting them together is not as easy as it looks; it is fairly challenging systems work, let alone the quality tuning and devops.

    The struggle is no surprise to me: the tech companies that are experts at this (Google, Meta) all have dedicated teams working on retrieval systems alone, making tons of optimizations and developing a whole feedback loop for evaluating and improving quality. Most developers don't get access to such resources.

    No one size fits all. I think there should be a service that democratizes AI-powered retrieval: in simple words, the know-how of using embeddings plus a vector DB, and a bunch of tricks, to achieve SOTA retrieval quality.

    With this idea I built a Retrieval-as-a-service solution, and here is its demo:

    https://github.com/milvus-io/bootcamp/blob/master/bootcamp/R...

    Curious to learn your thoughts.
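
    To make the "putting them together" step concrete, here is a minimal sketch of the embed-and-search loop such a system is built around. It assumes an OpenAI embedding model and an existing Milvus collection named "docs" with "embedding" and "text" fields; these are illustrative choices, not anything from the linked demo:

      # Hypothetical embed-and-search loop; model, collection, and field
      # names are illustrative.
      from openai import OpenAI
      from pymilvus import Collection, connections

      client = OpenAI()                         # reads OPENAI_API_KEY from the env
      connections.connect(host="localhost", port="19530")
      docs = Collection("docs")                 # assumed pre-built collection

      def retrieve(query: str, k: int = 5) -> list[str]:
          # 1. Embed the query text.
          vec = client.embeddings.create(
              model="text-embedding-3-small", input=[query]
          ).data[0].embedding
          # 2. Nearest-neighbor search in the vector DB.
          hits = docs.search(
              data=[vec], anns_field="embedding",
              param={"metric_type": "COSINE", "params": {}},
              limit=k, output_fields=["text"],
          )
          return [hit.entity.get("text") for hit in hits[0]]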

  • Vector Database in a Jupyter Notebook
    1 project | news.ycombinator.com | 6 Jun 2023
    Although it's common to use vector databases in conjunction with LLMs, I like to talk about vector databases in the context of unstructured data, i.e. any data that you can vectorize with (or without) an ML model. Yes, this includes text, but it also includes things like visual data, molecular structures, and geospatial data.

    For folks who want to learn a bit more, there are examples of vector database use cases beyond semantic text search in our bootcamp: https://github.com/milvus-io/bootcamp
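
    As a concrete example of the non-text case, here is a sketch of turning an image into a vector with an off-the-shelf CNN, the general approach behind reverse image search (the bootcamp's own tutorials may use different models):

      # Embed an image by taking a ResNet's penultimate-layer features.
      import torch
      from PIL import Image
      from torchvision import models, transforms

      model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
      model.fc = torch.nn.Identity()     # drop the classifier, keep 2048-d features
      model.eval()

      preprocess = transforms.Compose([
          transforms.Resize(256),
          transforms.CenterCrop(224),
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225]),
      ])

      with torch.no_grad():
          img = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # any image file
          embedding = model(img).squeeze(0)  # 2048-d vector, ready for a vector DB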

  • Beginner-ish resources for choosing a vector database?
    1 project | /r/vectordatabase | 20 May 2023
    Easy to get started: Here are some tutorials for Milvus in a Jupyter Notebook that I wrote - reverse image search, semantic text search
  • Semantic Similarity Search
    1 project | /r/learnmachinelearning | 13 May 2023
    I think you can just store your vector embeddings in a vector store somewhere and then query with your second document. I created a short tutorial on this that shows how to get the top 2 vector embeddings from a text query.
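
    At its core that is just a k-nearest-neighbor lookup over embeddings. Purely as a toy illustration (not the tutorial's code), with the vectors assumed to come from whatever embedding model you use:

      # Toy version of "top 2 embeddings for a text query" with plain numpy.
      import numpy as np

      def top_k(query_vec, doc_vecs, k=2):
          q = query_vec / np.linalg.norm(query_vec)
          d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
          sims = d @ q                    # cosine similarity to every document
          return np.argsort(-sims)[:k]    # indices of the k best matches
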
  • [D] Looking for open source projects to contribute
    15 projects | /r/MachineLearning | 9 Jan 2022
    For more beginner tasks associated with the Milvus vector database, you can contribute to the Bootcamp project (https://github.com/milvus-io/bootcamp), where we build a lot of data-driven solutions using ML and the Milvus vector database, including reverse image search, recommender systems, etc.
  • I built an image similarity search system... Suggestions needed: what are some fun image datasets or scenarios I can use with this? :)
    3 projects | /r/datascience | 21 Dec 2021
    Source code here: https://github.com/milvus-io/bootcamp/tree/master/solutions/reverse_image_search
  • Faiss: Facebook's open source vector search library
    8 projects | news.ycombinator.com | 14 Dec 2021

google-research

Posts with mentions or reviews of google-research. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-10.
  • Show HN: Next-token prediction in JavaScript – build fast LLMs from scratch
    11 projects | news.ycombinator.com | 10 Apr 2024
    People on here will be happy to hear that I do a similar thing; however, my sequence length is dynamic because I also use a second data structure. I'll use pretentious academic speak: I use a simple bigram LM (2-gram) for single next-word likelihood, and separately a trie that models all words and phrases (so, n-gram). Not sure how many total nodes, because sentence lengths vary in the training data, but there are about 200,000 entry points (keys), so probably about 2-10 million total nodes in the default setup.

    "Constructing 7-gram LM": They likely started with bigrams (what I use) which only tells you the next word based on 1 word given, and thought to increase accuracy by modeling out more words in a sequence, and eventually let the user (developer) pass in any amount they want to model (https://github.com/google-research/google-research/blob/5c87...). I thought of this too at first, but I actually got more accuracy (and speed) out of just keeping them as bigrams and making a totally separate structure that models out an n-gram of all phrases (e.g. could be a 24-token long sequence or 100+ tokens etc. I model it all) and if that phrase is found, then I just get the bigram assumption of the last token of the phrase. This works better when the training data is more diverse (for a very generic model), but theirs would probably outperform mine on accuracy when the training data has a lot of nearly identical sentences that only change wildly toward the end - I don't find this pattern in typical data though, maybe for certain coding and other tasks there are those patterns though. But because it's not dynamic and they make you provide that number, even a low number (any phrase longer than 2 words) - theirs will always have to do more lookup work than with simple bigrams and they're also limited by that fixed number as far as accuracy. I wonder how scalable that is - if I need to train on occasional ~100-word long sentences but also (and mostly) just ~3-word long sentences, I guess I set this to 100 and have a mostly "undefined" trie.

    I also thought of the name "LMJS", theirs is "jslm" :) but I went with simply "next-token-prediction" because that's what it ultimately does as a library. I don't know what theirs is really designed for other than proving a concept. Most of their code files are actually comments and hypothetical scenarios.

    I recently added a browser example showing simple autocomplete using my library: https://github.com/bennyschmidt/next-token-prediction/tree/m... (video)

    And next I'm implementing 8-dimensional embeddings that are converted to normalized vectors between 0-1 to see if doing math on them does anything useful beyond similarity; right now they look like this:

      [nextFrequency, prevalence, specificity, length, firstLetter, lastLetter, firstVowel, lastVowel]
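
    As a toy illustration of the two-structure design described above (a hypothetical sketch, not the commenter's library or Google's jslm), a bigram table plus a phrase trie with bigram fallback might look like:

      # Toy sketch: bigram counts for next-word prediction, plus a trie over
      # full phrases; prediction walks the trie, then falls back to bigrams.
      from collections import Counter, defaultdict

      class TwoLevelLM:
          def __init__(self):
              self.bigrams = defaultdict(Counter)  # word -> next-word counts
              self.trie = {}                       # nested dict keyed by tokens

          def train(self, tokens):
              for prev, nxt in zip(tokens, tokens[1:]):
                  self.bigrams[prev][nxt] += 1
              node = self.trie
              for tok in tokens:                   # store the full phrase path
                  node = node.setdefault(tok, {})

          def predict(self, context):
              node, last = self.trie, context[-1]
              for tok in context:                  # match as much of the phrase as possible
                  if tok not in node:
                      break
                  node, last = node[tok], tok
              counts = self.bigrams.get(last)      # bigram guess for the last matched token
              return counts.most_common(1)[0][0] if counts else None

      lm = TwoLevelLM()
      lm.train("the cat sat on the mat".split())
      print(lm.predict("the cat".split()))         # -> "sat"
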
  • Google Research website is down
    1 project | news.ycombinator.com | 5 Apr 2024
  • Jpegli: A New JPEG Coding Library
    9 projects | news.ycombinator.com | 3 Apr 2024
    The change was literally just made: https://github.com/google-research/google-research/commit/4a...

    It appears this was in response to Hacker News comments.

  • Multi-bitrate JPEG compression perceptual evaluation dataset 2023
    1 project | news.ycombinator.com | 31 Jan 2024
  • Vector Databases: A Technical Primer [pdf]
    7 projects | news.ycombinator.com | 12 Jan 2024
    There are options such as Google's ScaNN that may let you go farther before needing to consider specialized databases.

    https://github.com/google-research/google-research/blob/mast...
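
    For a sense of what using ScaNN involves, here is a sketch along the lines of the project's README; the dataset and tuning parameters are illustrative only:

      # pip install scann  (prebuilt wheels are Linux-only)
      import numpy as np
      import scann

      db = np.random.rand(10000, 128).astype(np.float32)
      db /= np.linalg.norm(db, axis=1, keepdims=True)  # normalize for dot product

      searcher = (
          scann.scann_ops_pybind.builder(db, 10, "dot_product")
          .tree(num_leaves=100, num_leaves_to_search=10, training_sample_size=10000)
          .score_ah(2, anisotropic_quantization_threshold=0.2)
          .reorder(30)
          .build()
      )
      neighbors, distances = searcher.search(db[0])    # 10 nearest neighbors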

  • Labs.Google
    1 project | news.ycombinator.com | 22 Dec 2023
    I feel it was unnecessary to create this because https://research.google/ already exists. It just seems like they wanted to grab another URL with a "pure" domain name instead of subdirectories and the like.
  • Smerf: Streamable Memory Efficient Radiance Fields
    3 projects | news.ycombinator.com | 13 Dec 2023
    https://github.com/google-research/google-research/blob/mast...
  • Shisa 7B: a new JA/EN bilingual model based on Mistral 7B
    2 projects | /r/LocalLLaMA | 7 Dec 2023
    You could also try some dedicated translation models like https://huggingface.co/facebook/nllb-moe-54b (or https://github.com/google-research/google-research/tree/master/madlad_400 for something smaller) and see how they do.
  • Translate to and from 400+ languages locally with MADLAD-400
    1 project | /r/LocalLLaMA | 10 Nov 2023
    Google released T5X checkpoints for MADLAD-400 a couple of months ago, but nobody could figure out how to run them. Turns out the vocabulary was wrong, but they uploaded the correct one last week.
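
    The weights are also available in Hugging Face format, which sidesteps T5X entirely. A sketch, assuming the google/madlad400-3b-mt checkpoint and its "<2xx>" target-language prefix convention:

      # Sketch: local translation with a MADLAD-400 checkpoint via transformers.
      from transformers import T5ForConditionalGeneration, T5Tokenizer

      name = "google/madlad400-3b-mt"                  # assumed HF model id
      tokenizer = T5Tokenizer.from_pretrained(name)
      model = T5ForConditionalGeneration.from_pretrained(name)

      # The target language is selected with a "<2xx>" prefix, e.g. German:
      inputs = tokenizer("<2de> How are you today?", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=40)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
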
  • Mastering ROUGE Matrix: Your Guide to Large Language Model Evaluation for Summarization with Examples
    2 projects | dev.to | 8 Oct 2023
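
    The google-research repo is also where the reference ROUGE implementation lives (packaged on PyPI as rouge-score); a minimal usage sketch with illustrative strings:

      # pip install rouge-score  (from google-research/rouge)
      from rouge_score import rouge_scorer

      scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
      scores = scorer.score(
          "the cat was found under the bed",   # reference summary
          "the cat was under the bed",         # candidate summary
      )
      print(scores["rougeL"].fmeasure)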

What are some alternatives?

When comparing bootcamp and google-research you can also consider the following projects:

Milvus - A cloud-native vector database, storage for next generation AI applications

qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/

docarray - Represent, send, store and search multimodal data

fast-soft-sort - Fast Differentiable Sorting and Ranking

es-clip-image-search - Sample implementation of natural language image search with OpenAI's CLIP and Elasticsearch or Opensearch.

faiss - A library for efficient similarity search and clustering of dense vectors.

habitat-sim - A flexible, high-performance 3D simulator for Embodied AI research.

ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk

nn - 🧑‍🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠

struct2depth - Models and examples built with TensorFlow