| | marqo | openai-cookbook |
|---|---|---|
| Mentions | 114 | 215 |
| Stars | 4,124 | 55,954 |
| Growth | 1.6% | 1.0% |
| Activity | 9.3 | 9.5 |
| Last commit | 5 days ago | 5 days ago |
| Language | Python | MDX |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
marqo
- Are we at peak vector database?
We (Marqo) are doing a lot on 1 and 2. There is a huge amount to be done on the ML side of vector search, and we are investing heavily in it. I think it has not quite sunk in that vector search systems are ML systems, with everything that comes with that. I would love to chat about 1 and 2, so feel free to email me (email is in my profile). What we have done so far is here -> https://github.com/marqo-ai/marqo
- Qdrant, the Vector Search Database, raised $28M in a Series A round
Marqo.ai (https://github.com/marqo-ai/marqo) is doing some interesting stuff and is oss. We handle embedding generation as well as retrieval (full disclosure, I work for Marqo.ai)
- Ask HN: Is there any good semantic search GUI for images or documents?
Take a look here https://github.com/marqo-ai/local-image-search-demo. It is based on https://github.com/marqo-ai/marqo. We do a lot of image search applications. Feel free to reach out if you have other questions (email in profile).
- 90x Faster Than Pgvector – Lantern's HNSW Index Creation Time
That sounds much longer than it should. I am not sure of your exact use case, but I would encourage you to check out Marqo (https://github.com/marqo-ai/marqo - disclaimer, I am a co-founder). All inference and orchestration is included (no API calls), and many open-source or fine-tuned models can be used.
- Embeddings: What they are and why they matter
Try this: https://github.com/marqo-ai/marqo, which handles all the chunking for you (and is configurable). It also handles chunking of images in an analogous way. This enables highlighting in longer docs, and likewise for images, in a single retrieval step.
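The overlapping chunking described above can be sketched in plain Python. This is an illustrative sliding-window chunker, not Marqo's actual implementation; the function name and parameters (`chunk_text`, `size`, `overlap`) are made up for the example:

```python
def chunk_text(text, size=3, overlap=1):
    """Split text into chunks of `size` sentences, each chunk
    sharing `overlap` sentences with the previous one.

    Illustrative only -- Marqo's chunker is configurable in a
    similar spirit, but this is not its real code.
    """
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    step = size - overlap
    chunks = []
    for start in range(0, len(sentences), step):
        chunks.append(". ".join(sentences[start:start + size]))
        if start + size >= len(sentences):
            break
    return chunks

doc = "One. Two. Three. Four. Five."
print(chunk_text(doc, size=2, overlap=1))
# → ['One. Two', 'Two. Three', 'Three. Four', 'Four. Five']
```

Each chunk gets its own embedding, which is what makes highlighting possible: the best-matching chunk pinpoints where in a long document the hit occurred.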
- Choosing vector database: a side-by-side comparison
As others have correctly pointed out, making a vector search or recommendation application requires a lot more than similarity alone. We have seen HNSW become commoditised, and the real value lies elsewhere. Just because a database has vector functionality doesn't mean it will actually service anything beyond "hello world" semantic search applications. IMHO these have questionable value, much like the simple Q&A RAG applications that have proliferated.

The elephant in the room with these systems is that if you are relying on machine learning models to produce the vectors, you are going to need to invest heavily in the ML components of the system. Domain-specific models are a must if you want to be a serious contender to an existing search system, and all the usual considerations still apply regarding frequent retraining and monitoring of the models. Currently this is left as an exercise for the reader - and a very large one at that.

We (https://github.com/marqo-ai/marqo, I am a co-founder) are investing heavily in making the ML production-worthy and in continuous learning from feedback as part of the system. There is a lot more to think about: how you represent documents with multiple vectors, multimodality, late interaction, the interplay between embedding quality and HNSW graph quality (i.e. recall), and much more.
- Show HN: Marqo – Vectorless Vector Search
- AI for AWS Documentation
Marqo provides automatic, configurable chunking (for example, with overlap) and lets you bring your own model or choose from a wide range of open-source models. I think e5-large would be a good one to try. https://github.com/marqo-ai/marqo
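For context, the configurable chunking and model choice mentioned above live in Marqo's index settings. A sketch of what such a settings document looked like around the time of this comment (sentence-based chunks of length 2 with an overlap of 1); the exact field names and model key should be checked against the current Marqo docs:

```json
{
  "index_defaults": {
    "model": "hf/e5-large",
    "text_preprocessing": {
      "split_length": 2,
      "split_overlap": 1,
      "split_method": "sentence"
    }
  }
}
```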
- [N] Open-source search engine Meilisearch launches vector search
Marqo has a similar API to Meilisearch's standard API but uses vector search in the background: https://github.com/marqo-ai/marqo
- Ask HN: Which Vector Database do you recommend for LLM applications?
Have you tried Marqo? Check the repo: https://github.com/marqo-ai/marqo
openai-cookbook
- Question-Answer System Architectures using LLMs
A pretrained LLM is a closed-book system: it can only access information that it was trained on. With domain fine-tuning, the system can surface additional material. An early prototype of this technique was shown in the OpenAI cookbook: text from the target domain was embedded using an API, and then at question time the most semantically similar embeddings were retrieved to help the LLM formulate an answer. Although this approach evolved into retrieval-augmented generation, it is still a technique to adapt a Gen2 (2020) or Gen3 (2022) LLM into a question-answering system.
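The retrieval step described above can be sketched with toy vectors: embed the corpus, embed the question, rank passages by cosine similarity, and paste the top hits into the prompt. The embeddings below are made up for illustration; a real system would obtain them from an embedding model:

```python
import numpy as np

# Toy corpus with made-up 3-d "embeddings"; a real system would
# get these vectors from an embedding model API.
corpus = {
    "The capital of France is Paris.": np.array([0.9, 0.1, 0.0]),
    "Photosynthesis occurs in chloroplasts.": np.array([0.0, 0.9, 0.4]),
    "The Seine flows through Paris.": np.array([0.8, 0.2, 0.1]),
}

def top_k(question_vec, k=2):
    """Rank passages by cosine similarity to the question vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(corpus, key=lambda t: cos(corpus[t], question_vec),
                    reverse=True)
    return ranked[:k]

# Pretend embedding of "What is the capital of France?"
question_vec = np.array([1.0, 0.0, 0.0])
context = "\n".join(top_k(question_vec))
prompt = (f"Answer using only this context:\n{context}\n\n"
          f"Q: What is the capital of France?")
print(prompt)
```

The prompt built at the end is what gets sent to the LLM; the model never sees the full corpus, only the retrieved passages.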
- Ask HN: High quality Python scripts or small libraries to learn from
https://github.com/openai/openai-cookbook/blob/main/examples...
- Collection of notebooks showcasing some fun and effective ways of using Claude
- OpenAI Cookbook: Techniques to improve reliability
- OpenAI Cookbooks
- How to fine-tune ViT/ConvNet to focus on the layout of the input room image and ignore other things?
It sounds like you are trying to tweak embeddings for similarity search. Rather than fine-tuning the model's layers, you may want to try training a linear transformation on top of the existing model's output embeddings. OpenAI has a cookbook on how to do that. You will need some data, though - but I think you can try it with ~20 pieces of synthetically generated data.
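The idea above - keep the model frozen and learn only a matrix on top of its embeddings - can be sketched with synthetic data. Here a known linear transform is recovered from ~20 examples by least squares; the cookbook's version instead optimises the matrix against similar/dissimilar pair labels, but the "linear layer on frozen embeddings" shape is the same. All data and the `true_W` matrix are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "model" embeddings: 20 synthetic 4-d vectors.
X = rng.normal(size=(20, 4))

# Pretend the task-adapted embedding is a linear function of the
# original one; true_W is the matrix we hope to recover from data.
true_W = rng.normal(size=(4, 4))
Y = X @ true_W

# Learn the linear transformation from the (input, target) pairs.
# The model itself is never touched -- only W is trained.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(np.allclose(W, true_W, atol=1e-6))
```

At search time you would multiply every embedding by the learned `W` before computing similarities, which is far cheaper than fine-tuning the network and works with small amounts of data.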
- Best base model 1B or 7B for full finetuning
tutorial from OpenAI https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb
- Resources to learn ChatGPT and the OpenAI API
OpenAI Cookbook
- OpenAI Cookbook
- Another Major Outage Across ChatGPT and API
OpenAI community repo with lots of examples: https://github.com/openai/openai-cookbook
What are some alternatives?
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
Milvus - A cloud-native vector database, storage for next generation AI applications
chatgpt-retrieval-plugin - The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
askai - Command Line Interface for OpenAI ChatGPT
vault-ai - OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc) using a simple React frontend.
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data. [Moved to: https://github.com/jerryjliu/llama_index]
marqo - Tensor search for humans. [Moved to: https://github.com/marqo-ai/marqo]
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows