awesome-ml vs marqo

| | awesome-ml | marqo |
|---|---|---|
| Mentions | 27 | 114 |
| Stars | 1,422 | 4,152 |
| Growth | - | 2.3% |
| Activity | 8.8 | 9.3 |
| Latest commit | 14 days ago | 2 days ago |
| Language | Python | |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-ml
-
AI Infrastructure Landscape
I do something like that for open source:
https://github.com/underlines/awesome-ml
But it lost a bit of traction lately.
The categories need a rework, or better, a tagging system, because these products and libraries can sit in more than one space.
Plus it either needs massive collaboration, or some form of automation (with an LLM and indexer), as I can't keep up with it.
-
OpenVoice: Versatile Instant Voice Cloning
This area is hardly new. Look at how old some of the projects are:
https://github.com/underlines/awesome-ml/blob/master/audio-a...
The thing that changes is the complexity of running it. For fun, I trained my wife's voice and my own; it took 15 minutes of audio and 40 minutes of training on my 3080.
Now it takes 2 minutes.
-
Show HN: Floneum, a graph editor for local AI workflows
Thanks for your clarifications. I added it to my awesome list:
https://github.com/underlines/awesome-marketing-datascience/...
-
AI for AWS Documentation
RAG is very difficult to do right. I am experimenting with various RAG projects from [1]. The main problems are:
- Chunking can interfere with context boundaries
- Content vectors can differ vastly from question vectors; to bridge this, use hypothetical embeddings (generate artificial questions and store their embeddings)
- Instead of saving just one embedding per text chunk, store several (text chunk, hypothetical question embeddings, metadata)
- RAG will fail miserably with requests like "summarize the whole document"
- To my knowledge, OpenAI embeddings don't perform well; use an embedding model that is optimized for question answering or information retrieval and supports multiple languages. Also look into Instructor embeddings: https://github.com/embeddings-benchmark/mteb
[1] https://github.com/underlines/awesome-marketing-datascience/...
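The hypothetical-embeddings idea above can be sketched in a few lines. Everything here is illustrative: the toy bag-of-words `embed()` stands in for a real embedding model, the documents are made up, and in practice an LLM would generate the hypothetical questions.

```python
import math

def embed(text: str) -> dict[str, float]:
    # Toy bag-of-words embedding; a real pipeline would call an embedding model.
    counts: dict[str, float] = {}
    for w in text.lower().replace("?", "").replace(".", "").split():
        counts[w] = counts.get(w, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {w: v / norm for w, v in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(v * b.get(w, 0.0) for w, v in a.items())

# Store several vectors per chunk: the chunk itself plus hypothetical
# questions it could answer (an LLM would generate these in practice).
index = []
for chunk, questions in [
    ("Refunds are accepted within 30 days of purchase.",
     ["How many days do I have to return an order?"]),
    ("Standard shipping takes five business days.",
     ["When will my package arrive?"]),
]:
    for text in [chunk] + questions:
        index.append({"vector": embed(text), "chunk": chunk})

def search(query: str) -> str:
    # The query matches a hypothetical question, which points back to its chunk.
    q = embed(query)
    return max(index, key=lambda e: cosine(q, e["vector"]))["chunk"]

print(search("How many days for returns?"))  # -> the refunds chunk
```

The query phrasing is far closer to the stored hypothetical question than to the chunk text itself, which is exactly why storing multiple vectors per chunk lifts retrieval quality.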
-
Explore and compare the parameters of top-performing LLMs
I do the same, and with 700+ GitHub stars people seem to like it, but it's still curated manually, because the HF search API is so limited and I don't have time to create a scraper.
-
Vicuna v1.3 13B and 7B released, trained with twice the amount of ShareGPT data
Added to the list
-
Useful Links and Info
I keep mine fairly up to date as well, almost daily: https://github.com/underlines/awesome-marketing-datascience/blob/master/README.md
- How to keep track of all the LLMs out there?
-
Run and create custom ChatGPT-like bots with OpenChat
Disclaimer: I am curating LLM tools on GitHub [1]
A few thoughts:
* Allow custom endpoint URLs; that way people can use open-source LLMs behind an OpenAI-compatible API backend like basaran [2] or llama-api-server [3]
* Look into better embedding methods for information retrieval, like Instructor embeddings or a document summary index
* Don't use a single embedding per content item; use multiple embeddings to increase retrieval quality
[1] https://github.com/underlines/awesome-marketing-datascience/...
[2] https://github.com/hyperonym/basaran
[3] https://github.com/iaalm/llama-api-server
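The custom-endpoint suggestion boils down to one knob: the base URL. A minimal stdlib-only sketch, where the URLs, model names, and API key are placeholder assumptions, showing that a hosted API and a local OpenAI-compatible backend differ only in where the request is sent:

```python
import json
from urllib.request import Request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> Request:
    # Build an OpenAI-style chat completion request for any compatible backend.
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Same client code, two backends: only the base URL changes.
hosted = chat_request("https://api.openai.com", "sk-...", "gpt-3.5-turbo", "Hi")
local = chat_request("http://localhost:8000", "unused", "llama-7b-chat", "Hi")
print(local.full_url)  # http://localhost:8000/v1/chat/completions
```

A tool that hard-codes the hosted URL locks its users out of every open-source backend that speaks the same protocol; exposing `base_url` as a setting is all it takes.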
-
Seeking clarification about LLM's, Tools, etc.. for developers.
Oobabooga isn't a wrapper for llama.cpp, but it can act as one. A typical Oobabooga installation on Windows will use a GPTQ wheel (binary) compiled for CUDA/Windows, or alternatively use llama.cpp's API and act as a GUI. On Linux you had the choice of the Triton or CUDA branch for GPTQ, but I don't know if that is still the case. You can also go the route of a virtualized, hardware-accelerated WSL2 Ubuntu on Windows and do anything you would on Linux. See my guide
marqo
-
Are we at peak vector database?
We (Marqo) are doing a lot on 1 and 2. There is a huge amount to be done on the ML side of vector search and we are investing heavily in it. I think it has not quite sunk in that vector search systems are ML systems and everything that comes with that. I would love to chat about 1 and 2 so feel free to email me (email is in my profile). What we have done so far is here -> https://github.com/marqo-ai/marqo
-
Qdrant, the Vector Search Database, raised $28M in a Series A round
Marqo.ai (https://github.com/marqo-ai/marqo) is doing some interesting stuff and is oss. We handle embedding generation as well as retrieval (full disclosure, I work for Marqo.ai)
-
Ask HN: Is there any good semantic search GUI for images or documents?
Take a look here https://github.com/marqo-ai/local-image-search-demo. It is based on https://github.com/marqo-ai/marqo. We do a lot of image search applications. Feel free to reach out if you have other questions (email in profile).
-
90x Faster Than Pgvector – Lantern's HNSW Index Creation Time
That sounds much longer than it should be. I'm not sure about your exact use case, but I would encourage you to check out Marqo (https://github.com/marqo-ai/marqo - disclaimer, I am a co-founder). All inference and orchestration is included (no API calls) and many open-source or fine-tuned models can be used.
-
Embeddings: What they are and why they matter
Try this https://github.com/marqo-ai/marqo which handles all the chunking for you (and is configurable). Also handles chunking of images in an analogous way. This enables highlighting in longer docs and also for images in a single retrieval step.
-
Choosing vector database: a side-by-side comparison
As others have correctly pointed out, making a vector search or recommendation application requires a lot more than similarity alone. We have seen HNSW become commoditised, and the real value lies elsewhere. Just because a database has vector functionality doesn't mean it will actually service anything beyond "hello world" type semantic search applications. IMHO these have questionable value, much like the simple Q&A RAG applications that have proliferated.

The elephant in the room with these systems is that if you are relying on machine learning models to produce the vectors, you are going to need to invest heavily in the ML components of the system. Domain-specific models are a must if you want to be a serious contender to an existing search system, and all the usual considerations still apply regarding frequent retraining and monitoring of the models. Currently this is left as an exercise to the reader - and a very large one at that.

We (https://github.com/marqo-ai/marqo, I am a co-founder) are investing heavily in making the ML production-worthy and in continuous learning from feedback as part of the system. There are lots of other things to think about: how you represent documents with multiple vectors, multimodality, late interaction, the interplay between embedding quality and HNSW graph quality (i.e. recall), and much more.
- Show HN: Marqo – Vectorless Vector Search
-
AI for AWS Documentation
Marqo provides automatic, configurable chunking (for example with overlap) and lets you bring your own model or choose from a wide range of open-source models. I think e5-large would be a good one to try. https://github.com/marqo-ai/marqo
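Marqo does this internally, but overlapped chunking itself is easy to picture. A minimal sketch: the window sizes here are arbitrary assumptions, and real implementations split on tokens or sentences rather than whitespace words.

```python
def chunk_words(text: str, size: int = 5, overlap: int = 2) -> list[str]:
    # Slide a fixed-size word window over the text, stepping by size - overlap
    # so consecutive chunks share `overlap` words of context.
    assert 0 <= overlap < size
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

doc = "one two three four five six seven eight nine ten"
print(chunk_words(doc))
# ['one two three four five', 'four five six seven eight', 'seven eight nine ten']
```

The shared words at each boundary are what keep a sentence that straddles two chunks retrievable from either side.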
-
[N] Open-source search engine Meilisearch launches vector search
Marqo has a similar API to Meilisearch's standard API but uses vector search in the background: https://github.com/marqo-ai/marqo
-
Ask HN: Which Vector Database do you recommend for LLM applications?
Have you tried Marqo? Check the repo: https://github.com/marqo-ai/marqo
What are some alternatives?
anything-llm - The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
OpenChat - LLMs custom-chatbots console ⚡
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
AGiXT - AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
Milvus - A cloud-native vector database, storage for next generation AI applications
llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
mnotify - A matrix cli client
vault-ai - OP Vault ChatGPT: Give ChatGPT long-term memory using the OP Stack (OpenAI + Pinecone Vector Database). Upload your own custom knowledge base files (PDF, txt, epub, etc) using a simple React frontend.
mteb - MTEB: Massive Text Embedding Benchmark
marqo - Tensor search for humans. [Moved to: https://github.com/marqo-ai/marqo]