awesome-ml vs anything-llm

| | awesome-ml | anything-llm |
|---|---|---|
| Mentions | 27 | 21 |
| Stars | 1,422 | 12,420 |
| Growth | - | 22.1% |
| Activity | 8.8 | 9.8 |
| Latest commit | 14 days ago | 7 days ago |
| Language | - | JavaScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-ml
-
AI Infrastructure Landscape
I do something like that for open source:
https://github.com/underlines/awesome-ml
But it has lost a bit of traction lately.
It needs a rework of the categories, or better, a tagging system, because these products and libraries can sit in more than one space.
Plus it either needs massive collaboration or some form of automation (with an LLM and an indexer), as I can't keep up with it.
-
OpenVoice: Versatile Instant Voice Cloning
This area is hardly new. Look at how old some of the projects are:
https://github.com/underlines/awesome-ml/blob/master/audio-a...
What changes is how hard it is to run. I trained my wife's voice and my own for fun: it needed 15 minutes of audio and 40 minutes of training on my 3080.
Now it takes 2 minutes.
-
Show HN: Floneum, a graph editor for local AI workflows
Thanks for your clarifications. I added it to my awesome list:
https://github.com/underlines/awesome-marketing-datascience/...
-
AI for AWS Documentation
RAG is very difficult to do right. I am experimenting with various RAG projects from [1]. The main problems are:
- Chunking can interfere with context boundaries
- Content vectors can differ vastly from question vectors; for this you have to use hypothetical embeddings (generate artificial questions and store their embeddings)
- Instead of saving just one embedding per text chunk, you should store several (text chunk, hypothetical question embeddings, metadata)
- RAG will fail miserably with requests like "summarize the whole document"
- To my knowledge, OpenAI embeddings aren't performing well; use an embedding model that is optimized for question answering or information retrieval and supports multiple languages. Also look into instructor embeddings: https://github.com/embeddings-benchmark/mteb
1 https://github.com/underlines/awesome-marketing-datascience/...
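The hypothetical-embeddings idea above can be sketched as follows. This is a minimal illustration, not any project's real pipeline: `embed()` is a toy bag-of-words stand-in for an actual embedding model, and the questions would normally be generated by an LLM rather than written by hand.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so the sketch runs without a model.
    # A real system would call a retrieval-tuned embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def index_chunk(store, chunk, questions, metadata):
    # One entry per vector: the chunk itself plus each hypothetical
    # question it could answer. All entries point back at the same chunk.
    for text in [chunk, *questions]:
        store.append({"vector": embed(text), "chunk": chunk, "meta": metadata})

def retrieve(store, query, k=1):
    ranked = sorted(store, key=lambda e: cosine(e["vector"], embed(query)),
                    reverse=True)
    # Deduplicate: several stored vectors may map to the same chunk.
    seen, hits = set(), []
    for e in ranked:
        if e["chunk"] not in seen:
            seen.add(e["chunk"])
            hits.append(e["chunk"])
    return hits[:k]

store = []
index_chunk(store,
            "The invoice module retries failed payments three times.",
            ["How many times are failed payments retried?"],
            {"doc": "billing.md"})
index_chunk(store,
            "Vector search uses cosine similarity over embeddings.",
            ["Which similarity metric does vector search use?"],
            {"doc": "search.md"})

print(retrieve(store, "How many times are failed payments retried?"))
```

The point of the extra question vectors: a user's query often looks much more like a question than like the declarative chunk text, so matching query-to-question closes that gap.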
-
Explore and compare the parameters of top-performing LLMs
I do the same, and with currently 700+ GitHub stars people seem to like it, but it's still curated manually, because the HF search API is so limited and I don't have the time to create a scraper.
-
Vicuna v1.3 13B and 7B released, trained with twice the amount of ShareGPT data
Added to the list
-
Useful Links and Info
I keep mine fairly up to date as well, almost daily: https://github.com/underlines/awesome-marketing-datascience/blob/master/README.md
- How to keep track of all the LLMs out there?
-
Run and create custom ChatGPT-like bots with OpenChat
Disclaimer: I am curating LLM tools on GitHub [1]
A few thoughts:
* allow custom endpoint URLs, so people can use open-source LLMs behind a drop-in OpenAI API replacement like basaran [2] or llama-api-server [3]
* look into better embedding methods for info-retrieval like InstructorEmbeddings or Document Summary Index
* Don't use a single embedding per content item, use multiple to increase retrieval quality
1 https://github.com/underlines/awesome-marketing-datascience/...
2 https://github.com/hyperonym/basaran
3 https://github.com/iaalm/llama-api-server
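Supporting a custom endpoint usually amounts to swapping the base URL in front of the same OpenAI-style `/v1/chat/completions` route. A minimal sketch, assuming a local basaran or llama-api-server instance on port 8000; `build_request` and the model name are illustrative, not taken from either project:

```python
import json
from urllib import request

def build_request(base_url: str, model: str, messages: list,
                  api_key: str = "not-needed"):
    # Servers like basaran expose an OpenAI-compatible route, so only
    # the host portion of the URL changes.
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return request.Request(url, data=body, headers={
        "Content-Type": "application/json",
        # Local backends typically ignore the key but expect the header.
        "Authorization": f"Bearer {api_key}",
    })

req = build_request("http://localhost:8000",
                    "llama-2-7b-chat",
                    [{"role": "user", "content": "Hello"}])
print(req.full_url)
```

Sending the request (with `request.urlopen(req)`) is left out so the sketch runs without a server; the payload shape is what a fake-OpenAI backend expects.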
-
Seeking clarification about LLM's, Tools, etc.. for developers.
Oobabooga isn't a wrapper for llama.cpp, but it can act as one. A typical Oobabooga installation on Windows uses a GPTQ wheel (binary) compiled for CUDA/Windows, or alternatively uses llama.cpp's API and acts as a GUI for it. On Linux you used to have the choice between the Triton and CUDA branches of GPTQ, but I don't know if that is still the case. You can also go the route of virtualized, hardware-accelerated WSL2 Ubuntu on Windows and do anything you would on Linux. See my guide
anything-llm
- AnythingLLM: Chat with your documents using any LLM
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
anything-llm looks pretty interesting and easy to use https://github.com/Mintplex-Labs/anything-llm
-
local/private llm based chatbot using free/open source tools.
You can just fork AnythingLLM for a very advanced starting point, or just straight rip the code I've already written to build yours 🚀
-
Some solutions that work on older intel macs
AnythingLLM also works on an Intel Mac (I develop it on an Intel Mac) and can use any GGUF model for local inference. It includes document embedding plus a local vector database, so I can chat with documents and even code inside of it. Pretty much a ChatGPT equivalent I can run locally via the repo or Docker.
-
What tools or programs have you made or are working on?
If you want a UI you can leverage https://github.com/Mintplex-Labs/anything-llm and do all your coding in localhost with a locally running model.
- Web interface for Azure Open Ai
- DIY custom AI chatbot trained on your company data
What are some alternatives?
OpenChat - LLMs custom-chatbots console ⚡
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
AGiXT - AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
privateGPT - Interact with your documents using the power of GPT, 100% privately, no data leaks [Moved to: https://github.com/zylon-ai/private-gpt]
llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2
LLMStack - No-code platform to build LLM Agents, workflows and applications with your data
mnotify - A matrix cli client
gpt4all - gpt4all: run open-source LLMs anywhere
mteb - MTEB: Massive Text Embedding Benchmark
CSharp-ChatBot-GPT - This repository contains a simple C# chatbot powered by OpenAI’s ChatGPT. The chatbot utilizes the RestSharp and Newtonsoft.Json libraries to interact with the ChatGPT API and process user input.
simonwillisonblog - The source code behind my blog
ChainFury - 🦋 Production grade chaining engine behind TuneChat. Self host today!