|  | rellm | awesome-ml |
|---|---|---|
| Mentions | 7 | 27 |
| Stars | 491 | 1,434 |
| Growth | - | - |
| Activity | 5.0 | 8.8 |
| Latest commit | 9 months ago | 9 days ago |
| Language | Python | - |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rellm
-
Run and create custom ChatGPT-like bots with OpenChat
- https://github.com/r2d4/rellm
-
Forcing GPT-4 or GPT-3.5-turbo to adhere to a specific output format
MS guidance as mentioned and ReLLM
- GitHub - r2d4/rellm: Exact structure out of any language model completion.
- AI Showdown: Wizard Vicuna vs. Stable Vicuna, GPT-4 as the judge (test in comments)
-
ReLLM: Exact Structure for Large Language Model Completions
There's probably a better API that wraps generate, but there's a bit more work than the logit mask.
You have to go one token at a time, otherwise the masking becomes combinatorial rather than linear (two tokens at a time means you'd need to generate all two-token pairs, and so on).
But otherwise, that's what the code does! https://github.com/r2d4/rellm/blob/main/rellm/rellm.py#L21
- r2d4/rellm: Exact structure out of any language model completion.
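The token-at-a-time masking loop described above can be sketched with a toy stand-in. This is not rellm's actual API: a brute-force `can_complete` check replaces its incremental partial-regex matching, and picking the first allowed token replaces sampling from masked logits; vocabulary, pattern, and all names are illustrative.

```python
import re

VOCAB = ["Yes", "No", "Maybe", ".", " ", "!"]

def can_complete(prefix, pattern, vocab, depth):
    # Brute-force stand-in for partial regex matching: can `prefix`
    # still be extended (within `depth` more tokens) into a string
    # that fully matches `pattern`?
    if re.fullmatch(pattern, prefix):
        return True
    if depth == 0:
        return False
    return any(can_complete(prefix + t, pattern, vocab, depth - 1)
               for t in vocab)

def allowed_tokens(prefix, pattern, vocab=VOCAB, depth=3):
    # The "logit mask": only tokens that keep a full match reachable.
    return [t for t in vocab if can_complete(prefix + t, pattern, vocab, depth)]

def constrained_decode(pattern, vocab=VOCAB, max_tokens=5):
    # Greedy one-token-at-a-time loop: mask, then pick the first
    # surviving token (standing in for the model's argmax logit).
    prefix = ""
    for _ in range(max_tokens):
        if re.fullmatch(pattern, prefix):
            return prefix
        choices = allowed_tokens(prefix, pattern, vocab)
        if not choices:
            break
        prefix += choices[0]
    return prefix
```

A real implementation avoids the brute force by asking the regex engine whether the prefix can still partially match, which is what keeps the per-token cost linear instead of combinatorial.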
awesome-ml
-
AI Infrastructure Landscape
I do something like that for open source:
https://github.com/underlines/awesome-ml
But it lost a bit of traction lately.
It needs a rework of the categories, or better, a tagging system, because these products and libraries can sit in more than one space.
Plus it either needs massive collaboration or some form of automation (with an LLM and an indexer), as I can't keep up with it.
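The tagging idea could look like this minimal sketch; the project names come from elsewhere on this page, but the tags themselves are invented for illustration.

```python
# Hypothetical tagged catalog: each project carries several tags
# instead of living in exactly one category.
CATALOG = {
    "basaran": {"inference", "openai-compatible", "serving"},
    "mteb": {"embeddings", "benchmark"},
    "rellm": {"inference", "constrained-decoding"},
}

def by_tag(tag):
    # One project can surface under any of its tags, so it is
    # listed in every space it belongs to.
    return sorted(name for name, tags in CATALOG.items() if tag in tags)
```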
-
OpenVoice: Versatile Instant Voice Cloning
This area is hardly new. Look at how old some of the projects are:
https://github.com/underlines/awesome-ml/blob/master/audio-a...
What changes is how hard it is to run. I was training my wife's voice and my own for fun; it needed 15 minutes of audio and trained on my 3080 for 40 minutes.
Now it's 2 minutes.
-
Show HN: Floneum, a graph editor for local AI workflows
Thanks for your clarifications. I added it to my awesome list:
https://github.com/underlines/awesome-marketing-datascience/...
-
AI for AWS Documentation
RAG is very difficult to do right. I am experimenting with various RAG projects from [1]. The main problems are:
- Chunking can interfere with context boundaries
- Content vectors can differ vastly from question vectors; to handle this you have to use hypothetical embeddings (generate artificial questions and store them)
- Instead of saving just one embedding per text chunk, you should store several (text chunk, hypothetical question embeddings, metadata)
- RAG will fail miserably with requests like "summarize the whole document"
- To my knowledge, OpenAI embeddings don't perform well; use an embedding model that is optimized for question answering or information retrieval and supports multiple languages. Also look into Instructor embeddings: https://github.com/embeddings-benchmark/mteb
1 https://github.com/underlines/awesome-marketing-datascience/...
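The "several embeddings per chunk" idea above can be sketched with a toy bag-of-words embedding standing in for a real retrieval-tuned model. Everything here is hypothetical: the chunks, the hand-written questions (which an LLM would generate in practice), and the function names.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model tuned for retrieval instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = []

def add_chunk(chunk, questions, metadata):
    # Index the chunk under several vectors: its own text plus the
    # hypothetical questions it answers.
    for text in [chunk, *questions]:
        index.append((embed(text), chunk, metadata))

def search(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda e: cosine(q, e[0]), reverse=True)
    seen, hits = set(), []
    for _, chunk, meta in ranked:
        if chunk not in seen:          # deduplicate chunks
            seen.add(chunk)
            hits.append((chunk, meta))
    return hits[:k]

add_chunk("The instance limit defaults to 20 per region.",
          ["How many instances can I run?"], {"doc": "quotas"})
add_chunk("Billing happens at the end of each month.",
          ["When am I charged?"], {"doc": "billing"})
```

The point of the extra vectors: the first query below shares almost no words with its chunk, yet still retrieves it, because it matches the stored hypothetical question instead.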
-
Explore and compare the parameters of top-performing LLMs
I do the same, and with currently 700+ GitHub stars people seem to like it, but it's still curated manually, because the HF search API is so limited and I don't have the time to create a scraper.
-
Vicuna v1.3 13B and 7B released, trained with twice the amount of ShareGPT data
Added to the list
-
Useful Links and Info
I keep mine fairly up to date as well, almost daily: https://github.com/underlines/awesome-marketing-datascience/blob/master/README.md
- How to keep track of all the LLMs out there?
-
Run and create custom ChatGPT-like bots with OpenChat
Disclaimer: I am curating LLM-tools on github [1]
A few thoughts:
* allow custom endpoint URLs, so people can use open-source LLMs with a fake OpenAI API backend like basaran [2] or llama-api-server [3]
* look into better embedding methods for information retrieval, like Instructor embeddings or Document Summary Index
* Don't use a single embedding per content item, use multiple to increase retrieval quality
1 https://github.com/underlines/awesome-marketing-datascience/...
2 https://github.com/hyperonym/basaran
3 https://github.com/iaalm/llama-api-server
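Pointing a client at such a backend mostly means swapping the base URL, since basaran and llama-api-server both expose an OpenAI-compatible `/v1` route. Here is a minimal sketch that builds the request an OpenAI client would send, aimed at a self-hosted server; the URL, port, and model name are placeholders, and nothing is actually sent.

```python
import json
import urllib.request

# Hypothetical local endpoint; the same client code would work
# against any OpenAI-compatible backend.
BASE_URL = "http://localhost:8000/v1"

def completion_request(prompt, model="local-model", max_tokens=64):
    # Build the POST request for the /v1/completions route without
    # sending it, so the payload can be inspected.
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```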
-
Seeking clarification about LLM's, Tools, etc.. for developers.
Oobabooga isn't a wrapper for llama.cpp, but it can act as one. A typical Oobabooga installation on Windows uses a GPTQ wheel (binary) compiled for CUDA/Windows, or alternatively uses llama.cpp's API with Oobabooga acting as a GUI. On Linux you had the choice of the Triton or CUDA branch for GPTQ, but I don't know if that is still the case. You can also use a virtualized, hardware-accelerated WSL2 Ubuntu on Windows and do anything you would on Linux. See my guide
What are some alternatives?
OpenChat - LLMs custom-chatbots console ⚡
anything-llm - The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
gpt-jargon - Jargon is a natural language programming language specified and executed by LLMs like GPT-4.
convostack - Plug and play embeddable AI chatbot widget and backend deployment framework
AGiXT - AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
llama-mps - Experimental fork of Facebook's LLaMA model which runs it with GPU acceleration on Apple Silicon M1/M2
llama-api-server - An OpenAI-API-compatible REST server for LLaMA.
mnotify - A matrix cli client
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
mteb - MTEB: Massive Text Embedding Benchmark