| | awesome-ml | one-click-installers |
|---|---|---|
| Mentions | 27 | 18 |
| Stars | 1,422 | 470 |
| Growth | - | - |
| Activity | 8.8 | 8.9 |
| Latest commit | 14 days ago | 8 months ago |
| Language | Python | |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
awesome-ml
-
AI Infrastructure Landscape
I do something like that for open source:
https://github.com/underlines/awesome-ml
But it lost a bit of traction lately.
The categories need a rework, or better, a tagging system, because these products and libraries can sit in more than one space.
Plus it either needs massive collaboration, or some form of automation (with an LLM and indexer), as I can't keep up with it.
-
OpenVoice: Versatile Instant Voice Cloning
This area is hardly new. Look at how old some of the projects are:
https://github.com/underlines/awesome-ml/blob/master/audio-a...
What changes is how complex it is to run. For fun, I trained my wife's voice and my own: it took 15 minutes of audio and 40 minutes of training on my 3080.
Now it takes 2 minutes.
-
Show HN: Floneum, a graph editor for local AI workflows
Thanks for your clarifications. I added it to my awesome list:
https://github.com/underlines/awesome-marketing-datascience/...
-
AI for AWS Documentation
RAG is very difficult to do right. I am experimenting with various RAG projects from [1]. The main problems are:
- Chunking can interfere with context boundaries
- Content vectors can differ vastly from question vectors; to address this you have to use hypothetical embeddings (generate artificial questions and store them)
- Instead of saving just one embedding per text chunk, you should store several (text chunk, hypothetical question embeddings, metadata)
- RAG will fail miserably on requests like "summarize the whole document"
- To my knowledge, OpenAI embeddings don't perform well; use an embedding model optimized for question answering or information retrieval that supports multiple languages. Also look into instructor embeddings: https://github.com/embeddings-benchmark/mteb
1 https://github.com/underlines/awesome-marketing-datascience/...
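The hypothetical-embedding and multiple-vectors points can be sketched together. Everything below is a toy illustration: `embed` is a bigram-hashing stand-in for a real embedding model, and the questions would normally be generated by an LLM, not written by hand:

```python
import math
from dataclasses import dataclass, field

DIM = 64

def embed(text: str) -> list[float]:
    # Toy stand-in for an embedding model: hash character bigrams
    # into a fixed-size vector and L2-normalize it.
    vec = [0.0] * DIM
    t = text.lower()
    for a, b in zip(t, t[1:]):
        vec[(ord(a) * 31 + ord(b)) % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

@dataclass
class IndexedChunk:
    text: str
    metadata: dict
    vectors: list = field(default_factory=list)

def index_chunk(text: str, questions: list[str], metadata: dict) -> IndexedChunk:
    # Store several embeddings per chunk: the chunk text itself plus
    # hypothetical questions it answers (in practice LLM-generated).
    return IndexedChunk(text, metadata,
                        [embed(text)] + [embed(q) for q in questions])

def retrieve(query: str, chunks: list[IndexedChunk]) -> IndexedChunk:
    # Score each chunk by its best-matching stored vector, so a
    # question-shaped query can hit a question embedding even when
    # the content embedding is far from the query.
    qv = embed(query)
    return max(chunks, key=lambda c: max(cosine(qv, v) for v in c.vectors))

chunks = [
    index_chunk(
        "The refund window is 30 days from delivery.",
        ["How long do I have to return an item?"],
        {"doc": "policy.md"},
    ),
    index_chunk(
        "Shipping is free for orders over 50 euros.",
        ["When is shipping free?"],
        {"doc": "shipping.md"},
    ),
]

best = retrieve("How many days do I have to return a product?", chunks)
```

The key design choice is the `max` over per-chunk vectors at query time: the chunk is returned if *any* of its stored embeddings matches, which is what makes hypothetical questions useful.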
-
Explore and compare the parameters of top-performing LLMs
I do the same, and with 700+ GitHub stars people seem to like it, but it's still curated manually, because the HF search API is so limited and I don't have the time to write a scraper.
-
Vicuna v1.3 13B and 7B released, trained with twice the amount of ShareGPT data
Added to the list
-
Useful Links and Info
I keep mine fairly up to date as well, almost daily: https://github.com/underlines/awesome-marketing-datascience/blob/master/README.md
- How to keep track of all the LLMs out there?
-
Run and create custom ChatGPT-like bots with OpenChat
Disclaimer: I am curating LLM-tools on github [1]
A few thoughts:
* allow custom endpoint URLs, so people can use open-source LLMs behind a fake OpenAI API backend like basaran [2] or llama-api-server [3]
* look into better embedding methods for information retrieval, like InstructorEmbeddings or a Document Summary Index
* don't use a single embedding per content item; use multiple to increase retrieval quality
1 https://github.com/underlines/awesome-marketing-datascience/...
2 https://github.com/hyperonym/basaran
3 https://github.com/iaalm/llama-api-server
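The custom-endpoint suggestion works because those backends expose an OpenAI-compatible REST shape, so a client only needs a configurable base URL. A minimal sketch using only the standard library; the localhost port and model names are placeholders, not real endpoints:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str,
                       messages: list[dict]) -> urllib.request.Request:
    # An OpenAI-style chat completion request. Swapping base_url between
    # api.openai.com and a local OpenAI-compatible server is all a client
    # needs to do to support open-source backends.
    url = base_url.rstrip("/") + "/v1/chat/completions"
    return urllib.request.Request(
        url,
        data=json.dumps({"model": model, "messages": messages}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Same client code, different backend: only the base URL changes.
openai_req = build_chat_request(
    "https://api.openai.com", "gpt-3.5-turbo",
    [{"role": "user", "content": "hi"}],
)
local_req = build_chat_request(
    "http://localhost:8000", "local-model",  # e.g. a local compatible server
    [{"role": "user", "content": "hi"}],
)
```

Neither request is sent here; the point is that the payload and route are identical and only the host differs, which is why a single base-URL setting is enough.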
-
Seeking clarification about LLM's, Tools, etc.. for developers.
Oobabooga isn't a wrapper for llama.cpp, but it can act as one. A typical Oobabooga installation on Windows uses a GPTQ wheel (binary) compiled for CUDA/Windows, or alternatively uses llama.cpp's API and acts as a GUI. On Linux you used to have the choice between the triton and cuda branches of GPTQ, but I don't know if that is still the case. You can also go the route of a virtualized, hardware-accelerated WSL2 Ubuntu on Windows and do anything you would on Linux. See my guide
one-click-installers
-
amd gpus on windows support?
AMD does not offer installation options for ROCm on Windows. I'm not familiar with the workarounds to make it work; if you find a solution, you can contribute it to https://github.com/oobabooga/one-click-installers/
-
Oobabooga for Windows
Running start_windows.bat should take care of everything.
-
Quant-Cude Error?
Had the same issue; it turns out I was using an old one-click installer/updater. You need to use https://github.com/oobabooga/one-click-installers and reinstall everything from scratch.
-
Cant find the "start: file.
Are you sure you're looking at the right folder? start_windows.bat is there. It's listed in the source code: https://github.com/oobabooga/one-click-installers
- Any UI that allows Windows + AMD GPU ?
- WizardLM-30B-Uncensored
-
13b-4bit-128g - Trying to run compressed model without success. ( problem exist only with 13b models for some reason ) No error code has been displayed.
one-click-installers/INSTRUCTIONS.TXT
-
GPT4All: A little helper to get started
They explain it here: https://github.com/oobabooga/one-click-installers/issues/56
-
Visual Studio compile errors
I solved this by adding the following Individual components: 2019 Windows 10 SDK, C++ CMake tools for Windows, and MSVC v142 - VS 2019 C++ build tools. See https://github.com/oobabooga/one-click-installers/issues/56
-
python setup.py bdist_wheel did not run successfully.
It appears one of the extensions isn't pre-compiled on install. I believe you have the same problem as the one described here: https://github.com/oobabooga/one-click-installers/issues/56
What are some alternatives?
anything-llm - The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ
OpenChat - LLMs custom-chatbots console ⚡
gpt4all - gpt4all: run open-source LLMs anywhere
AGiXT - AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
gradio - Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
llama-mps - Experimental fork of Facebook's LLaMa model which runs it with GPU acceleration on Apple Silicon M1/M2
KoboldAI
mnotify - A matrix cli client
WizardVicunaLM - LLM that combines the principles of wizardLM and vicunaLM
mteb - MTEB: Massive Text Embedding Benchmark
micromamba-releases - Micromamba executables mirrored from conda-forge as Github releases