MS-AMP vs petals

| Metric | MS-AMP | petals |
|---|---|---|
| Mentions | 1 | 99 |
| Stars | 471 | 8,781 |
| Growth | 3.2% | 1.4% |
| Activity | 7.4 | 7.9 |
| Latest commit | 2 months ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Recent posts mentioning petals:
- Chameleon: Meta's New Multi-Modal LLM
Things like [petals](https://github.com/bigscience-workshop/petals) already exist: distributed computing over willing participants. Right now corporate cash is being rammed into the space, so why not snap it up while you can? But the moment it dries up, projects like petals will see more of the love they deserve.
I envision a future where crypto-style booms happen over tokens that buy priority computational time, earned by providing that same computational time. That way, researchers could daisy-chain their independent smaller rigs into something with gargantuan capabilities.
- Mistral Large
So how long until we can build an open-source Mistral Large? We could make a start on Petals [0] or some other open-source distributed training cluster; see the client-side sketch below for what that would look like.
[0] https://petals.dev/
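For a sense of what "making a start" means in practice, here is a minimal inference sketch based on the usage shown in the Petals README. The model repo name is illustrative, and a swarm serving that model must be reachable:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Illustrative model repo; any model served by the swarm works the same way.
model_name = "petals-team/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only the embeddings and LM head load locally; the transformer blocks
# run remotely on swarm peers, BitTorrent-style.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```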
- Distributed Inference and Fine-Tuning of Large Language Models over the Internet
You can check out their project at https://github.com/bigscience-workshop/petals; a rough fine-tuning sketch follows below.
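The fine-tuning side is parameter-efficient: the remote transformer blocks stay frozen while small trainable parameters live on the client. A rough sketch, assuming the prompt-tuning options (`tuning_mode="ptune"`, `pre_seq_len`) seen in the Petals examples; exact argument names may differ between versions, and the model repo is again illustrative:

```python
import torch
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # illustrative model repo

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Remote blocks are frozen; only local prompt embeddings are trainable.
model = AutoDistributedModelForCausalLM.from_pretrained(
    model_name, tuning_mode="ptune", pre_seq_len=16
)

# Optimize only the parameters that require gradients (the local prompts).
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

batch = tokenizer("Example training text", return_tensors="pt")
loss = model(input_ids=batch["input_ids"], labels=batch["input_ids"]).loss
loss.backward()   # activations/gradients pass through the swarm
optimizer.step()
```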
- Make no mistake—AI is owned by Big Tech
- Would you donate computation and storage to help build an open source LLM?
- Run 70B LLM Inference on a Single 4GB GPU with This New Technique
There is already an implementation along the same lines that uses a BitTorrent-style architecture:
https://petals.dev/
- Run LLMs in BitTorrent style
Check it out at petals.dev, which also hosts a chatbot demo. A sketch of joining the swarm as a compute provider follows below.
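Joining the swarm as a provider amounts to running the documented `python -m petals.cli.run_server` entry point against a model the swarm serves. A minimal sketch that wraps that command; the model repo is illustrative:

```python
import subprocess
import sys

# Serve a slice of the model's transformer blocks to the public swarm.
# Equivalent to: python -m petals.cli.run_server petals-team/StableBeluga2
subprocess.run(
    [sys.executable, "-m", "petals.cli.run_server", "petals-team/StableBeluga2"],
    check=True,
)
```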
- Is distributed computing dying, or just fading into the background?
- Ask HN: Are there any projects currently exploring distributed AI training?
https://github.com/bigscience-workshop/petals
- Mistral 7B: The Complete Guide to the Best 7B Model
https://github.com/bigscience-workshop/petals
Inference only: https://lite.koboldai.net/
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama - Inference code for Llama models
alpaca-lora - Instruct-tune LLaMA on consumer hardware
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
whisper.cpp - Port of OpenAI's Whisper model in C/C++
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.