| | petals | chat.petals.dev |
|---|---|---|
| Mentions | 98 | 8 |
| Stars | 8,684 | 298 |
| Growth | 1.5% | 2.3% |
| Activity | 8.3 | 7.1 |
| Latest commit | 5 days ago | 12 days ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
petals
-
Mistral Large
So how long until we can do an open source Mistral Large?
We could make a start on Petals, or possibly some other open-source distributed training network?
[0] https://petals.dev/
-
Distributed Inference and Fine-Tuning of Large Language Models over the Internet
You can check out their project at https://github.com/bigscience-workshop/petals
- Make no mistake—AI is owned by Big Tech
- Would you donate computation and storage to help build an open source LLM?
-
Run 70B LLM Inference on a Single 4GB GPU with This New Technique
There is already an implementation along the same lines using a torrent-style architecture.
https://petals.dev/
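The idea behind this torrent-style serving can be illustrated with a toy sketch (my illustration, not the actual Petals implementation): each peer hosts a contiguous slice of the model's layers, and a client chains partial forward passes across peers, much like fetching chunks from a swarm. Here a "layer" is reduced to adding a constant so the pipeline structure is visible.

```python
# Toy sketch of BitTorrent-style model sharding: each Peer holds a slice of
# layers; the client routes activations peer to peer. Names are illustrative.

class Peer:
    """Hosts a contiguous slice of "layers"; each layer just adds a constant."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for delta in self.layers:
            x = x + delta
        return x

def run_inference(peers, x):
    # The client chains partial forward passes across the swarm of peers.
    for peer in peers:
        x = peer.forward(x)
    return x

# A 6-"layer" model split across three peers:
peers = [Peer([1, 2]), Peer([3, 4]), Peer([5, 6])]
print(run_inference(peers, 0))  # → 21
```

In the real system each peer runs actual transformer blocks and the client handles routing, retries, and fault tolerance, but the pipeline shape is the same.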
-
Run LLMs in bittorrent style
Check it out at Petals.dev. Chatbot
- Is distributed computing dying, or just fading into the background?
-
Ask HN: Are there any projects currently exploring distributed AI training?
https://github.com/bigscience-workshop/petals
-
Mistral 7B: The Complete Guide to the Best 7B Model
https://github.com/bigscience-workshop/petals
Inference only: https://lite.koboldai.net/
- Run LLMs at home, BitTorrent‑style
chat.petals.dev
-
Make no mistake—AI is owned by Big Tech
ETA: https://chat.petals.dev
-
Run LLMs at home, BitTorrent‑style
Hi, a dev here. `</s>` means "end of sequence" for LLMs. If a model generates it, it forgets everything and continues with unrelated random text. So I don't think that malicious actors are involved here.
Apparently, the Colab code snippet is just too simplified and does not handle `</s>` correctly. This is not the case with the full chatbot app at https://chat.petals.dev - you can use it instead or take a look at its code.
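The bug described above, a client that keeps sampling past the end-of-sequence token, can be shown with a minimal sketch. `TinyModel` below is a deterministic stand-in for any causal LM, not the Petals API; the point is only that the generation loop must check for the EOS id and stop.

```python
# Hypothetical sketch: a generation loop that stops at the EOS token.
# TinyModel is an illustrative stub, not a real model or the Petals API.

EOS_ID = 2  # a common id for the </s> token in LLaMA-family tokenizers

class TinyModel:
    """Deterministic stub that emits a fixed token stream."""
    def __init__(self, script):
        self.script = iter(script)

    def next_token(self, _context):
        return next(self.script)

def generate(model, prompt_ids, max_new_tokens=16):
    out = list(prompt_ids)
    for _ in range(max_new_tokens):
        tok = model.next_token(out)
        if tok == EOS_ID:   # stop here: tokens past EOS are unrelated noise,
            break           # which is what the over-simplified snippet showed
        out.append(tok)
    return out

model = TinyModel([5, 7, EOS_ID, 9, 9, 9])
print(generate(model, [1, 4]))  # → [1, 4, 5, 7]
```

Without the `EOS_ID` check, the loop would happily append the 9s after the break point, which is exactly the "unrelated random text" behavior the dev describes.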
-
Falcon180B: authors open source a new 180B version!
edit: this community of people is amazing. Like 10 minutes or so after I posted this, it is now up on chat.petals.dev!!!!
- Talk to Falcon 180B-Chat running over Petals
-
ChatGPT Is Down Again
good opportunity to try the free and totally open source Big Science Petals chat: https://chat.petals.dev/ ... Try out Stable Beluga 2 70B
I am currently running my 3090 GPU on there to help out, you can check out https://health.petals.dev/
If you have a spare GPU, consider contributing: https://github.com/bigscience-workshop/petals . I am not associated with them.
-
Sweating Bullets Test
So far, not a single one of the models tested (between 7b-70b) could figure out the name of the main character (Nick Slaughter). I've tried all sorts of prompts and the connection between "Tropical Heat" and "Sweating Bullets" is usually known to the model (e.g. "What's the show "Tropical Heat" called in the US?"). But as soon as I ask about the main character, all the models I have tested so far hallucinate all sorts of names, though usually in the right direction (detectives).
- Petals: Run 100B+ models at home bit-torrent style
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
askai - Command Line Interface for OpenAi ChatGPT
llama - Inference code for Llama models
ggml - Tensor library for machine learning
alpaca-lora - Instruct-tune LLaMA on consumer hardware
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
KoboldAI-Client
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.