dsir vs petals
| | dsir | petals |
|---|---|---|
| Mentions | 1 | 98 |
| Stars | 191 | 8,730 |
| Growth | 9.4% | 2.0% |
| Activity | 7.7 | 8.3 |
| Latest commit | about 1 month ago | 18 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dsir
-
🧵 Researchers at Stanford Propose A Cheap And Scalable Data Selection Framework Based on Importance Resampling For Improving The Downstream Performance of Language Models
Quick Read: https://www.marktechpost.com/2023/02/16/researchers-at-stanford-propose-a-cheap-and-scalable-data-selection-framework-based-on-importance-resampling-for-improving-the-downstream-performance-of-language-models/
Paper: https://arxiv.org/pdf/2302.03169.pdf
GitHub: https://github.com/p-lambda/dsir
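The core idea is easy to sketch: weight each candidate example by how much likelier it is under a model of the target data than under a model of the raw pool, then resample in proportion to those weights. Below is a minimal, self-contained illustration using add-one-smoothed unigram models and whitespace tokenization; `dsir_select` and its helper are hypothetical names, and the actual repo uses hashed n-gram features to scale to large corpora, so treat this as a sketch of the idea rather than the package's API.

```python
import math
import random
from collections import Counter

def smoothed_logprob(counts, total, vocab_size, text):
    # Add-one smoothed unigram log-probability of a whitespace-tokenized text.
    return sum(
        math.log((counts[tok] + 1) / (total + vocab_size))
        for tok in text.split()
    )

def dsir_select(raw_texts, target_texts, k):
    """Pick k examples from raw_texts that resemble target_texts.

    Importance resampling: score each raw example by
    log p_target(x) - log p_raw(x) under unigram models, then sample
    k examples without replacement via the Gumbel-top-k trick.
    """
    raw_counts = Counter(tok for t in raw_texts for tok in t.split())
    tgt_counts = Counter(tok for t in target_texts for tok in t.split())
    vocab_size = len(set(raw_counts) | set(tgt_counts))
    raw_total = sum(raw_counts.values())
    tgt_total = sum(tgt_counts.values())

    def log_weight(text):
        return (smoothed_logprob(tgt_counts, tgt_total, vocab_size, text)
                - smoothed_logprob(raw_counts, raw_total, vocab_size, text))

    # Adding Gumbel noise to the log-weights and keeping the top k is
    # equivalent to sampling k items without replacement with probability
    # proportional to their (exponentiated) weights.
    keyed = [(log_weight(t) - math.log(-math.log(random.random())), t)
             for t in raw_texts]
    keyed.sort(key=lambda pair: pair[0], reverse=True)
    return [t for _, t in keyed[:k]]
```

For example, `dsir_select(raw_pool, wiki_sample, k=10000)` would pull the 10,000 raw examples that look most like the target sample.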
petals
-
Mistral Large
So how long until we can do an open-source Mistral Large?
We could make a start on Petals [0] or some other open-source distributed training cluster, perhaps?
[0] https://petals.dev/
-
Distributed Inference and Fine-Tuning of Large Language Models over the Internet
You can check out their project at https://github.com/bigscience-workshop/petals
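For anyone who wants to try it: per the example in the Petals README, a distributed model is used like an ordinary Hugging Face model. A minimal sketch along those lines follows (model name taken from the README; the API may differ across versions):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Model served by the public swarm (name taken from the Petals README).
model_name = "petals-team/StableBeluga2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only the embeddings and sampling run locally; the transformer blocks
# are executed by volunteer servers reached over the Internet.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A distributed swarm of GPUs is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```

Donating compute, as some of the threads below ask about, amounts to joining the swarm as a server, which per the README is done with `python -m petals.cli.run_server <model_name>` on a machine with a GPU.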
- Make no mistake—AI is owned by Big Tech
- Would you donate computation and storage to help build an open source LLM?
-
Run 70B LLM Inference on a Single 4GB GPU with This New Technique
There is already an implementation along the same lines, using a BitTorrent-style architecture.
https://petals.dev/
-
Run LLMs in bittorrent style
Check it out at https://petals.dev/
- Is distributed computing dying, or just fading into the background?
-
Ask HN: Are there any projects currently exploring distributed AI training?
https://github.com/bigscience-workshop/petals
-
Mistral 7B: The Complete Guide to the Best 7B Model
https://github.com/bigscience-workshop/petals
Inference only: https://lite.koboldai.net/
- Run LLMs at home, BitTorrent-style
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama - Inference code for Llama models
alpaca-lora - Instruct-tune LLaMA on consumer hardware
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
whisper.cpp - Port of OpenAI's Whisper model in C/C++
DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
KoboldAI-Client