bittensor vs petals

| | bittensor | petals |
| --- | --- | --- |
| Mentions | 4 | 98 |
| Stars | 781 | 8,684 |
| Growth | 4.5% | 1.5% |
| Activity | 9.6 | 8.3 |
| Latest Commit | 1 day ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
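To make the recency weighting concrete, here is a hypothetical sketch of such a score: each commit contributes a weight that decays exponentially with age, so recent commits count more than older ones. The half-life constant is an assumption for illustration, not the site's actual formula.

```python
import time

HALF_LIFE_DAYS = 30.0  # assumed decay half-life, chosen for illustration

def activity_score(commit_timestamps, now=None):
    """Recency-weighted activity: a commit from today contributes close
    to 1.0, while older commits contribute exponentially less."""
    now = now if now is not None else time.time()
    score = 0.0
    for ts in commit_timestamps:
        age_days = (now - ts) / 86400.0
        score += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return score

# Three commits aged 1, 10, and 100 days: the newest dominates the score.
now = time.time()
commits = [now - days * 86400 for days in (1, 10, 100)]
print(round(activity_score(commits, now), 2))
```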
bittensor
- Has anyone else used the bittensor/subnet1 model? Is there a way to use it outside of the horde?
  You are connected through the validator "Tensor.Exchange", and I recently increased the number of workers to improve its speed. What runs in the background is literally a mixture of different models working in parallel to give the best response; more information can be found at www.bittensor.com or on their Discord.
- LLM@home
  Today I came across bittensor / Tao network. https://github.com/opentensor/bittensor
- [D] I don't really trust papers out of "Top Labs" anymore
  Have a look at Bittensor - www.bittensor.com
- Bittensor: Internet-Scale Neural Networks
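For readers who want to poke at a subnet directly, here is a minimal sketch of inspecting subnet 1's metagraph with the bittensor Python SDK. The call names follow the 2023-era SDK and should be treated as assumptions, since the API has changed across releases.

```python
import bittensor as bt  # pip install bittensor

# Sync the metagraph for subnet 1 (the text-generation subnet mentioned above).
# bt.metagraph() is the 2023-era entry point; newer releases may differ.
metagraph = bt.metagraph(netuid=1)

print(f"neurons registered on subnet 1: {int(metagraph.n)}")
print(f"total stake (TAO): {metagraph.S.sum().item():.2f}")
```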
petals
- Mistral Large
  So how long until we can do an open-source Mistral Large? We could make a start on Petals, or possibly some other open-source distributed training cluster. [0] https://petals.dev/
- Distributed Inference and Fine-Tuning of Large Language Models over the Internet
  You can check out their project at https://github.com/bigscience-workshop/petals
- Make no mistake—AI is owned by Big Tech
- Would you donate computation and storage to help build an open source LLM?
- Run 70B LLM Inference on a Single 4GB GPU with This New Technique
  There is already an implementation along the same lines using a torrent-style architecture: https://petals.dev/
- Run LLMs in BitTorrent style
  Check it out at Petals.dev.
- Is distributed computing dying, or just fading into the background?
- Ask HN: Are there any projects currently exploring distributed AI training?
  https://github.com/bigscience-workshop/petals
- Mistral 7B: The Complete Guide to the Best 7B Model
  https://github.com/bigscience-workshop/petals
  Inference only: https://lite.koboldai.net/
- Run LLMs at home, BitTorrent‑style
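The mentions above all describe the same workflow: load a model whose layers are served by volunteer peers and generate from it as if it were local. A minimal sketch, following the example in the Petals README; the model name is the one used in their docs and is interchangeable with other models served on the public swarm.

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model from the Petals docs
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Only the embeddings and sampling run locally; the transformer blocks are
# fetched from volunteer peers over the swarm, BitTorrent-style.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A distributed LLM is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```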
What are some alternatives?
CortexTheseus - Cortex - AI on Blockchain, Official Golang implementation
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
SD_Coin - An open-source blockchain network and cryptocurrency project. It exists so people can build and use their own blockchain networks, or join networks created by others.
llama - Inference code for Llama models
zeronet-conservancy - A client for the decentralized p2p web 0net, focused on preserving 0net and transitioning to the riza network
alpaca-lora - Instruct-tune LLaMA on consumer hardware
TorchGA - Train PyTorch Models using the Genetic Algorithm with PyGAD
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
koila - Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
torchsynth - A GPU-optional modular synthesizer in pytorch, 16200x faster than realtime, for audio ML researchers.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.