- petals: 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
- hivemind: Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
BLOOM's "Petals" works somewhat like this, but it's still a ways off, since I don't believe you can train on it, only run inference.
Today I came across Bittensor / the Tao network: https://github.com/opentensor/bittensor
Yeah, there's Hivemind, and there's research on how to chunk out a training workload so it can be scaled out. I'm not sure why there's commentary that latency issues would limit this sort of enterprise; the architectures typically aren't designed for liveness. Other subfields of distributed training/inference include zero-knowledge machine learning. Beyond all of that, there's also adversarial computation, like SafetyNets and refereed delegation of computation.
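The "chunking out the training workload" idea above is essentially pipeline partitioning: split a model's layers into contiguous stages and hand each stage to a different peer, with activations flowing peer to peer. Here's a minimal pure-Python sketch of that partitioning; the `Worker`/`partition` names are illustrative only and are not Hivemind's or Petals' actual API.

```python
# Toy sketch of pipeline-style partitioning: a "model" is a list of
# layer functions, split into contiguous stages that different workers
# (here, plain Python objects standing in for remote peers) evaluate
# in sequence. Names are hypothetical, not a real library's API.
from typing import Callable, List

Layer = Callable[[float], float]

class Worker:
    """Holds one contiguous stage (a chunk of layers) of the model."""
    def __init__(self, stage: List[Layer]):
        self.stage = stage

    def forward(self, x: float) -> float:
        for layer in self.stage:
            x = layer(x)
        return x

def partition(layers: List[Layer], n_workers: int) -> List[Worker]:
    """Split layers into n_workers roughly equal contiguous chunks."""
    k, rem = divmod(len(layers), n_workers)
    stages, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < rem else 0)
        stages.append(Worker(layers[start:end]))
        start = end
    return stages

def pipeline_forward(workers: List[Worker], x: float) -> float:
    """Activations flow worker to worker, like a chain of volunteer peers."""
    for w in workers:
        x = w.forward(x)
    return x

# A 4-layer "model": each layer is a simple affine function.
model = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * 10]
workers = partition(model, n_workers=2)  # two volunteers, two layers each

full = pipeline_forward(workers, 5.0)  # result via the worker chain
local = 5.0
for layer in model:                    # same model run in one place
    local = layer(local)
assert full == local  # partitioning doesn't change the function computed
```

The point of the sketch is that correctness is unaffected by where the stages run; latency only stretches the wall-clock time of each hop, which matters for interactive serving but not for throughput-oriented training, matching the "not designed for liveness" point above.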