GLM-130B
hivemind
| | GLM-130B | hivemind |
|---|---|---|
| Mentions | 19 | 40 |
| Stars | 7,607 | 1,837 |
| Stars growth (monthly) | 0.9% | 2.9% |
| Activity | 4.8 | 5.4 |
| Latest commit | 9 months ago | 29 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GLM-130B
- GLM-130B
The https://github.com/THUDM/GLM-130B model is trained on The Pile and can run on 4x RTX 3090s when quantized to INT4. I'm wondering if anyone knows whether this model could be (or already has been) quantized using GPTQ, which gives some impressive performance gains over traditional quantization. I'm also wondering if anyone has tried a 3-bit or 2-bit GPTQ quantization of such a massive model. Are there any inherent limitations here? Is there anything about this model that prevents it from being run on text-generation-webui?
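As a rough sanity check on what each bit width would require, here is a back-of-the-envelope sketch; the parameter count and per-GPU VRAM figures are assumptions, and it ignores activations, KV cache, and framework overhead:

```python
# Back-of-the-envelope memory check for weight-only quantization of a
# 130B-parameter model; activations, KV cache, and overhead are ignored,
# so real requirements are somewhat higher.
PARAMS = 130e9          # approximate GLM-130B parameter count
VRAM_PER_GPU_GB = 24    # RTX 3090
NUM_GPUS = 4

for bits in (16, 8, 4, 3, 2):
    weights_gb = PARAMS * bits / 8 / 1e9   # bits -> bytes -> GB (decimal)
    fits = weights_gb <= VRAM_PER_GPU_GB * NUM_GPUS
    print(f"{bits:>2}-bit weights: {weights_gb:6.1f} GB "
          f"({'fits' if fits else 'does not fit'} in {NUM_GPUS}x{VRAM_PER_GPU_GB} GB)")
```

Weights alone come to roughly 65 GB at 4 bits, which is why 4x 24 GB cards are the commonly cited minimum; whether GPTQ preserves acceptable quality at 3 or 2 bits is exactly the open question above.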
- Has anyone tried GLM?
- Ask HN: Open source LLM for commercial use?
- Whichever way I look at it, I just don’t see this being the case. Why do you agree/disagree?
- The New Bing and ChatGPT
> GLM-130B, a model comparable with GPT-3, has 130 billion parameters in FP16 precision, a total of 260G of GPU memory is required to store model weights. The DGX-A100 server has 8 A100s and provides an amount of 320G of GPU memory (640G for 80G A100 version) so it suits GLM-130B well.
https://github.com/THUDM/GLM-130B/blob/main/docs/low-resourc...
- OpenAI Major Outage
GLM-130B[1] (a 130-billion-parameter model vs GPT-3's 175 billion parameters) is able to run optimally on high-end consumer hardware, 4x RTX 3090 in particular. That's < $4k at current prices, and as hardware prices go, one can only imagine what it'll be in a year or two. It also enables running with degraded performance on lesser systems.
It's a whole lot cheaper to run neural-net-style systems than to train them. "Somebody on Twitter"[2] got it set up, broke down the costs, and demonstrated some prompts. The cliff notes: a fraction of a penny per query, with each taking about 16 s to generate. The output is pretty terrible, but it's unclear to me whether that's inherent or a result of priorities. I expect OpenAI spent a lot of manpower on supervised training, whereas this system probably had minimal, especially in English (it's from a Chinese university).
[1] - https://github.com/THUDM/GLM-130B
[2] - https://twitter.com/alexjc/status/1617152800571416577
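To see how a "fraction of a penny per query" figure can arise, here is a toy cost model; the hourly rate is a placeholder assumption, not a number from the linked thread:

```python
# Toy per-query cost for self-hosted inference. The hourly rate below is a
# placeholder assumption; only the ~16 s latency comes from the comment above.
HOURLY_RATE_USD = 1.50     # assumed rental/amortized cost of 4x RTX 3090
SECONDS_PER_QUERY = 16     # generation latency quoted above

queries_per_hour = 3600 / SECONDS_PER_QUERY
cost_per_query = HOURLY_RATE_USD / queries_per_hour
print(f"{queries_per_hour:.0f} queries/hour -> ${cost_per_query:.4f} per query")
# 225 queries/hour -> about $0.0067 per query, i.e. well under a cent
```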
- [D] Are there any known AI systems today that are significantly more advanced than ChatGPT?
- Will there ever be a "Stable Diffusion chat AI" that we can run at home like one can do with Stable Diffusion? A "roll-your-own at home ChatGPT"?
GLM-130B in 4-bit mode is better than GPT-3 and can run on four RTX 3090s. Still expensive, but it's getting closer. https://github.com/THUDM/GLM-130B
- Open-Source competitor to OpenAI?
- Ask HN: Can you crowdfund the compute for GPT?
https://github.com/THUDM/GLM-130B might be a useful place to look
hivemind
- You can now train a 70B language model at home
https://github.com/learning-at-home/hivemind is also relevant
- Would anyone be interested in contributing to some group projects?
I really hope you'll join me, for the Petals support at least! A single docker-compose.yml file is all we need for now. If we can find enough people willing to host some smaller models, perhaps we could expand into Hivemind and create our own custom foundation model one day?
- Hive mind: Train deep learning models on thousands of volunteers across the world
- Could a model not be trained by a decentralized network? Like SETI@home, or kinda-sorta like Bitcoin. Petals accomplishes this somewhat, but if raw computing power is the only barrier to open source, I'd be happy to try organizing decentralized computing efforts.
Decentralized deep learning: https://github.com/learning-at-home/hivemind
- Orca (built on llama13b) looks like the new sheriff in town
https://github.com/learning-at-home/hivemind - the same people are behind it; it was made before Petals, I think.
- Do you think that AI research will slow down to a halt because of regulation?
Not if we rise to meet that challenge. Here are a few tools that facilitate AI research in the face of an advanced persistent threat: Hivemind, a distributed PyTorch framework.
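To make "a distributed PyTorch framework" concrete, here is a minimal sketch following hivemind's collaborative-optimizer pattern; the model, run_id, and batch sizes are placeholder assumptions, so check the project's quickstart for the current API:

```python
import torch
import hivemind

# Start a DHT node; other volunteers join the swarm by passing this peer's
# multiaddresses as initial_peers=[...] when creating their own hivemind.DHT.
dht = hivemind.DHT(start=True)

model = torch.nn.Linear(784, 10)                        # placeholder model
base_opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrap the local optimizer so parameters are averaged with peers once the
# swarm has collectively processed target_batch_size samples.
opt = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",          # all peers in one training run share this id
    optimizer=base_opt,
    batch_size_per_step=32,     # samples contributed by each local step
    target_batch_size=10_000,   # global batch size that triggers averaging
    use_local_updates=True,
    verbose=True,
)

# The training loop stays ordinary PyTorch; opt.step() handles collaboration.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
opt.zero_grad()
```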
- LLM@home
Yeah, there's Hivemind, and there's research on how to chunk out training workloads so they can be scaled up. I'm not sure why there's commentary that latency issues would limit this sort of enterprise; the architecture typically isn't designed for liveness. Other subfields of distributed training/inference include zero-knowledge machine learning. Besides all of that, there's also adversarial computation like SafetyNets and refereed delegation of computation.
- [D] Google "We Have No Moat, And Neither Does OpenAI": Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI
We already have the software for it. There are several projects, but the one I'm most familiar with is https://github.com/learning-at-home/hivemind for training, and its sister project https://petals.ml/ for running large models distributed.
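On the Petals side, client-side inference looks roughly like ordinary transformers usage. The sketch below assumes a recent Petals release; the class name and model identifier have changed across versions and only work for checkpoints the swarm is actually hosting, so treat both as illustrative:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Illustrative model name: Petals can only serve checkpoints that peers in
# the public (or a private) swarm are hosting at the time.
model_name = "bigscience/bloom-petals"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Each forward pass is routed through remote peers hosting blocks of the model.
inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```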
- Run 100B+ language models at home, BitTorrent‑style
I'm not entirely sure how the approach they're using works [0], but I study federated learning, and one of the highly cited survey papers has several chapters (5 and 6 in particular) addressing potential attacks, failure modes, and bias [1].
0: https://github.com/learning-at-home/hivemind
1: https://arxiv.org/abs/1912.04977
- SETI Home Is in Hibernation
The Hivemind project is just that
https://github.com/learning-at-home/hivemind
What are some alternatives?
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
replika-research - Replika.ai Research Papers, Posters, Slides & Datasets
ggml - Tensor library for machine learning
Super-SloMo - PyTorch implementation of Super SloMo by Jiang et al.
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
alpa - Training and serving large-scale neural networks with auto parallelization.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
lm-human-preferences - Code for the paper Fine-Tuning Language Models from Human Preferences
HiveMind-core - Join the OVOS collective, utils for OpenVoiceOS mesh networking
metaseq - Repo for external large-scale work
FedML - FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, FEDML Nexus AI (https://fedml.ai) is your generative AI platform at scale.