| | GLM-130B | pythia |
|---|---|---|
| Mentions | 19 | 7 |
| Stars | 7,616 | 2,056 |
| Growth | 0.4% | 3.2% |
| Activity | 4.8 | 7.8 |
| Latest commit | 10 months ago | 7 days ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GLM-130B
- GLM-130B
The https://github.com/THUDM/GLM-130B model is trained on The Pile and can run on 4x RTX 3090 when quantized to INT4. I'm wondering whether this model could be (or has been) quantized using GPTQ, which gives some impressive performance gains over traditional quantization, and whether anyone has tried a 3-bit or 2-bit GPTQ quantization of such a massive model. Are there any inherent limitations here? Is there anything about this model that prevents it from being run in text-generation-webui?
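As a rough illustration of what "quantized to INT4" means, here is a minimal round-to-nearest (RTN) sketch in PyTorch. GPTQ gets its gains by additionally using per-layer second-order (Hessian) information when picking the quantized values, which this sketch does not attempt.

```python
import torch

def quantize_int4_rtn(w: torch.Tensor):
    """Symmetric per-row 4-bit quantization: 16 integer levels in [-8, 7]."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0    # one scale per output row
    q = torch.clamp(torch.round(w / scale), -8, 7)     # integer codes
    return q.to(torch.int8), scale                     # stored in int8 containers here

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4096, 4096)                            # stand-in for one weight matrix
q, s = quantize_int4_rtn(w)
print(f"mean abs quantization error: {(dequantize(q, s) - w).abs().mean():.4f}")
```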
- Has anyone tried GLM?
- Ask HN: Open source LLM for commercial use?
- Whichever way I look at it, I just don’t see this being the case. Why do you agree/disagree?
- The New Bing and ChatGPT
> GLM-130B, a model comparable with GPT-3, has 130 billion parameters in FP16 precision, a total of 260G of GPU memory is required to store model weights. The DGX-A100 server has 8 A100s and provides an amount of 320G of GPU memory (640G for 80G A100 version) so it suits GLM-130B well.
https://github.com/THUDM/GLM-130B/blob/main/docs/low-resourc...
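The arithmetic behind those numbers is just parameter count times bytes per parameter (activations and KV cache ignored); a quick back-of-the-envelope check:

```python
PARAMS = 130e9  # GLM-130B

for precision, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{precision}: ~{gb:.0f} GB of weights")

# FP16: ~260 GB -> needs a DGX-class 8x A100 server, as quoted above
# INT4: ~65 GB  -> fits in 4x RTX 3090 (4 x 24 GB = 96 GB), which is what the
#                  low-resource inference docs describe
```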
- OpenAI Major Outage
GLM-130B[1] (a 130-billion-parameter model vs GPT-3's 175 billion) can run optimally on consumer-level high-end hardware, 4x RTX 3090 in particular. That's < $4k at current prices, and as hardware prices go, one can only imagine what it'll be in a year or two. It also enables running with degraded performance on lesser systems.
It's a whole lot cheaper to run neural-net-style systems than to train them. "Somebody on Twitter"[2] got it set up, broke down the costs, demonstrated some prompts, and so on. The cliff notes: a fraction of a penny per query, with each taking about 16 s to generate. The output's pretty terrible, but it's unclear to me whether that's inherent or a matter of priorities. I expect OpenAI spent a lot of manpower on supervised training, whereas this system probably had minimal, especially in English (it's from a Chinese university).
[1] - https://github.com/THUDM/GLM-130B
[2] - https://twitter.com/alexjc/status/1617152800571416577
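A rough reconstruction of the "fraction of a penny per query" figure, under assumed numbers (four ~350 W GPUs, $0.15/kWh electricity, ~$4k of hardware amortized over three years of continuous use; only the ~16 s per query comes from the linked breakdown):

```python
SECONDS_PER_QUERY = 16                      # from the linked breakdown
POWER_KW = 4 * 0.350                        # assumed: four RTX 3090s, ignoring CPU/overhead
PRICE_PER_KWH = 0.15                        # assumed electricity price
HARDWARE_COST = 4_000                       # assumed, "< $4k at current prices"
LIFETIME_SECONDS = 3 * 365 * 24 * 3600      # assumed: three years of continuous use

electricity = POWER_KW * (SECONDS_PER_QUERY / 3600) * PRICE_PER_KWH
amortized_hw = HARDWARE_COST * SECONDS_PER_QUERY / LIFETIME_SECONDS
print(f"electricity per query:        ${electricity:.4f}")
print(f"amortized hardware per query: ${amortized_hw:.4f}")
print(f"total per query:              ${electricity + amortized_hw:.4f}")  # well under a cent
```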
- [D] Are there any known AI systems today that are significantly more advanced than ChatGPT?
- Will there ever be a "Stable Diffusion chat AI" that we can run at home like one can do with Stable Diffusion? A "roll-your-own at home ChatGPT"?
GLM-130B in 4-bit mode is better than GPT-3 and can run on 4 RTX 3090s. Still expensive, but it's getting closer. https://github.com/THUDM/GLM-130B
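For the multi-GPU part, the GLM-130B repo ships its own inference scripts, but the general pattern of sharding a 4-bit model across every visible GPU looks roughly like this with the Hugging Face transformers/accelerate stack (the model id below is a hypothetical placeholder, not the actual GLM-130B checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "some-org/some-large-checkpoint"   # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                                           # spread layers across all GPUs
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),   # 4-bit weights via bitsandbytes
    torch_dtype=torch.float16,
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```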
- Open-Source competitor to OpenAI?
- Ask HN: Can you crowdfund the compute for GPT?
https://github.com/THUDM/GLM-130B might be a useful place to look
pythia
- If you can't reproduce the model then it's not open-source
You can grep for bad words. What you can't do (unless hoops are jumped through) is verify that the weights came from the same dataset. You can set the same random seed and still get different results; the calculations are not that deterministic (https://pytorch.org/docs/stable/notes/randomness.html#reprod...).
> I am overall skeptical that this is true in the case of LLMs
This skepticism seems reasonable. EleutherAI has documentation for reproducing the training (https://github.com/EleutherAI/pythia#reproducing-training), but so far I haven't seen it lead to anything.
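For reference, the determinism knobs the linked PyTorch notes describe look like the sketch below; even with all of them set, bitwise-identical runs are only expected on the same hardware, library versions, and data-loading setup, which is the point being made.

```python
import os
import random

import numpy as np
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed by some deterministic CUDA kernels

def seed_everything(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.use_deterministic_algorithms(True)        # raise an error on nondeterministic ops

seed_everything(42)
```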
- Local Alternatives of ChatGPT and Midjourney
LLaMA, Pythia, RWKV, Flan-T5 (self-hosted), FlexGen
- Ask HN: Open source LLM for commercial use?
- A New AI Research Proposes Pythia: A Suite of Decoder-Only Autoregressive Language Models Ranging from 70M to 12B Parameters
Github: https://github.com/EleutherAI/pythia
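Loading one of the released checkpoints is a few lines with Hugging Face transformers; the repo also publishes intermediate training checkpoints as revisions (e.g. "step3000"), which is what makes the suite useful for interpretability work. A minimal sketch:

```python
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m")   # sizes go up to 12B
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

inputs = tokenizer("The Pile is", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```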
- Pythia: Interpreting Autoregressive Transformers Across Time and Scale
- AI computing startup Cerebras releases open source ChatGPT-like models
- Is there a way to easily train ChatGPT or GPT on custom knowledge?
Pythia is another, smaller option that seems to have pretty good performance, as does FLAN. Both are okay for commercial use AFAIK (though double-check for yourself).
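As a sketch of what fine-tuning a small Pythia checkpoint on your own text can look like with the Hugging Face Trainer (the corpus path and hyperparameters below are illustrative, not recommendations):

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/pythia-410m"          # any Pythia size works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token      # the GPT-NeoX tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Chop a hypothetical domain corpus into fixed-length training blocks.
ids = tokenizer(open("corpus.txt").read(), return_tensors="pt").input_ids[0]
block = 512
train_data = [{"input_ids": ids[i:i + block]} for i in range(0, len(ids) - block, block)]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pythia-custom", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM labels
)
trainer.train()
```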
What are some alternatives?
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
lollms-webui - Lord of Large Language Models Web User Interface
ggml - Tensor library for machine learning
geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
lm-human-preferences - Code for the paper Fine-Tuning Language Models from Human Preferences
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
stable-diffusion-webui - Stable Diffusion web UI