lm-human-preferences
GLM-130B
| | lm-human-preferences | GLM-130B |
|---|---|---|
| Mentions | 8 | 19 |
| Stars | 1,076 | 7,579 |
| Growth | 5.4% | 1.0% |
| Activity | 2.7 | 4.8 |
| Last commit | 8 months ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lm-human-preferences
- Ask HN: Open-source GPT-3 alternatives
- OpenAI's continued success: how they came to create the most advanced AI of 2023, ChatGPT.
- Value head in GPT2
Found relevant code at https://github.com/openai/lm-human-preferences, plus all code implementations here.
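The repo itself is TensorFlow, but the idea is simple to sketch. Below is a minimal PyTorch version, assuming the HuggingFace `transformers` GPT-2 API; the class and variable names are illustrative, not taken from lm-human-preferences. A value head is just a linear layer that maps each token's final hidden state to a scalar value estimate:

```python
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

class GPT2WithValueHead(nn.Module):
    """GPT-2 backbone plus a scalar value head: one value per token position."""

    def __init__(self, name="gpt2"):
        super().__init__()
        self.transformer = GPT2Model.from_pretrained(name)
        # Maps each final hidden state (hidden_size) to a scalar value estimate.
        self.value_head = nn.Linear(self.transformer.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.transformer(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state                         # (batch, seq_len, hidden_size)
        return self.value_head(hidden).squeeze(-1)  # (batch, seq_len)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2WithValueHead()
inputs = tokenizer("the quick brown fox", return_tensors="pt")
values = model(inputs["input_ids"])  # one value estimate per token
```

In RLHF-style fine-tuning, these per-token values serve as the critic's baseline for the policy-gradient update.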
- Should we stick to the devil we know?
That's why, when they're serious, they use RL for finetuning from human preferences (it would be hilarious if this attempt to solve the terrible bias you take to be evidence of AGI threat ends up creating a Woke Singleton itself, btw); it's a powerful general approach, and I see no sign of it being applied here.
- Dall-E 2
The kind of measures they are taking, like simply deleting wholesale anything problematic, don't really have a '-1'.
But amusingly, exactly that did happen in one of their GPT experiments! https://openai.com/blog/fine-tuning-gpt-2/
- [D] Applications for using reinforcement learning to fine-tune GPT-2
Code for https://arxiv.org/abs/1909.08593 found: https://github.com/openai/lm-human-preferences
GLM-130B
- Ask HN: Open source LLM for commercial use?
- The New Bing and ChatGPT
> GLM-130B, a model comparable with GPT-3, has 130 billion parameters; in FP16 precision, a total of 260 GB of GPU memory is required to store the model weights. The DGX-A100 server has 8 A100s and provides 320 GB of GPU memory (640 GB for the 80 GB A100 version), so it suits GLM-130B well.
https://github.com/THUDM/GLM-130B/blob/main/docs/low-resourc...
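The arithmetic behind the quoted numbers is easy to verify; here is a back-of-the-envelope sketch (weights only, ignoring activations, KV cache, and framework overhead):

```python
# Weight memory = parameter count x bytes per parameter.
params = 130e9               # GLM-130B parameter count
fp16_gb = params * 2 / 1e9   # FP16 = 2 bytes/param -> 260 GB
dgx_40gb = 8 * 40            # DGX-A100 with 8x 40 GB A100s -> 320 GB
dgx_80gb = 8 * 80            # 80 GB A100 variant -> 640 GB
print(fp16_gb <= dgx_40gb)   # True: the weights alone fit, as the quote says
```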
- Will there ever be a "Stable Diffusion chat AI" that we can run at home like one can do with Stable Diffusion? A "roll-your-own at home ChatGPT"?
GLM-130B in 4-bit mode is better than GPT-3 and can run on 4 RTX 3090s. Still expensive, but it's getting closer. https://github.com/THUDM/GLM-130B
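The same arithmetic explains the 4-bit claim (again weights only; real inference also needs room for activations and the KV cache):

```python
params = 130e9
int4_gb = params * 0.5 / 1e9   # 4 bits = 0.5 bytes/param -> 65 GB
rtx3090_gb = 4 * 24            # four RTX 3090s at 24 GB each -> 96 GB
print(int4_gb <= rtx3090_gb)   # True, with ~31 GB left over per node
```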
- Open-Source competitor to OpenAI?
- Ask HN: Can you crowdfund the compute for GPT?
https://github.com/THUDM/GLM-130B might be a useful place to look
- [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3
- Will we have a free version of ChatGPT (GPT-3) similar to Stable Diffusion?
Also check out https://github.com/THUDM/GLM-130B, which can run on 4 RTX 3090s.
- Should we stick to the devil we know?
Same deal with Xi.
What are some alternatives?
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
ggml - Tensor library for machine learning
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
trl - Train transformer language models with reinforcement learning.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
dalle-mini - DALL·E Mini - Generate images from a text prompt
metaseq - Repo for external large-scale work
tensorrtx - Implementation of popular deep learning networks with TensorRT network definition API
glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"