lm-human-preferences vs GLM-130B

Compare lm-human-preferences vs GLM-130B and see how they differ.

lm-human-preferences

Code for the paper "Fine-Tuning Language Models from Human Preferences" (by openai)

GLM-130B

GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) (by THUDM)
                 lm-human-preferences   GLM-130B
Mentions         8                      19
Stars            1,076                  7,579
Growth           5.4%                   1.0%
Activity         2.7                    4.8
Latest commit    8 months ago           8 months ago
Language         Python                 Python
License          MIT License            Apache License 2.0
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
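The exact formula behind the activity number isn't published. As a rough illustration only, here is a minimal Python sketch of one way a recency-weighted, percentile-ranked score like this could be computed; the 30-day half-life and the 0-10 percentile scaling are assumptions, not the site's actual method:

```python
from datetime import datetime

HALF_LIFE_DAYS = 30.0  # assumption: a commit's weight halves every 30 days

def commit_weight(commit_date: datetime, now: datetime) -> float:
    """Exponentially decay a commit's weight with age."""
    age_days = (now - commit_date).total_seconds() / 86400.0
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def raw_activity(commit_dates: list[datetime], now: datetime) -> float:
    """Recency-weighted commit count: recent commits count more than old ones."""
    return sum(commit_weight(d, now) for d in commit_dates)

def activity_score(project_raw: float, all_projects_raw: list[float]) -> float:
    """Percentile rank scaled to 0-10, so a 9.0 means top 10% of tracked projects."""
    below = sum(1 for r in all_projects_raw if r < project_raw)
    return 10.0 * below / len(all_projects_raw)
```

Under the same reading, "Growth" would simply be (stars_this_month - stars_last_month) / stars_last_month, reported as a percentage.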

lm-human-preferences

Posts with mentions or reviews of lm-human-preferences. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-02-14.

GLM-130B

Posts with mentions or reviews of GLM-130B. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-04-10.

What are some alternatives?

When comparing lm-human-preferences and GLM-130B you can also consider the following projects:

PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM

ggml - Tensor library for machine learning

petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

trl - Train transformer language models with reinforcement learning.

Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.

hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.

dalle-mini - DALL·E Mini - Generate images from a text prompt

metaseq - Repo for external large-scale work

tensorrtx - Implementation of popular deep learning networks with TensorRT network definition API

glide-text2im - GLIDE: a diffusion-based text-conditional image synthesis model

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, VRAM savings, fast training, "infinite" ctx_len, and free sentence embedding.

gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"