Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
Why do you think https://github.com/tatsu-lab/stanford_alpaca is a good alternative to hh-rlhf?
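For context, here is a minimal sketch of how one might compare the structure of the two datasets, assuming the Hugging Face `datasets` library and the Hub IDs `Anthropic/hh-rlhf` and `tatsu-lab/alpaca` (the Hub IDs and column names are assumptions, not stated above). hh-rlhf stores chosen/rejected preference pairs, whereas the Alpaca data stores instruction-following demonstrations, which is worth keeping in mind when treating one as an alternative to the other.

```python
from datasets import load_dataset

# Hub IDs are assumed; the preference data from the HH paper and the Alpaca instruction data
hh = load_dataset("Anthropic/hh-rlhf", split="train")
alpaca = load_dataset("tatsu-lab/alpaca", split="train")

# Compare schemas: hh-rlhf is expected to expose preference pairs,
# Alpaca is expected to expose instruction/response tuples
print(hh.column_names)      # e.g. ["chosen", "rejected"]
print(alpaca.column_names)  # e.g. ["instruction", "input", "output", "text"]
```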