Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Why do you think https://github.com/huggingface/alignment-handbook is a good alternative to safe-rlhf?