Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Why do you think that https://github.com/opening-up-chatgpt/opening-up-chatgpt.github.io is a good alternative to safe-rlhf?