Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Why do you think https://github.com/automorphic-ai/trex is a good alternative to safe-rlhf?