Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Why do you think that https://github.com/jerry1993-tech/Cornucopia-LLaMA-Fin-Chinese is a good alternative to safe-rlhf?