Suggest an alternative to safe-rlhf

Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

Why do you think that https://github.com/jerry1993-tech/Cornucopia-LLaMA-Fin-Chinese is a good alternative to safe-rlhf?
