Examining how large language models (LLMs) perform across various synthetic regression tasks when given (input, output) examples in their context, without any parameter updates.
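The setup above can be sketched concretely: synthetic (input, output) pairs from a hidden function are serialized into the model's context, followed by a query input for the model to complete. The prompt template and the hidden linear function below are illustrative assumptions, not the exact format used by any specific paper or repository.

```python
import random

def make_regression_prompt(n_examples: int = 5, seed: int = 0) -> str:
    """Build an in-context regression prompt from synthetic (input, output)
    pairs, plus one trailing query input for the model to complete.

    The template and the hidden function y = 2x - 1 are assumed examples.
    """
    rng = random.Random(seed)
    a, b = 2.0, -1.0  # hidden linear function (assumption for illustration)
    lines = []
    for _ in range(n_examples):
        x = round(rng.uniform(-5, 5), 2)
        y = round(a * x + b, 2)
        lines.append(f"Input: {x}\nOutput: {y}")
    # Query point: the model is expected to predict the missing output,
    # with no parameter update -- purely from the in-context examples.
    x_query = round(rng.uniform(-5, 5), 2)
    lines.append(f"Input: {x_query}\nOutput:")
    return "\n".join(lines)

prompt = make_regression_prompt()
print(prompt)
```

The resulting string would then be sent to an LLM as-is; performance is measured by comparing the model's completion against the true function value at the query point.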
Why do you think that https://github.com/PKU-Alignment/safe-rlhf is a good alternative to llm4regression?