Examining how large language models (LLMs) perform across various synthetic regression tasks when given (input, output) examples in their context, without any parameter updates.
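The setup can be sketched as follows: sample (input, output) pairs from a hidden function, format them as few-shot examples in a prompt, and ask the model to complete the output for a new input. This is a minimal illustration, assuming a simple linear target and a plain `Input:`/`Output:` prompt format; the function names and template are illustrative, not the exact setup used by llm4regression.

```python
import random

def make_synthetic_task(n_examples, seed=0):
    """Sample (x, y) pairs from a hidden linear function y = 3*x + 2 (illustrative choice)."""
    rng = random.Random(seed)
    xs = [round(rng.uniform(-10, 10), 2) for _ in range(n_examples)]
    return [(x, round(3 * x + 2, 2)) for x in xs]

def build_prompt(examples, query_x):
    """Format in-context examples plus a query; the LLM is expected to complete the final Output."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query_x}\nOutput:")
    return "\n".join(lines)

examples = make_synthetic_task(5)
prompt = build_prompt(examples, 4.0)
print(prompt)
```

The resulting prompt would then be sent to an LLM without any fine-tuning; the quality of the predicted number for the query input measures the model's in-context regression ability.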
Why do you think https://github.com/NVIDIA/GenerativeAIExamples is a good alternative to llm4regression?