flan-alpaca vs instruct-eval

| | flan-alpaca | instruct-eval |
|---|---|---|
| Mentions | 5 | 6 |
| Stars | 337 | 471 |
| Growth | -0.3% | 4.0% |
| Activity | 5.7 | 8.0 |
| Last commit | 11 months ago | 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts mentioning flan-alpaca
-
Is it feasible to develop multiple specialised language models that are small in size and expertise-specific, which can be merged to achieve comparable results to those obtained from a single large language model?
If you have enough task- or domain-specific training data, the model size becomes less important. For example, you can take a smaller instruction-tuned model like Flan-T5 and fine-tune it for your specific case: https://github.com/declare-lab/flan-alpaca
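As a rough illustration of that workflow, here is a minimal fine-tuning sketch using Hugging Face transformers. The dataset file, field names, and hyperparameters are placeholders for illustration, not something taken from the linked repo:

```python
# Hypothetical sketch: fine-tuning flan-t5-base on your own instruction data.
# Dataset path, field names, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Expects a JSON file of {"instruction": ..., "output": ...} records.
dataset = load_dataset("json", data_files="my_task_data.json")["train"]

def preprocess(example):
    # Tokenize the instruction as input and the expected output as labels.
    inputs = tokenizer(example["instruction"], truncation=True, max_length=512)
    labels = tokenizer(text_target=example["output"], truncation=True, max_length=256)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-finetuned",
        per_device_train_batch_size=8,
        learning_rate=3e-4,
        num_train_epochs=3,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```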
-
Best Instruct-Trained Alternative to Alpaca/Vicuna?
Hi, you can try Flan-Alpaca here which does not have such restrictions: https://github.com/declare-lab/flan-alpaca
-
Cerebras-GPT: A Family of Open, Compute-Efficient, Large Language Models
I've been following open-source LLMs for a while, and at first glance this doesn't seem too powerful compared to other open models. Flan-Alpaca[0] is licensed under Apache 2.0, and it seems to perform much better. Although I'm not sure about the legality of that licensing, since it's basically Flan-T5 fine-tuned on the Alpaca dataset (which is under a non-commercial license).
Nonetheless, it's exciting to see all these open models popping up, and I hope that an LLM equivalent of Stable Diffusion comes sooner rather than later.
[0]: https://github.com/declare-lab/flan-alpaca
-
[D] What is the best open source chatbot AI to do transfer learning on?
Someone's already taking care of that - Flan-Alpaca
-
[P] ChatLLaMA - A ChatGPT style chatbot for Facebook's LLaMA
I think this might be exactly what you're looking for https://github.com/declare-lab/flan-alpaca
Posts mentioning instruct-eval
-
Evaluating MMLU results across various inference methods (HF Causal, vLLM, AutoGPTQ, AutoGPTQ-ExLlama)
I modified declare-lab's instruct-eval scripts, added support for vLLM and AutoGPTQ (the new AutoGPTQ supports ExLlama now), and tested the MMLU results. I also added support for fastllm (which can accelerate ChatGLM2-6B). The code is here: https://github.com/declare-lab/instruct-eval. I'd like to hear about any errors in the code.
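For context, the kind of backend adapter the commenter describes might look like the sketch below: wrapping vLLM behind a simple generate() interface so an MMLU-style harness can swap inference backends. The class and method names are illustrative, not from the instruct-eval codebase:

```python
# Hypothetical adapter: vLLM behind a minimal generate() interface.
# Names are illustrative; only the vLLM calls themselves are real API.
from vllm import LLM, SamplingParams

class VLLMModel:
    def __init__(self, model_path: str):
        self.llm = LLM(model=model_path)
        # Greedy decoding: MMLU scoring usually wants deterministic output.
        self.params = SamplingParams(temperature=0.0, max_tokens=8)

    def generate(self, prompts: list[str]) -> list[str]:
        outputs = self.llm.generate(prompts, self.params)
        return [o.outputs[0].text.strip() for o in outputs]

# Usage: batch all MMLU prompts at once; vLLM's continuous batching is
# where the speedup over plain HF generation comes from.
model = VLLMModel("meta-llama/Llama-2-7b-hf")  # placeholder model path
answers = model.generate(["Question: ...\nAnswer:"])
```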
-
[D] RedPajama Instruct 7B. Is it really that bad, or is it some ggml/quantization artifact? Vicuna-7B has no issue writing stories and even does basic text transformations, yet RP refuses to do anything most of the time. It does generate a story if you run it as a raw model, but gets into a loop.
Well, I ran it with exactly the same parameters as Vicuna-7B, although I ran Vicuna with llama.cpp, while RP can only be run with ggml (I don't have a GPU). And Vicuna looped only when the temperature reached 0. Given how hard it loops, I think it is some bug in ggml. Testers claim it should be close to 7B Alpaca/Vicuna: https://github.com/declare-lab/flan-eval
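One common mitigation for the kind of greedy-decoding loop described above is a repetition penalty. A sketch with llama-cpp-python, purely for illustration (this is not the setup the commenter ran, and the model path is a placeholder):

```python
# Illustrative only: probing for degenerate loops with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="models/redpajama-7b-instruct.ggml.bin")  # placeholder

# Temperature 0 (greedy decoding) is the setting most prone to loops;
# repeat_penalty > 1.0 discounts recently generated tokens to break them.
out = llm(
    "Write a short story about a lighthouse keeper.\n",
    max_tokens=256,
    temperature=0.0,
    repeat_penalty=1.2,
)
print(out["choices"][0]["text"])
```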
-
[P] The first RedPajama models are here! The 3B and 7B models are now available under Apache 2.0, including instruction-tuned and chat versions. These models aim to replicate LLaMA as closely as possible.
-
Best Instruct-Trained Alternative to Alpaca/Vicuna?
For a list of other instruction tuned models, you can check out this benchmark here: https://github.com/declare-lab/flan-eval
-
[R]Comprehensive List of Instruction Datasets for Training LLM Models (GPT-4 & Beyond)
Great resource! I’ve recently also benchmarked many of the popular instruction models here: https://github.com/declare-lab/flan-eval
-
Stability AI Launches the First of Its StableLM Suite of Language Models
I really dislike the approach to announcing new models that some companies have taken: they don't mention evaluation results or the performance of the model, but instead talk about how "transparent", "accessible" and "supportive" these models are.
Anyway, I have benchmarked stablelm-base-alpha-3b (the open-source version, not the fine-tuned one, which is under an NC license) using the MMLU benchmark, and the results are rather underwhelming compared to other open-source models:
* stablelm-base-alpha-3b (3B params): 25.6% average accuracy
* flan-t5-xl (3B params): 49.3% average accuracy
* flan-t5-small (80M params): 29.4% average accuracy
MMLU is just one benchmark, but based on the blog post, I don't think it will yield much better results in others. I'll leave links to the MMLU results of other proprietary[0] and open-access[1] models (results may vary by ±2% depending on the parameters used during inference).
[0]: https://paperswithcode.com/sota/multi-task-language-understa...
[1]: https://github.com/declare-lab/flan-eval/blob/main/mmlu.py#L...
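For readers unfamiliar with how these accuracy numbers are produced: MMLU is multiple-choice, and a common scoring approach is to compare the model's next-token logits for the four answer letters. A simplified sketch follows; the prompt format is abbreviated, the model is a placeholder, and the real harness adds few-shot examples per subject:

```python
# Simplified sketch of MMLU-style scoring: pick the answer letter whose
# next-token logit is highest. Prompt format and model are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def score_question(question: str, choices: list[str]) -> str:
    prompt = question + "\n" + "\n".join(
        f"{letter}. {text}" for letter, text in zip("ABCD", choices)
    ) + "\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    # Compare the logits of the four answer letters (with a leading space,
    # since that is how they tokenize after "Answer:").
    letter_ids = [
        tokenizer.encode(f" {l}", add_special_tokens=False)[0] for l in "ABCD"
    ]
    best = max(range(4), key=lambda i: logits[letter_ids[i]].item())
    return "ABCD"[best]

print(score_question("What is 2 + 2?", ["3", "4", "5", "6"]))
```

Accuracy is then simply the fraction of questions where the predicted letter matches the labeled answer, averaged across the benchmark's subjects.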
What are some alternatives?
alpaca-electron - The simplest way to run Alpaca (and other LLaMA-based local LLMs) on your own computer
lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.
agents - An Open-source Framework for Autonomous Language Agents
StableLM - StableLM: Stability AI Language Models