instruct-eval vs lm-evaluation-harness

| | instruct-eval | lm-evaluation-harness |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 466 | 91 |
| Growth | 3.0% | - |
| Activity | 8.0 | 3.7 |
| Latest commit | 2 months ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
instruct-eval
-
Eval MMLU results against various inference methods (HF_Causal, VLLM, AutoGPTQ, AutoGPTQ-exllama)
I modified declare-lab's instruct-eval scripts to add support for VLLM and AutoGPTQ (and the new AutoGPTQ now supports exllama), and tested the MMLU results. I also added support for fastllm (which can accelerate ChatGLM2-6b). The code is here: https://github.com/declare-lab/instruct-eval . I'd like to hear about any errors in that code.
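The setup described above (one evaluation loop, several interchangeable inference backends) can be sketched roughly as below. This is an illustrative outline only, not the actual instruct-eval code: the backend classes, prompt format, and scoring here are simplified stand-ins for the real HF/VLLM/AutoGPTQ wrappers.

```python
# Hypothetical sketch: a backend-agnostic MMLU-style evaluation loop.
# Real backends (HF causal, VLLM, AutoGPTQ) would implement `complete`
# by calling their respective libraries; EchoBackend is a toy stand-in.
from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""

class EchoBackend(InferenceBackend):
    """Toy backend that always answers 'A', used to exercise the harness."""
    def complete(self, prompt: str) -> str:
        return "A"

def mmlu_prompt(question: str, choices: list[str]) -> str:
    """Format a 4-choice question in the usual 'A./B./C./D. ... Answer:' style."""
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    lines.append("Answer:")
    return "\n".join(lines)

def score(backend: InferenceBackend, items: list[tuple[str, list[str], str]]) -> float:
    """Fraction of items where the first letter of the completion matches gold."""
    correct = 0
    for question, choices, gold in items:
        answer = backend.complete(mmlu_prompt(question, choices)).strip()[:1].upper()
        correct += answer == gold
    return correct / len(items)

items = [
    ("2 + 2 = ?", ["4", "5", "6", "7"], "A"),
    ("Capital of France?", ["Berlin", "Paris", "Rome", "Madrid"], "B"),
]
print(score(EchoBackend(), items))  # 0.5: the toy backend only gets the first one right
```

Keeping the backend behind one small interface is what makes it cheap to compare HF_Causal, VLLM, and AutoGPTQ on the same benchmark data.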
-
[D] Red Pajamas Instruct 7B. Is it really that bad, or is it some ggml/quantization artifact? Vicuna-7b has no issue writing stories and even does basic text transformation, yet RP refuses to do anything most of the time. It does generate a story if you run it as a raw model, but it gets into a loop.
Well, I ran it with exactly the same parameters I used for Vicuna 7b, although I ran Vicuna with llama.cpp, while RP can only be run with ggml (I don't have a GPU). And Vicuna looped only when the temperature reached 0. Given how hard it loops, I think it is some bug in ggml. Testers claim it should be close to 7B Alpaca/Vicuna: https://github.com/declare-lab/flan-eval
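The temperature-0 observation above has a simple mechanical explanation, sketched below (this is illustrative math, not ggml code): sampling probabilities are softmax(logits / T), and as T approaches 0 the distribution collapses onto the argmax token, so decoding becomes fully deterministic and a repeated context repeats forever.

```python
# Why temperature 0 makes decoding deterministic: softmax(logits / T)
# collapses onto the argmax token as T -> 0, so identical context always
# yields the identical next token, which is what produces hard loops.
import math

def sample_probs(logits, temperature):
    if temperature == 0:  # greedy limit: all probability mass on the argmax
        best = max(range(len(logits)), key=logits.__getitem__)
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(sample_probs(logits, 1.0))  # softened distribution, sampling can vary
print(sample_probs(logits, 0.1))  # nearly all mass on token 0
print(sample_probs(logits, 0))    # [1.0, 0.0, 0.0]: fully deterministic
```

So looping at temperature 0 is expected behavior for any sampler; looping at normal temperatures, as RP does, points at the model or the runtime instead.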
- [P] The first RedPajama models are here! The 3B and 7B models are now available under Apache 2.0, including instruction-tuned and chat versions. These models aim to replicate LLaMA as closely as possible.
-
Best Instruct-Trained Alternative to Alpaca/Vicuna?
For a list of other instruction tuned models, you can check out this benchmark here: https://github.com/declare-lab/flan-eval
-
[R]Comprehensive List of Instruction Datasets for Training LLM Models (GPT-4 & Beyond)
Great resource! I’ve recently also benchmarked many of the popular instruction models here: https://github.com/declare-lab/flan-eval
-
Stability AI Launches the First of Its StableLM Suite of Language Models
I really dislike the approach some companies have taken to announcing new models: they don't mention evaluation results or the model's performance, but instead talk about how "transparent", "accessible" and "supportive" these models are.
Anyway, I have benchmarked stablelm-base-alpha-3b (the open-source version, not the fine-tuned one, which is under an NC license) using the MMLU benchmark, and the results are rather underwhelming compared to other open-source models:
* stablelm-base-alpha-3b (3B params): 25.6% average accuracy
* flan-t5-xl (3B params): 49.3% average accuracy
* flan-t5-small (80M params): 29.4% average accuracy
MMLU is just one benchmark, but based on the blog post, I don't think it will yield much better results in others. I'll leave links to the MMLU results of other proprietary[0] and open-access[1] models (results may vary by ±2% depending on the parameters used during inference).
[0]: https://paperswithcode.com/sota/multi-task-language-understa...
[1]: https://github.com/declare-lab/flan-eval/blob/main/mmlu.py#L...
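For reference, an "average accuracy" figure like the 25.6% / 49.3% numbers above is typically computed as the unweighted mean of per-subject accuracies (a macro average over MMLU's 57 subjects), though some implementations weight by question count instead. A minimal sketch, with made-up subject scores for illustration:

```python
# Hedged sketch: macro-averaged MMLU accuracy, i.e. the unweighted mean of
# per-subject accuracies. The subjects and numbers below are invented.
def macro_average(per_subject_acc: dict[str, float]) -> float:
    return sum(per_subject_acc.values()) / len(per_subject_acc)

acc = {
    "abstract_algebra": 0.22,
    "anatomy": 0.31,
    "astronomy": 0.28,
}
print(round(macro_average(acc), 4))  # 0.27
```

Because the average is taken over subjects rather than questions, two harnesses can report slightly different numbers for the same model, which is one source of the ±2% spread mentioned above.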
lm-evaluation-harness
-
Stability AI Launches the First of Its StableLM Suite of Language Models
Yeah, although looks like it currently has some issues with coqa: https://github.com/EleutherAI/lm-evaluation-harness/issues/2...
There's also the bigscience fork, but I ran into even more problems (although I didn't try too hard) https://github.com/bigscience-workshop/lm-evaluation-harness
And there's https://github.com/EleutherAI/lm-eval2/ (not sure if it's just starting over w/ a new repo or what?) but it has limited tests available
What are some alternatives?
StableLM - StableLM: Stability AI Language Models
awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
flash-attention - Fast and memory-efficient exact attention
geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
txtinstruct - 📚 Datasets and models for instruction-tuning
Emu - Emu Series: Generative Multimodal Models from BAAI
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
lm-eval2
lm-evaluation-harness - A framework for few-shot evaluation of language models.