instruct-eval vs geov

| | instruct-eval | geov |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 471 | 122 |
| Growth | 4.0% | 0.0% |
| Activity | 8.0 | 5.0 |
| Latest commit | 2 months ago | about 1 year ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
instruct-eval
-
Eval MMLU results against various inference methods (HF_Causal, VLLM, AutoGPTQ, AutoGPTQ-exllama)
I modified declare-lab's instruct-eval scripts to add support for VLLM and AutoGPTQ (and the new AutoGPTQ supports exllama now), and tested the MMLU results. I also added support for fastllm (which can accelerate ChatGLM2-6B). The code is here: https://github.com/declare-lab/instruct-eval . I'd like to hear about any errors in the code.
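As a rough illustration of this kind of backend swap, here is a minimal sketch of greedy letter-answer MMLU-style scoring with VLLM; the model name, prompt template, and sample question are hypothetical, not the modified scripts themselves.

```python
# Minimal sketch: greedy single-letter answers for MMLU-style questions via vLLM.
# Model name, prompt template, and the sample question are hypothetical.
from vllm import LLM, SamplingParams

PROMPT = "{question}\nA. {a}\nB. {b}\nC. {c}\nD. {d}\nAnswer:"

questions = [
    {"question": "What is the capital of France?",
     "a": "Berlin", "b": "Paris", "c": "Rome", "d": "Madrid", "answer": "B"},
]

llm = LLM(model="meta-llama/Llama-2-7b-hf")             # any HF causal model vLLM supports
params = SamplingParams(temperature=0.0, max_tokens=1)  # greedy decoding, one token

outputs = llm.generate([PROMPT.format(**q) for q in questions], params)
correct = sum(
    out.outputs[0].text.strip().upper().startswith(q["answer"])
    for q, out in zip(questions, outputs)
)
print(f"accuracy: {correct}/{len(questions)}")
```

The same scoring loop can be pointed at a different backend (HF causal, AutoGPTQ, fastllm) by swapping out the generation call, which is essentially what the modified scripts do.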
-
[D] Red Pajamas Instruct 7B. Is it really that bad, or is it some ggml/quantization artifact? Vicuna-7B has no issue writing stories and even does basic text transformation. Yet RP refuses to do anything most of the time. It does generate a story if you run it as a raw model, but gets into a loop.
Well, I ran it with exactly the same parameters I used for Vicuna-7B, although I ran Vicuna with llama.cpp, while RP can only be run with ggml (I don't have a GPU). And Vicuna looped only when the temperature reached 0. Given how hard it loops, I think it is some bug in ggml. Testers claim it should be close to 7B Alpaca/Vicuna: https://github.com/declare-lab/flan-eval
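One quick way to separate a sampling artifact from a model problem is to sweep the temperature while holding everything else fixed. A minimal sketch with llama-cpp-python, for a model llama.cpp can actually load (like Vicuna; the model path here is hypothetical):

```python
# Sketch: sweep temperature to see whether looping is a sampling artifact.
# The model path is hypothetical; use any ggml model llama.cpp supports.
from llama_cpp import Llama

llm = Llama(model_path="./vicuna-7b-q4_0.bin")
prompt = "Write a short story about a lighthouse keeper."

for temp in (0.0, 0.7, 1.0):
    out = llm(prompt, max_tokens=128, temperature=temp, repeat_penalty=1.1)
    print(f"--- temp={temp} ---")
    print(out["choices"][0]["text"])
```

If the output only degenerates into repetition at temperature 0, the looping is likely a decoding/sampling issue rather than a broken quantization.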
- [P] The first RedPajama models are here! The 3B and 7B models are now available under Apache 2.0, including instruction-tuned and chat versions. These models aim to replicate LLaMA as closely as possible.
-
Best Instruct-Trained Alternative to Alpaca/Vicuna?
For a list of other instruction tuned models, you can check out this benchmark here: https://github.com/declare-lab/flan-eval
-
[R] Comprehensive List of Instruction Datasets for Training LLM Models (GPT-4 & Beyond)
Great resource! I’ve recently also benchmarked many of the popular instruction models here: https://github.com/declare-lab/flan-eval
-
Stability AI Launches the First of Its StableLM Suite of Language Models
I really dislike this approach to announcing new models that some companies have taken: they don't mention evaluation results or the performance of the model, but instead talk about how "transparent", "accessible" and "supportive" these models are.
Anyway, I have benchmarked stablelm-base-alpha-3b (the open-source version, not the fine-tuned one, which is under an NC license) using the MMLU benchmark, and the results are rather underwhelming compared to other open-source models:
* stablelm-base-alpha-3b (3B params): 25.6% average accuracy
* flan-t5-xl (3B params): 49.3% average accuracy
* flan-t5-small (80M params): 29.4% average accuracy
MMLU is just one benchmark, but based on the blog post, I don't think it will yield much better results in others. I'll leave links to the MMLU results of other proprietary[0] and open-access[1] models (results may vary by ±2% depending on the parameters used during inference). A rough sketch of this style of MMLU scoring follows the links below.
[0]: https://paperswithcode.com/sota/multi-task-language-understa...
[1]: https://github.com/declare-lab/flan-eval/blob/main/mmlu.py#L...
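For context, a common way to produce MMLU numbers like the ones above is to compare the model's next-token logits for the four answer letters. A simplified sketch with transformers (the prompt template here is an assumption, not the exact flan-eval code):

```python
# Sketch: score one MMLU-style question by comparing the logits the model
# assigns to the answer letters at the next-token position. The prompt
# format is an assumption; flan-eval's mmlu.py uses its own template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "stabilityai/stablelm-base-alpha-3b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

prompt = ("The capital of France is:\n"
          "A. Berlin\nB. Paris\nC. Rome\nD. Madrid\nAnswer:")
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    next_logits = model(**inputs).logits[0, -1]  # logits for the next token

choice_ids = [tok(" " + c, add_special_tokens=False).input_ids[-1] for c in "ABCD"]
scores = torch.stack([next_logits[i] for i in choice_ids])
print("prediction:", "ABCD"[scores.argmax().item()])
# Average accuracy is the fraction of questions where the prediction matches.
```

Restricting the comparison to the four letter tokens (rather than free generation) is what keeps results relatively stable, though prompt formatting still accounts for swings of a couple of percent.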
geov
-
Stability AI Launches the First of Its StableLM Suite of Language Models
Looks like my edit window closed, but my results ended up being very low, so there must be something wrong (I've reached out to StabilityAI just in case). It does, however, seem to roughly match another user's 3B testing: https://twitter.com/abacaj/status/1648881680835387392
The current scores I have place it between gpt2_774M_q8 and pythia_deduped_410M (yikes!). Based on its training and specs you'd expect it to outperform Pythia 6.9B at least... For those looking to replicate/debug, this is running on a HEAD checkout of https://github.com/EleutherAI/lm-evaluation-harness (releases don't support hf-causal); a rough invocation sketch follows below.
Note: another LLM currently in training, GeoV 9B, already far outperforms this model at just 80B tokens trained: https://github.com/geov-ai/geov/blob/master/results.080B.md
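For those replicating, the harness can also be driven from Python. A rough sketch, assuming a 2023-era harness where the hf-causal model type and this simple_evaluate signature exist; task names and model args are assumptions:

```python
# Sketch: running EleutherAI's lm-evaluation-harness from Python.
# Task names and model args are assumptions; the API has changed across
# harness versions, so check against the version you have checked out.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=stabilityai/stablelm-base-alpha-3b",
    tasks=["hellaswag", "lambada_openai"],
    num_fewshot=0,
)
print(results["results"])
```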
- Ask HN: Open source LLM for commercial use?
What are some alternatives?
lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.
txtinstruct - 📚 Datasets and models for instruction-tuning
StableLM - StableLM: Stability AI Language Models
awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
pythia - The hub for EleutherAI's work on interpretability and learning dynamics
Emu - Emu Series: Generative Multimodal Models from BAAI
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
llama.cpp - LLM inference in C/C++