| | instruct-eval | Emu |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 471 | 1,510 |
| Growth | 4.0% | 3.4% |
| Activity | 8.0 | 7.4 |
| Latest commit | 2 months ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
instruct-eval
- Eval MMLU results against various inference methods (HF_Causal, VLLM, AutoGPTQ, AutoGPTQ-exllama)
I modified declare-lab's instruct-eval scripts, adding support for vLLM and AutoGPTQ (the new AutoGPTQ now supports ExLlama), and tested the MMLU results. I also added support for fastllm (which can accelerate ChatGLM2-6B). The code is here: https://github.com/declare-lab/instruct-eval. I'd like to hear about any errors in the code.
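As a rough illustration of the kind of evaluation involved, here is a minimal sketch of scoring MMLU-style multiple-choice questions against a vLLM backend; the prompt template, `score_mmlu` helper, and model path are illustrative assumptions, not instruct-eval's actual code:

```python
# Minimal sketch: greedy one-token answers to MMLU-style questions via vLLM.
# The prompt template and helper below are illustrative, not instruct-eval's code.
from vllm import LLM, SamplingParams

def build_prompt(question, choices):
    # Standard MMLU layout: question, lettered choices, then an "Answer:" cue.
    lines = [question] + [f"{l}. {c}" for l, c in zip("ABCD", choices)]
    return "\n".join(lines) + "\nAnswer:"

def score_mmlu(model_path, examples):
    # examples: list of (question, choices, gold_letter) tuples
    llm = LLM(model=model_path)
    params = SamplingParams(temperature=0, max_tokens=1)  # greedy, single token
    prompts = [build_prompt(q, ch) for q, ch, _ in examples]
    outputs = llm.generate(prompts, params)
    correct = sum(
        out.outputs[0].text.strip().startswith(gold)
        for out, (_, _, gold) in zip(outputs, examples)
    )
    return correct / len(examples)

if __name__ == "__main__":
    examples = [("What is 2 + 2?", ["3", "4", "5", "6"], "B")]
    print(score_mmlu("your-model-path", examples))  # placeholder model path
```

Greedy decoding with `max_tokens=1` reduces the comparison to a single answer letter, which is the cheapest way to score a multiple-choice benchmark.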
- [D] Red Pajamas Instruct 7B. Is it really that bad, or is it some ggml/quantization artifact? Vicuna-7B has no issue writing stories and even does basic text transformation, yet RP refuses to do anything most of the time. It does generate a story if you run it as a raw model, but it gets into a loop.
Well, I ran it with exactly the same parameters I used for Vicuna-7B, although I ran Vicuna with llama.cpp, while RP can only be run with ggml (I don't have a GPU). And Vicuna looped only when the temperature reached 0. Given how hard it loops, I think it is some bug in ggml. Testers claim it should be close to 7B Alpaca/Vicuna: https://github.com/declare-lab/flan-eval
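For anyone trying to reproduce the looping behaviour, here is a minimal sketch of running a quantized checkpoint with explicit sampling parameters, using llama-cpp-python as a stand-in backend (RedPajama's actual ggml runner differs, and the model path is a placeholder); a nonzero temperature plus a repeat penalty is usually what keeps greedy decoding out of loops:

```python
# Minimal sketch: run a quantized checkpoint with explicit sampling parameters.
# llama-cpp-python is used as an example backend; the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/redpajama-instruct-7b.ggml.bin")

out = llm(
    "Write a short story about a lighthouse keeper.",
    max_tokens=256,
    temperature=0.7,     # temperature 0 means greedy decoding, which is prone to loops
    top_p=0.95,
    repeat_penalty=1.1,  # penalizes recently generated tokens to break repetition
)
print(out["choices"][0]["text"])
```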
- [P] The first RedPajama models are here! The 3B and 7B models are now available under Apache 2.0, including instruction-tuned and chat versions. These models aim to replicate LLaMA as closely as possible.
- Best Instruct-Trained Alternative to Alpaca/Vicuna?
For a list of other instruction-tuned models, you can check out this benchmark: https://github.com/declare-lab/flan-eval
- [R] Comprehensive List of Instruction Datasets for Training LLM Models (GPT-4 & Beyond)
Great resource! I've recently also benchmarked many of the popular instruction-tuned models here: https://github.com/declare-lab/flan-eval
- Stability AI Launches the First of Its StableLM Suite of Language Models
I really dislike the approach some companies have taken to announcing new models: they don't mention evaluation results or the performance of the model, but instead talk about how "transparent", "accessible", and "supportive" these models are.
Anyway, I have benchmarked stablelm-base-alpha-3b (the open-source version, not the fine-tuned one, which is under an NC license) using the MMLU benchmark, and the results are rather underwhelming compared to other open-source models:
* stablelm-base-alpha-3b (3B params): 25.6% average accuracy
* flan-t5-xl (3B params): 49.3% average accuracy
* flan-t5-small (80M params): 29.4% average accuracy
MMLU is just one benchmark, but based on the blog post, I don't think it will yield much better results in others. I'll leave links to the MMLU results of other proprietary[0] and open-access[1] models (results may vary by ±2% depending on the parameters used during inference).
[0]: https://paperswithcode.com/sota/multi-task-language-understa...
[1]: https://github.com/declare-lab/flan-eval/blob/main/mmlu.py#L...
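For context, here is a minimal sketch of the likelihood-based scoring most MMLU harnesses use: compare the model's next-token probabilities for the four answer letters and pick the highest. The prompt format below is an assumption, not flan-eval's exact code:

```python
# Minimal sketch of likelihood-based MMLU scoring with plain transformers:
# pick the answer letter whose token gets the highest next-token probability.
# The prompt format is an assumption, not flan-eval's exact code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-3b")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-base-alpha-3b")
model.eval()

def predict(question, choices):
    prompt = question + "\n" + "\n".join(
        f"{l}. {c}" for l, c in zip("ABCD", choices)
    ) + "\nAnswer:"
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # next-token distribution
    letter_ids = [tok(f" {l}").input_ids[-1] for l in "ABCD"]
    return "ABCD"[int(torch.argmax(logits[letter_ids]))]

# Accuracy is the mean over questions, macro-averaged across MMLU's subjects.
```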
Emu
- Show HN: Emu2 – A Gemini-like open-source 37B Multimodal Model
I'm excited to introduce Emu2, the latest generative multimodal model developed by the Beijing Academy of Artificial Intelligence (BAAI). Emu2 is an open-source initiative that reflects BAAI's commitment to fostering open, secure, and responsible AI research. It's designed to enhance AI's proficiency in handling tasks across various modalities with minimal examples and straightforward instructions.
Emu2 has demonstrated superior performance over other large-scale models like Flamingo-80B in few-shot multimodal understanding tasks. It serves as a versatile base model for developers, providing a flexible platform for crafting specialized multimodal applications.
Key features of Emu2 include:
- A more streamlined modeling framework than its predecessor, Emu.
- A decoder capable of reconstructing images from the encoder's semantic space.
- An expansion to 37 billion parameters, boosting both capabilities and generalization.
BAAI has also released fine-tuned versions, Emu2-Chat for visual understanding and Emu2-Gen for visual generation, which stand as some of the most powerful open-source models available today.
Here are the resources for those interested in exploring or contributing to Emu2:
- Project: https://baaivision.github.io/emu2/
- Model: https://huggingface.co/BAAI/Emu2
- Code: https://github.com/baaivision/Emu/tree/main/Emu2
- Demo: https://huggingface.co/spaces/BAAI/Emu2
- Paper: https://arxiv.org/abs/2312.13286
We're eager to see how the HN community engages with Emu2 and we welcome your feedback to help us improve. Let's collaborate to push the boundaries of multimodal AI!
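If you want to try the weights directly, here is a minimal loading sketch that assumes the checkpoint follows the usual Hugging Face remote-code pattern (exact arguments may differ; see the model card for the full generation API):

```python
# Minimal sketch of loading Emu2, assuming the standard Hugging Face
# remote-code pattern; the exact arguments may differ from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/Emu2")
model = AutoModelForCausalLM.from_pretrained(
    "BAAI/Emu2",
    torch_dtype=torch.bfloat16,  # 37B params: bf16 roughly halves memory
    trust_remote_code=True,      # Emu2 ships custom modeling code
    device_map="auto",           # shard across available GPUs
)
```

The fine-tuned variants (Emu2-Chat for understanding, Emu2-Gen for generation) should load the same way from their respective Hugging Face repos.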
What are some alternatives?
lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.
Painter - Painter & SegGPT Series: Vision Foundation Models from BAAI
StableLM - StableLM: Stability AI Language Models
open_flamingo - An open-source framework for training large multimodal models.
awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
ColossalAI - Making large AI models cheaper, faster and more accessible
geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
txtinstruct - 📚 Datasets and models for instruction-tuning
alpaca_lora_4bit
safetensors - Simple, safe way to store and distribute tensors