open-instruct reviews and mentions

Posts with mentions or reviews of open-instruct. We have used some of these posts to build our list of alternatives and similar projects.
  • Exploring Instruction-Tuning Language Models: Meet Tülu, a Suite of Fine-Tuned Large Language Models (LLMs)
    1 project | /r/machinelearningnews | 13 Jun 2023
    Github: https://github.com/allenai/open-instruct
  • New instruction tuned LLaMA: Tulu 7/13/30/65b (Exploring the State of Instruction Tuning on Open Resources)
    1 project | /r/LocalLLaMA | 10 Jun 2023
    In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. Despite recent claims that open models can be on par with state-of-the-art proprietary models, these claims are often accompanied by limited evaluation, making it difficult to compare models across the board and determine the utility of various resources. We provide a large set of instruction-tuned models from 6.7B to 65B parameters in size, trained on 12 instruction datasets ranging from manually curated (e.g., OpenAssistant) to synthetic and distilled (e.g., Alpaca) and systematically evaluate them on their factual knowledge, reasoning, multilinguality, coding, and open-ended instruction following abilities through a collection of automatic, model-based, and human-based metrics. We further introduce Tülu, our best performing instruction-tuned model suite finetuned on a combination of high-quality open resources. Our experiments show that different instruction-tuning datasets can uncover or enhance specific skills, while no single dataset (or combination) provides the best performance across all evaluations. Interestingly, we find that model and human preference-based evaluations fail to reflect differences in model capabilities exposed by benchmark-based evaluations, suggesting the need for the type of systemic evaluation performed in this work. Our evaluations show that the best model in any given evaluation reaches on average 83% of ChatGPT performance, and 68% of GPT-4 performance, suggesting that further investment in building better base models and instruction-tuning data is required to close the gap. We release our instruction-tuned models, including a fully finetuned 65B Tülu, along with our code, data, and evaluation framework at https://github.com/allenai/open-instruct to facilitate future research.
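
    For readers unfamiliar with how the supervised instruction tuning described in the abstract is typically implemented, here is a minimal, illustrative sketch assuming the Hugging Face transformers API and an Alpaca-style prompt template. It is not the open-instruct training code (see the repository linked above for that); the gpt2 checkpoint and the example prompt are small stand-ins for the LLaMA models and instruction datasets used in the paper.

```python
# Illustrative sketch of supervised instruction tuning with prompt-token masking.
# Assumes the Hugging Face transformers API; not the open-instruct training code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper fine-tunes LLaMA models up to 65B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical Alpaca-style template for one training example.
prompt = "### Instruction:\nName the capital of France.\n\n### Response:\n"
response = "Paris."

# Tokenize the prompt alone and the full sequence (prompt + response).
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids

# Mask prompt tokens with -100 so the loss is computed only on the response,
# the standard trick for instruction tuning a causal LM. (This slice assumes
# the prompt's tokenization is a prefix of the joint tokenization, which holds
# for this template with the GPT-2 tokenizer.)
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

outputs = model(input_ids=full_ids, labels=labels)
outputs.loss.backward()  # an optimizer step over batches would follow in a real loop
```

    In practice this loss masking is applied over batches of many instruction/response pairs; the paper's contribution is comparing which of the 12 open datasets, fed through this kind of recipe, produce which capabilities.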

Stats

Basic open-instruct repo stats
  • Mentions: 2
  • Stars: 1,027
  • Activity: 9.2
  • Last commit: 6 days ago

allenai/open-instruct is an open source project licensed under the Apache License 2.0, which is an OSI-approved license.

The primary programming language of open-instruct is Python.

