LMFlow VS lm-evaluation-harness

Compare LMFlow vs lm-evaluation-harness and see how they differ.

LMFlow

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All. (by OptimalScale)

lm-evaluation-harness

A framework for few-shot evaluation of language models. (by EleutherAI)
             LMFlow              lm-evaluation-harness
Mentions     10                  34
Stars        8,042               5,151
Growth       3.5%                11.3%
Activity     9.6                 9.9
Last commit  4 days ago          7 days ago
Language     Python              Python
License      Apache License 2.0  MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

LMFlow

Posts with mentions or reviews of LMFlow. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-03.
  • Your weekly machine learning digest
    2 projects | /r/learnmachinelearning | 3 Jul 2023
  • Any guide/intro to fine-tuning anywhere?
    5 projects | /r/LocalLLaMA | 28 Jun 2023
    You might want to have a look at LMFlow.
  • Robin V2 Launches: Achieves Unparalleled Performance on OpenLLM!
    2 projects | /r/machinelearningnews | 15 Jun 2023
  • [D] Have you tried fine-tuning an open source LLM?
    6 projects | /r/MachineLearning | 13 May 2023
    I'd like to recommend LMFlow (https://github.com/OptimalScale/LMFlow), a fast and extensible toolkit for finetuning and inference of large foundation models.
  • [R] DetGPT: Detect What You Need via Reasoning
    2 projects | /r/MachineLearning | 12 May 2023
    The "reasoning-based object detection" is a challenging problem because the detector needs to understand and reason about the user's coarse-grained/abstract instructions and analyze the current visual information to locate the target object accurately. In this direction, researchers from the Hong Kong University of Science and Technology and the University of Hong Kong have conducted some preliminary explorations. Specifically, they use a pre-trained visual encoder (BLIP-2) to extract visual features from images and align the visual features to the text space using an alignment function. They use a large-scale language model (Robin/Vicuna) to understand the user's question, combined with the visual information they see, to reason about the objects that users are truly interested in. Then, they provide the object names to the pre-trained detector (Grounding-DINO) for specific location prediction. In this way, the model can analyze the image based on any user instructions and accurately predict the location of the object of interest to the user. It is worth noting that the difficulty here mainly lies in the fact that the model needs to achieve task-specific output formats for different specific tasks as much as possible without damaging the model's original abilities. To guide the language model to follow specific patterns and generate outputs that conform to the object detection format, the research team used ChatGPT to generate cross-modal instruction data to fine-tune the model. Specifically, based on 5000 coco images, they used ChatGPT to create a 30,000 cross-modal image-text fine-tuning dataset. To improve the efficiency of training, they fixed other model parameters and only learned cross-modal linear mapping. Experimental results show that even if only the linear layer is fine-tuned, the language model can understand fine-grained image features and follow specific patterns to perform inference-based image detection tasks, showing excellent performance. This research topic has great potential. Based on this technology, the field of home robots will further shine: people in homes can use abstract or coarse-grained voice instructions to make robots understand, recognize, and locate the objects they need, and provide relevant services. In the field of industrial robots, this technology will bring endless vitality: industrial robots can cooperate more naturally with human workers, accurately understand their instructions and needs, and achieve intelligent decision-making and operations. On the production line, human workers can use coarse-grained voice instructions or text input to allow robots to automatically understand, recognize, and locate the items that need to be processed, thereby improving production efficiency and quality. With object detection models that come with reasoning capabilities, we can develop more intelligent, natural, and efficient robots to provide more convenient, efficient, and humane services to humans. This is a field with broad prospects and deserves more attention and further exploration by more researchers. DetGPT supports multiple language models and has been validated based on two language models, Robin-13B and Vicuna-13B. The Robin series language model is a dialogue model trained by the LMFlow team ( https://github.com/OptimalScale/LMFlow) at the Hong Kong University of Science and Technology, achieving results competitive to Vicuna on multiple language ability evaluation benchmarks (model download: https://github.com/OptimalScale/LMFlow#model-zoo). 
Previously, the LMFlow team trained a vertical GPT model using a consumer-grade 3090 graphics card in just 5 hours. Today, this team, in collaboration with the NLP Group at the University of Hong Kong, has brought us a multimodal surprise. Welcome to try our demo and open-source code! Online demo: https://detgpt.github.io/ Open-source code: https://github.com/OptimalScale/DetGPT
  • Leaderboard for LLMs? [D]
    1 project | /r/MachineLearning | 9 May 2023
    Hi, LMFlow Benchmark (https://github.com/OptimalScale/LMFlow) evaluates 31 open-source LLMs with an automatic metric: negative log-likelihood (a minimal sketch of this metric appears after this list).
  • [R] LMFlow Benchmark: An Automatic Evaluation Framework for Open-Source LLMs
    3 projects | /r/MachineLearning | 9 May 2023
    LMFlow: https://github.com/OptimalScale/LMFlow
  • [R] Foundation Model Alignment with RAFT🛶 in LMFlow
    2 projects | /r/MachineLearning | 17 Apr 2023
    Its implementation is available from https://github.com/OptimalScale/LMFlow.
  • LMFlow – Toolkit for Finetuning and Inference of Large Foundation Models
    1 project | news.ycombinator.com | 13 Apr 2023
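
The DetGPT post above describes a four-stage pipeline: encode the image, project the visual features into the LLM's text space, let the LLM reason out the target object names, and hand those names to an open-vocabulary detector. Below is a hypothetical Python sketch of that flow; every callable here is a placeholder for illustration, not DetGPT's actual API.

    # Hypothetical sketch of the reasoning-based detection pipeline from the
    # DetGPT post. All callables (vision_encoder, projector, llm, detector)
    # are placeholders for illustration, not DetGPT's actual API.

    def detect_by_reasoning(image, instruction, vision_encoder, projector, llm, detector):
        """Turn an abstract user instruction into concrete box predictions."""
        # 1. Extract visual features with a pre-trained encoder (BLIP-2 in the post).
        visual_features = vision_encoder(image)
        # 2. Align visual features to the LLM's text space; per the post, only
        #    this linear mapping is trained while the rest stays frozen.
        aligned_features = projector(visual_features)
        # 3. The LLM (Robin/Vicuna in the post) reasons about which objects the
        #    user actually wants, e.g. "something to drink" -> ["bottle", "can"].
        object_names = llm(instruction, aligned_features)
        # 4. A pre-trained open-vocabulary detector (Grounding-DINO in the post)
        #    localizes the named objects in the image.
        return detector(image, object_names)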
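
The LMFlow Benchmark item above scores models by negative log-likelihood. For reference, here is a minimal sketch of that metric for any Hugging Face causal LM; "gpt2" is an arbitrary example model, and LMFlow Benchmark's exact setup may differ.

    # Minimal sketch of the negative log-likelihood (NLL) metric. "gpt2" is an
    # arbitrary example; LMFlow Benchmark's exact setup may differ.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def negative_log_likelihood(text: str) -> float:
        """Average per-token NLL of `text`; lower means the model fits the text better."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # With labels=input_ids, Hugging Face computes the mean cross-entropy
            # over the shifted tokens, which is exactly the average NLL.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return loss.item()

    print(negative_log_likelihood("The quick brown fox jumps over the lazy dog."))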

lm-evaluation-harness

Posts with mentions or reviews of lm-evaluation-harness. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-09.
  • Mistral AI Launches New 8x22B Moe Model
    4 projects | news.ycombinator.com | 9 Apr 2024
    The easiest way is to use vllm (https://github.com/vllm-project/vllm) to run it on a couple of A100s, and you can benchmark it with this library (https://github.com/EleutherAI/lm-evaluation-harness); see the usage sketch after this list.
  • Show HN: Times faster LLM evaluation with Bayesian optimization
    6 projects | news.ycombinator.com | 13 Feb 2024
    Fair question.

    Evaluation refers to the phase after training that checks whether the training actually worked.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're finetuning on a small domain-specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation; however, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple computers, but none of them takes advantage of the fact that many evaluation queries may be similar; they all evaluate every given query. And that's where this project might come in handy (a toy sketch of this deduplication idea appears after this list).

  • Language Model Evaluation Harness
    1 project | news.ycombinator.com | 25 Nov 2023
  • Best courses / tutorials on open-source LLM finetuning
    1 project | /r/LLMDevs | 10 Jul 2023
    I haven't run this yet, but I'm aware of EleutherAI's evaluation harness EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of autoregressive language models. (github.com) and GPT-4-based evaluations like lm-sys/FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5. (github.com)
  • Orca-Mini-V2-13b
    1 project | /r/LocalLLaMA | 9 Jul 2023
    Updates: Just finished the final evaluation (additional metrics) on https://github.com/EleutherAI/lm-evaluation-harness and have averaged the results for orca-mini-v2-13b. The average results for the Open LLM Leaderboard are not that great compared to the initial metrics. The average is now 0.54675, which puts this model below many other 13B models out there.
  • My largest ever quants, GPT 3 sized! BLOOMZ 176B and BLOOMChat 1.0 176B
    6 projects | /r/LocalLLaMA | 6 Jul 2023
    Hey u/The-Bloke Appreciate the quants! What is the degradation on some benchmarks? Have you seen https://github.com/EleutherAI/lm-evaluation-harness? 3-bit and 2-bit quants will really be pushing it. I don't see a ton of evaluation results on the quants, and it would be nice to see a before and after.
  • Dataset of MMLU results broken down by task
    2 projects | /r/datasets | 6 Jul 2023
    I am primarily looking for results of running the MMLU evaluation on modern large language models. I have been able to find some data here https://github.com/EleutherAI/lm-evaluation-harness/tree/master/results and will be asking them if/when they can provide any additional data.
  • Orca-Mini-V2-7b
    1 project | /r/LocalLLaMA | 3 Jul 2023
    I evaluated orca_mini_v2_7b on a wide range of tasks using the Language Model Evaluation Harness from EleutherAI.
  • Why Falcon 40B managed to beat LLaMA 65B?
    1 project | /r/datascience | 19 Jun 2023
  • OpenLLaMA 13B Released
    7 projects | news.ycombinator.com | 18 Jun 2023
    There is the Language Model Evaluation Harness project, which evaluates LLMs on over 200 tasks. HuggingFace has a leaderboard tracking performance on a subset of these tasks.

    https://github.com/EleutherAI/lm-evaluation-harness

    https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderb...
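
Several posts above run models through lm-evaluation-harness. Here is a minimal sketch of its Python entry point; the model and task names are arbitrary examples, and the simple_evaluate function has moved between harness versions, so check the version you have installed.

    # Minimal sketch of running lm-evaluation-harness from Python. Model and
    # task names are arbitrary examples; simple_evaluate's location has changed
    # across lm_eval versions, so adjust the import for your installed version.
    import lm_eval

    results = lm_eval.simple_evaluate(
        model="hf",                       # Hugging Face causal-LM backend
        model_args="pretrained=gpt2",     # any checkpoint name or local path
        tasks=["hellaswag", "arc_easy"],  # benchmark tasks to run
        num_fewshot=0,                    # zero-shot evaluation
    )
    print(results["results"])             # per-task metrics keyed by task name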
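
The Bayesian-optimization post above argues that evaluation sets contain many similar queries that existing frameworks score redundantly. A toy sketch of that deduplication idea follows; the token-overlap grouping is a crude stand-in for illustration, not the linked project's actual method.

    # Toy sketch of skipping near-duplicate evaluation queries. The crude
    # token-overlap grouping below is a stand-in for illustration only, not
    # the linked project's actual Bayesian-optimization method.
    from typing import Callable

    def dedupe_queries(queries: list[str], threshold: float = 0.8) -> list[str]:
        """Keep one representative per group of near-duplicate queries."""
        kept: list[tuple[str, set[str]]] = []
        for query in queries:
            tokens = set(query.lower().split())
            is_duplicate = any(
                len(tokens & seen) / max(len(tokens | seen), 1) >= threshold
                for _, seen in kept
            )
            if not is_duplicate:
                kept.append((query, tokens))
        return [q for q, _ in kept]

    def cheap_evaluate(queries: list[str], score: Callable[[str], float]) -> float:
        """Average a scoring function over representatives only, saving LLM calls."""
        representatives = dedupe_queries(queries)
        return sum(score(q) for q in representatives) / len(representatives)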

What are some alternatives?

When comparing LMFlow and lm-evaluation-harness you can also consider the following projects:

axolotl - Go ahead and axolotl questions

BIG-bench - Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models

CogVLM - a state-of-the-art open visual language model | multimodal pre-trained model

aitextgen - A robust Python tool for text-based AI training and generation using GPT-2.

chatgpt_macro_for_texstudio - The ChatGPT Macro for TeXstudio is a user-friendly integration that connects TeXstudio with OpenAI's API.

gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

llm-foundry - LLM training code for Databricks foundation models

StableLM - StableLM: Stability AI Language Models

const_layout - Official implementation of the MM'21 paper "Constrained Graphic Layout Generation via Latent Optimization" (LayoutGAN++, CLG-LO, and Layout evaluation)

gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

giskard - 🐢 Open-Source Evaluation & Testing framework for LLMs and ML models

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.