instruct-eval VS geov

Compare instruct-eval vs geov and see how they differ.

instruct-eval

This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. (by declare-lab)

geov

The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER). A pre-trained 9B-parameter model has been released. (by geov-ai)
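To make the positional-embedding idea concrete, here is a minimal numpy sketch of standard rotary position embeddings (RoPE), the scheme that GeoV's RoPER builds on. This is an illustrative implementation of plain RoPE, not GeoV's actual RoPER code; the function name and layout are my own. It demonstrates the key property: attention scores between rotated query/key vectors depend only on the relative distance between positions.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Apply rotary position embedding to vector x at integer position pos.

    Dimension pairs (x[i], x[i + d/2]) are rotated by angles
    pos * base**(-i / (d/2)), so the dot product of two rotated
    vectors depends only on the difference of their positions.
    """
    d = x.shape[-1]
    assert d % 2 == 0, "dimension must be even"
    half = d // 2
    freqs = base ** (-np.arange(half) / half)  # one frequency per dim pair
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
q = rng.normal(size=64)
k = rng.normal(size=64)

# Relative-position property: scores match whenever the offset is the same.
s1 = rope_rotate(q, 5) @ rope_rotate(k, 3)    # positions 5 and 3, offset 2
s2 = rope_rotate(q, 12) @ rope_rotate(k, 10)  # positions 12 and 10, offset 2
print(np.allclose(s1, s2))  # True
```

RoPER extends this by also incorporating relative distances into the value pathway, per the geov repository's description; the sketch above covers only the shared rotary core.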
                  instruct-eval        geov
Mentions          6                    2
Stars             471                  122
Growth            4.0%                 0.0%
Activity          8.0                  5.0
Last commit       2 months ago         about 1 year ago
Language          Python               Jupyter Notebook
License           Apache License 2.0   Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

instruct-eval

Posts with mentions or reviews of instruct-eval. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-23.

geov

Posts with mentions or reviews of geov. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-19.
  • Stability AI Launches the First of Its StableLM Suite of Language Models
    24 projects | news.ycombinator.com | 19 Apr 2023
    Looks like my edit window closed, but my results ended up being very low so there must be something wrong (I've reached out to StabilityAI just in case). It does however seem to roughly match another user's 3B testing: https://twitter.com/abacaj/status/1648881680835387392

    The current scores I have place it between gpt2_774M_q8 and pythia_deduped_410M (yikes!). Based on training and specs you'd expect it to outperform Pythia 6.9B at least... this is running on a HEAD checkout of https://github.com/EleutherAI/lm-evaluation-harness (releases don't support hf-causal) for those looking to replicate/debug.

    Note, another LLM currently being trained, GeoV 9B, already far outperforms this model at just 80B tokens trained: https://github.com/geov-ai/geov/blob/master/results.080B.md

  • Ask HN: Open source LLM for commercial use?
    4 projects | news.ycombinator.com | 10 Apr 2023

What are some alternatives?

When comparing instruct-eval and geov you can also consider the following projects:

lm-evaluation-harness - A framework for few-shot evaluation of autoregressive language models.

txtinstruct - 📚 Datasets and models for instruction-tuning

StableLM - StableLM: Stability AI Language Models

awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT

pythia - The hub for EleutherAI's work on interpretability and learning dynamics

Emu - Emu Series: Generative Multimodal Models from BAAI

AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated

sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".

llama.cpp - LLM inference in C/C++