llama-int8 VS egghead

Compare llama-int8 vs egghead and see what their differences are.

llama-int8

Quantized inference code for LLaMA models (by tloen)
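
For readers unfamiliar with the term, quantized inference generally means storing model weights as 8-bit integers plus a per-tensor scale factor and converting them back to floats on the fly, which cuts weight memory roughly 4x compared to fp32. The sketch below illustrates symmetric int8 quantization in general terms; it is not code from the tloen repository, and the function names are invented for illustration.

```rust
// A minimal sketch of symmetric per-tensor int8 quantization, illustrating
// the general technique only; this is not code from tloen/llama-int8.

/// Quantize a slice of f32 weights to i8 plus a single scale factor.
fn quantize_int8(weights: &[f32]) -> (Vec<i8>, f32) {
    // The scale maps the largest absolute weight onto the i8 range [-127, 127].
    let max_abs = weights.iter().fold(0.0f32, |m, &w| m.max(w.abs()));
    let scale = if max_abs > 0.0 { max_abs / 127.0 } else { 1.0 };
    let quantized = weights
        .iter()
        .map(|&w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect();
    (quantized, scale)
}

/// Recover approximate f32 weights at inference time.
fn dequantize_int8(quantized: &[i8], scale: f32) -> Vec<f32> {
    quantized.iter().map(|&q| q as f32 * scale).collect()
}

fn main() {
    let weights = [0.12f32, -0.75, 0.031, 0.5];
    let (q, scale) = quantize_int8(&weights);
    println!("quantized: {:?}, scale: {:.6}", q, scale);
    println!("restored:  {:?}", dequantize_int8(&q, scale));
}
```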

egghead

discord bot for ai stuff (by toasterrepairman)
                  llama-int8                             egghead
Mentions          6                                      1
Stars             1,044                                  3
Growth            -                                      -
Activity          3.6                                    9.2
Last commit       about 1 year ago                       8 days ago
Language          Python                                 Rust
License           GNU General Public License v3.0 only   -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
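
To make the recency weighting concrete, the sketch below shows one hypothetical way such a score could be computed, with each commit weighted by an exponential decay on its age. The half-life value and the weighting scheme are assumptions for illustration, not the site's published formula, and the final 0-10 figure shown in the table is a relative ranking across all tracked projects rather than this raw sum.

```rust
// A hypothetical recency-weighted activity score; the half-life and the
// weighting scheme are assumptions, not the comparison site's actual formula.

/// Score a project from the ages (in days) of its recent commits.
/// Newer commits contribute more; a commit's weight halves every `half_life_days`.
fn activity_score(commit_ages_days: &[f64], half_life_days: f64) -> f64 {
    commit_ages_days
        .iter()
        .map(|&age| 0.5f64.powf(age / half_life_days))
        .sum()
}

fn main() {
    // A project with many recent commits outscores one whose commits are all
    // months old, even when the raw commit counts are identical.
    let recently_active = [1.0, 2.0, 3.0, 5.0, 8.0];
    let mostly_stale = [200.0, 220.0, 250.0, 300.0, 365.0];
    println!("recently active: {:.2}", activity_score(&recently_active, 30.0));
    println!("mostly stale:    {:.2}", activity_score(&mostly_stale, 30.0));
}
```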

llama-int8

Posts with mentions or reviews of llama-int8. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-04.

egghead

Posts with mentions or reviews of egghead. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-04.
  • Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
    7 projects | news.ycombinator.com | 4 Mar 2023
    It's a toy I threw together as a weekend project, but you're welcome to give it a whirl: https://github.com/toasterrepairman/egghead

    Here's the rundown:

    - You need libtorch, openssl and cargo installed on your system before compiling

    AND

    - You have to put the variables from the README in your ~/.bashrc along with a valid Discord bot token

    Once you do that, it should "just work". It's using a super pruned model with high-temperature tuning, so the results should be... dicey. I assume no responsibility for the vast amount of misinformation this will produce.

    Commands include "e.help" for help, "e.ask" for traditional ChatGPT-style questions, "e.news" to grab a Fox headline and generate the rest, "e.wiki" to look up a Wikipedia article and use it as a prompt, and "e.hn"... a feature I will build Soon™.

    Let me know if you run into any issues!
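
The setup described in the post above amounts to installing the build dependencies and exporting a few environment variables before launching the bot. As a minimal sketch of the kind of startup check that implies, written in Rust since that is egghead's language, the snippet below reads the required variables and fails early if any are missing. The variable names are placeholders; the real ones come from the project's README, which is not reproduced here.

```rust
// Hypothetical startup check for an environment-configured Discord bot.
// The variable names below are placeholders, not the ones from the egghead README.
use std::env;

/// Read a required environment variable or exit with a clear error message.
fn require_var(name: &str) -> String {
    env::var(name).unwrap_or_else(|_| {
        eprintln!("missing required environment variable: {name}");
        std::process::exit(1)
    })
}

fn main() {
    // The Discord bot token is the one variable the post explicitly mentions;
    // the model path is an assumed example of a README-provided setting.
    let token = require_var("DISCORD_TOKEN");
    let model_path = require_var("MODEL_PATH");
    println!("token length: {}, model path: {}", token.len(), model_path);
}
```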

What are some alternatives?

When comparing llama-int8 and egghead you can also consider the following projects:

llama - Inference code for Llama models

llama-dl - High-speed download of LLaMA, Facebook's 65B parameter GPT model [UnavailableForLegalReasons - Repository access blocked]

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

test - Measuring Massive Multitask Language Understanding | ICLR 2021

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

llama-cpu - Fork of Facebook's LLaMA model to run on CPU