Local-LLM-Comparison-Colab-UI vs alpaca_eval

| | Local-LLM-Comparison-Colab-UI | alpaca_eval |
|---|---|---|
| Mentions | 20 | 4 |
| Stars | 886 | 1,134 |
| Growth | - | 10.0% |
| Activity | 9.1 | 9.6 |
| Latest commit | 4 days ago | 2 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | - | Apache License 2.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Local-LLM-Comparison-Colab-UI
Mistral 7B OpenOrca outclasses Llama 2 13B variants
-
GPT-4 API general availability
In terms of speed, we're talking about 140 t/s for 7B models and 40 t/s for 33B models on a 3090/4090 now [1] (1 token ~= 0.75 words). It's quite zippy. llama.cpp now performs close to that on Nvidia GPUs (though they don't publish a handy chart), and you can get decent performance on 13B models on M1/M2 Macs.
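To put those rates in human terms, here is the arithmetic as a quick sketch, using the ~0.75 words-per-token rule of thumb from the comment (the figures are the comment's; the code is purely illustrative):

```python
# Rough throughput arithmetic: tokens/s -> words/min, using the
# ~0.75 words-per-token rule of thumb quoted above.
WORDS_PER_TOKEN = 0.75

def words_per_minute(tokens_per_second: float) -> float:
    return tokens_per_second * WORDS_PER_TOKEN * 60

for name, tps in [("7B @ 140 t/s", 140.0), ("33B @ 40 t/s", 40.0)]:
    print(f"{name}: ~{words_per_minute(tps):,.0f} words/min")
# 7B @ 140 t/s: ~6,300 words/min
# 33B @ 40 t/s: ~1,800 words/min
```

Either rate is far faster than anyone reads, which is what makes the "quite zippy" claim concrete.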
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, I personally mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the human-eval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]; see the pass@k sketch after the footnotes). I've only just started playing around with it, since replit model tooling is not as good as LLaMA's (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
I'm interested in potentially applying Reflexion or some of the other techniques that have been tried, to push coding ability even further. (InterCode in particular has caught my eye: https://intercode-benchmark.github.io/)
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
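For context on those pass@1/pass@10 figures: HumanEval-style results are normally computed with the unbiased pass@k estimator from the Codex paper, which asks, given n sampled completions of which c pass the tests, how likely it is that at least one of k draws passes. A minimal sketch (the function and variable names are mine):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper:
    1 - C(n-c, k) / C(n, k), computed stably as a running product."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# E.g. 200 samples with 121 passing gives pass@1 = 0.605,
# in the same ballpark as the GPT-3.5 pass@1 of 60.4 quoted above.
print(pass_at_k(200, 121, 1), pass_at_k(200, 121, 10))
```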
-
Best 7B model
The best 7B model I've tried is WizardLM; it's my go-to model.
-
UltraLM-13B reaches top of AlpacaEval leaderboard
If you want to try it out, you can use Google Colab with the Oobabooga Text Generation UI: Link. (Remember to check the instruction template and the generation parameters; a sketch of a typical launch cell follows.)
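The linked notebook isn't reproduced here, but a Colab setup for text-generation-webui typically boils down to a few steps like the following. This is a hedged sketch, not the notebook's actual contents: the install steps may differ across versions, and the Hugging Face model ID is an assumption for illustration.

```python
# Sketch of a typical Colab cell sequence for text-generation-webui.
# The model repo ID below is an assumption; substitute the one you want.
import subprocess

def run(*cmd, cwd=None):
    subprocess.run(cmd, cwd=cwd, check=True)

run("git", "clone", "https://github.com/oobabooga/text-generation-webui")
run("pip", "install", "-r", "requirements.txt", cwd="text-generation-webui")
# download-model.py ships with the repo and takes a Hugging Face repo ID.
run("python", "download-model.py", "TheBloke/UltraLM-13B-GPTQ",
    cwd="text-generation-webui")
# --share exposes a public Gradio URL, which is how you reach the UI from Colab.
run("python", "server.py", "--share", cwd="text-generation-webui")
```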
-
wizardLM-7B.q4_2
I'm really impressed by wizardLM-7B.q4_2 (GPT4All) running on my 8 GB M2 MacBook Air: fast responses and fewer hallucinations than other 7B models I've tried. GPT4All's beta document collection and query feature is respectable; I'm going to test it more tomorrow. FWIW, wizardLM-7B.q4_2 was ranked very high here: https://github.com/Troyanovsky/Local-LLM-comparison.
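For anyone who'd rather script against the same model than use the GPT4All desktop app, the gpt4all Python bindings expose a small generate API. A minimal sketch, with the caveats that the exact model filename is an assumption and the binding API has shifted between gpt4all versions:

```python
# Minimal sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename is an assumption; it must match a model your
# GPT4All install can find or download.
from gpt4all import GPT4All

model = GPT4All("wizardLM-7B.q4_2")
with model.chat_session():
    reply = model.generate(
        "Summarize the tradeoffs of 4-bit quantization for a 7B model.",
        max_tokens=200,
    )
    print(reply)
```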
-
Help me discover new LLMs for school project
I made a series of Colab notebooks for different models: https://github.com/Troyanovsky/Local-LLM-comparison
-
Nous Hermes 13b is very good.
I found it performs very well in my testing too (Repo). It's my second-favorite model, after WizardLM-13B.
How to train 7B models with small documents?
-
What are your favorite LLMs?
My entire list is at: Local LLM Comparison Repo
-
Announcing Nous-Hermes-13b (info link in thread)
I just tried HyperMantis and updated the results in the repo. It performs decently, but worse than Nous-Hermes-13B.
alpaca_eval
-
UltraLM-13B reaches top of AlpacaEval leaderboard
AlpacaEval is open source and was developed by the same team that trained the Alpaca model, AFAIK. It is not what you claimed in the other comment.
-
[P] AlpacaEval : An Automatic Evaluator for Instruction-following Language Models
I have been going deep in this space for my can-ai-code project and was looking at the config that WizardLM was run with: https://github.com/tatsu-lab/alpaca_eval/blob/main/src/alpaca_eval/models_configs/wizardlm-13b/configs.yaml
an automatic evaluator that is easy to use, fast, cheap, and validated against 20K human annotations. It actually has higher agreement with the majority vote of humans than a single human annotator does! Of course, our method still has limitations, which we discuss here!
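Mechanically, the leaderboard number is a win rate: for each instruction, an automatic annotator (a GPT-4-class judge) picks the better of the model's output and a reference output, and the score is the fraction of instructions the model wins. A minimal sketch of that aggregation, where the judge function is a stand-in rather than the actual alpaca_eval API:

```python
# Win-rate aggregation in the style of AlpacaEval. `judge` stands in for
# the automatic annotator (e.g., a GPT-4 prompt that returns which of two
# outputs better follows the instruction); it is not the alpaca_eval API.
from typing import Callable, List, Tuple

Example = Tuple[str, str, str]  # (instruction, model_output, reference_output)

def win_rate(examples: List[Example],
             judge: Callable[[str, str, str], int]) -> float:
    """Fraction of instructions where the judge prefers the model (returns 0)."""
    wins = sum(1 for inst, out, ref in examples if judge(inst, out, ref) == 0)
    return wins / len(examples)

# Toy judge that prefers the longer answer, only to make the sketch runnable.
toy_judge = lambda inst, a, b: 0 if len(a) >= len(b) else 1
print(win_rate([("Say hi.", "Hello there!", "Hi.")], toy_judge))  # 1.0
```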
What are some alternatives?
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
FinGPT - FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
simple-proxy-for-tavern
llm-search - Querying local documents, powered by LLM
GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ
alpaca_farm - A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
medmcqa - A large-scale (194k) Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
can-ai-code - Self-evaluating interview for AI coders
language-planner - Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"