Local-LLM-Comparison-Colab-UI vs private-gpt
| | Local-LLM-Comparison-Colab-UI | private-gpt |
|---|---|---|
| Mentions | 20 | 131 |
| Stars | 886 | 52,027 |
| Growth | - | 2.9% |
| Activity | 9.1 | 9.2 |
| Latest Commit | 3 days ago | about 23 hours ago |
| Language | Jupyter Notebook | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Local-LLM-Comparison-Colab-UI
- Mistral 7B OpenOrca outclasses Llama 2 13B variants
- GPT-4 API general availability
In terms of speed, we're talking about 140t/s for 7B models, and 40t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 word) It's quite zippy. llama.cpp performs close on Nvidia GPUs now (but they don't have a handy chart) and you can get decent performance on 13B models on M1/M2 Macs.
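To put those figures in everyday terms, the 1 token ≈ 0.75 word rule of thumb quoted above converts directly (a quick back-of-the-envelope check):

```python
# Back-of-the-envelope: convert the quoted tokens/s figures to words/s
# using the ~0.75 words-per-token rule of thumb mentioned above.
WORDS_PER_TOKEN = 0.75

for model, tok_per_s in [("7B", 140), ("33B", 40)]:
    print(f"{model}: {tok_per_s} tok/s ~= {tok_per_s * WORDS_PER_TOKEN:.0f} words/s")

# 7B: 140 tok/s ~= 105 words/s
# 33B: 40 tok/s ~= 30 words/s
```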
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, personally I mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the human-eval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]) - I've only just started playing around with it since replit model tooling is not as good as llama's (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
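For context on those pass@1 / pass@10 figures: HumanEval-style evals typically report the unbiased pass@k estimator introduced with the HumanEval benchmark, computed from n sampled completions per problem of which c pass the tests. A minimal sketch (the sample counts in the usage lines are made up):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k completions,
    drawn from n samples of which c are correct, passes the tests."""
    # comb(n - c, k) counts the all-incorrect draws; it is 0 when k > n - c.
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=5, k=1))   # 0.25 -- pass@1 reduces to c/n
print(pass_at_k(n=20, c=5, k=10))  # ~0.984 -- much easier with 10 tries
```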
I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
- Best 7B model
The best 7B model I've tried is WizardLM. It's my go-to model.
- UltraLM-13B reaches top of AlpacaEval leaderboard
If you want to try it out, you can use Google Colab here with Oobabooga Text Generation UI: Link (Remember to check the instruction template and generation parameters)
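On the "check the instruction template" point: every fine-tune expects the prompt wrapped in the format it was trained on, and a mismatched template quietly degrades output quality. As a purely illustrative example (a generic Vicuna-style wrapper, assumed here rather than taken from UltraLM's model card):

```python
# Hypothetical Vicuna-style template -- always check the model card for
# the exact format a given fine-tune expects.
TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant.\n"
    "USER: {instruction}\n"
    "ASSISTANT:"
)

print(TEMPLATE.format(instruction="Summarize retrieval-augmented generation."))
```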
- wizardLM-7B.q4_2
I'm really impressed by wizardLM-7B.q4_2 (GPT4All) running on my 8GB M2 MacBook Air. Fast responses, fewer hallucinations than other 7B models I've tried. GPT4All's beta document collection and query function is respectable; I'm going to test it more tomorrow. FWIW wizardLM-7B.q4_2 was ranked very high here: https://github.com/Troyanovsky/Local-LLM-comparison.
- Help me discover new LLMs for school project
I made a series of Colab notebooks for different models: https://github.com/Troyanovsky/Local-LLM-comparison
- Nous Hermes 13b is very good.
I found it performed very well in my testing too (Repo). It's my second favorite model after WizardLM-13B.
- How to train 7B models with small documents?
- What are your favorite LLMs?
My entire list is at: Local LLM Comparison Repo
- Announcing Nous-Hermes-13b (info link in thread)
I just tried HyperMantis and updated the results in the repo. It's not bad, but it performs worse than Nous-Hermes-13B.
private-gpt
- Ask HN: Has Anyone Trained a personal LLM using their personal notes?
PrivateGPT is a nice tool for this. It's not exactly what you're asking for, but it gets part of the way there.
https://github.com/zylon-ai/private-gpt
- PrivateGPT exploring the Documentation
Further details are available at: https://docs.privategpt.dev/api-reference/api-reference/ingestion
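For a sense of what that ingestion endpoint looks like from a client, here is a minimal sketch using the `requests` library, assuming a privateGPT server running locally on its default port and the `/v1/ingest/file` route described in those docs (both the port and the route are assumptions; check the API reference for your version):

```python
import requests

# Assumed local privateGPT server; adjust host/port to your setup.
BASE_URL = "http://localhost:8001"

# Upload one document for ingestion (multipart/form-data).
with open("report.pdf", "rb") as f:
    resp = requests.post(f"{BASE_URL}/v1/ingest/file", files={"file": f})

resp.raise_for_status()
print(resp.json())  # metadata for the newly ingested document
```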
- Show HN: I made an app to use local AI as daily driver
- privateGPT vs quivr - a user-suggested alternative
2 projects | 12 Jan 2024
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Run https://github.com/imartinez/privateGPT
Then
make ingest /path/to/folder/with/files
Then chat to the LLM.
Done.
Docs: https://docs.privategpt.dev/overview/welcome/quickstart
- Mozilla "MemoryCache" Local AI
PrivateGPT repository in case anyone's interested: https://github.com/imartinez/privateGPT . It doesn't seem to be linked from their official website.
- What Is Retrieval-Augmented Generation a.k.a. RAG
I’m preparing a small internal tool for my work to search documents and provide answers (with references); I’m thinking of using GPT4All [0], Danswer [1] and/or privateGPT [2].
The RAG technique is very close to what I have in mind (a minimal sketch of the retrieve-then-prompt loop follows the footnotes below), but I don’t want the LLM to “hallucinate” and generate answers on its own while synthesizing the source documents. As stated by many others, we’re living in interesting times.
[0] https://gpt4all.io/index.html
[1] https://www.danswer.ai/
[2] https://github.com/imartinez/privateGPT
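For readers new to the term, the whole RAG loop is small enough to sketch: embed the documents once, retrieve the chunks most similar to the query, and prepend them to the prompt so the model answers from sources instead of from memory. A toy, self-contained sketch (the hashed bag-of-words "embedding" is a stand-in for a real embedding model, and the returned prompt would go to whichever local LLM you choose; the sample documents are made up):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy hashed bag-of-words vector so the sketch runs end to end;
    # a real system would call an embedding model here.
    vec = np.zeros(256)
    for tok in text.lower().split():
        vec[hash(tok) % 256] += 1.0
    return vec

def build_prompt(query: str, docs: list[str], k: int = 2) -> str:
    # Retrieve: rank document chunks by cosine similarity to the query.
    doc_vecs = np.stack([embed(d) for d in docs])
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    context = "\n\n".join(docs[i] for i in np.argsort(sims)[-k:][::-1])
    # Augment: force the model to answer only from the retrieved text.
    return (
        "Answer ONLY from the context below and cite the passage you used.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "The onboarding VPN host is vpn.example.com; use your SSO credentials.",
    "Expense reports are due by the 5th of each month.",
]
print(build_prompt("When are expense reports due?", docs, k=1))
```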
- LM Studio – Discover, download, and run local LLMs
- Ask HN: Local LLM Recommendation?
https://www.reddit.com/r/LocalLLaMA/comments/14niv66/using_a...
https://github.com/imartinez/privateGPT
- Run ChatGPT-like LLMs on your laptop in 3 lines of code
I've been playing around with https://github.com/imartinez/privateGPT and https://github.com/simonw/llm and wanted to create a simple Python package that made it easier to run ChatGPT-like LLMs on your own machine, use them with non-public data, and integrate them into practical applications.
This resulted in a Python package I call OnPrem.LLM.
In the documentation, there are examples for how to use it for information extraction, text generation, retrieval-augmented generation (i.e., chatting with documents on your computer), and text-to-code generation: https://amaiya.github.io/onprem/
Enjoy!
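If the docs linked above are anything to go by, the "3 lines" claim is roughly literal; here is a sketch following the examples in the OnPrem.LLM documentation (method names and return shape taken from those docs, not re-verified here):

```python
# Sketch of the advertised usage, per the OnPrem.LLM docs linked above.
from onprem import LLM

llm = LLM()                    # downloads a default local model on first use
print(llm.prompt("List three prime numbers."))

# Retrieval-augmented generation over your own files:
llm.ingest("./my_documents")   # index a folder of local documents
result = llm.ask("What does the contract say about termination?")
print(result["answer"])
```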
What are some alternatives?
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
simple-proxy-for-tavern
gpt4all - gpt4all: run open-source LLMs anywhere
GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
alpaca_eval - An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
can-ai-code - Self-evaluating interview for AI coders
llama.cpp - LLM inference in C/C++