WizardLM
ggml
WizardLM
- FLaNK AI-April 22, 2024
-
Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
This is interesting work, and a good contribution, but there is no need to mislead people.
[1] https://github.com/nlpxucan/WizardLM
-
Continue with LocalAI: An alternative to GitHub's Copilot that runs everything locally
If you pair this with the latest WizardCoder models, which perform noticeably better than the standard Salesforce Codegen2 and Codegen2.5, you have a pretty solid alternative to GitHub Copilot that runs completely locally.
- WizardCoder context?
- The world's most-powerful AI model suddenly got 'lazier' and 'dumber.' A radical redesign of OpenAI's GPT-4 could be behind the decline in performance.
-
Official WizardLM-13B-V1.1 Released! Train with Only 1K Data! Can Achieve 86.32% on AlpacaEval!
(We will update the demo links in our GitHub.)
-
GPT-4 API general availability
In terms of speed, we're talking about 140 t/s for 7B models and 40 t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 word.) It's quite zippy. llama.cpp performs comparably on Nvidia GPUs now (but they don't have a handy chart), and you can get decent performance on 13B models on M1/M2 Macs.
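For scale, here is the 1 token ~= 0.75 word rule of thumb applied to those throughput figures (a quick sketch in C; the numbers are just the ones quoted above, nothing newly measured):

/* Convert the quoted token throughputs to approximate words/s
 * using the 1 token ~= 0.75 word rule of thumb from above. */
#include <stdio.h>

int main(void) {
    const double words_per_token = 0.75;
    const double tok_s_7b  = 140.0;   /* 7B model, 3090/4090, from [1] */
    const double tok_s_33b = 40.0;    /* 33B model, 3090/4090, from [1] */
    printf("7B:  %.0f tok/s ~= %.0f words/s\n", tok_s_7b,  tok_s_7b  * words_per_token);
    printf("33B: %.0f tok/s ~= %.0f words/s\n", tok_s_33b, tok_s_33b * words_per_token);
    return 0;
}

That works out to roughly 105 and 30 words per second respectively, i.e. well above typical reading speed.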
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, personally I mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the human-eval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]). I've only just started playing around with it, since replit model tooling is not as good as llama's (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
-
WizardLM-13B-V1.0-Uncensored
You talking about this? https://github.com/nlpxucan/WizardLM
-
What 7b llm to use
The smallest model that is close to competent at code is WizardCoder 15B: https://github.com/nlpxucan/WizardLM/
-
16-Jun-2023
WizardCoder: Empowering Code Large Language Models with Evol-Instruct (https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder)
ggml
-
LLMs on your local Computer (Part 1)
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build
cd build
cmake ..
make -j4 gpt-j
../examples/gpt-j/download-ggml-model.sh 6B
-
GGUF, the Long Way Around
Cool. I was just learning about GGUF by creating my own parser for it based on the spec https://github.com/ggerganov/ggml/blob/master/docs/gguf.md (for educational purposes)
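For anyone doing the same exercise, here is a minimal header check in C based on my reading of that spec: magic "GGUF", then a u32 version and u64 tensor/metadata counts (field widths as of format v2+; everything past the header is omitted, and a little-endian host is assumed):

/* Minimal GGUF header reader -- a sketch per the spec above, not a
 * full parser. Assumes GGUF v2+ field widths, little-endian host. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s model.gguf\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    char magic[4];
    uint32_t version;
    uint64_t n_tensors, n_kv;
    if (fread(magic, 1, 4, f) != 4 || memcmp(magic, "GGUF", 4) != 0) {
        fprintf(stderr, "not a GGUF file\n"); fclose(f); return 1;
    }
    if (fread(&version, sizeof version, 1, f) != 1 ||
        fread(&n_tensors, sizeof n_tensors, 1, f) != 1 ||
        fread(&n_kv, sizeof n_kv, 1, f) != 1) {
        fprintf(stderr, "truncated header\n"); fclose(f); return 1;
    }
    printf("GGUF v%u: %llu tensors, %llu metadata KV pairs\n",
           (unsigned)version, (unsigned long long)n_tensors,
           (unsigned long long)n_kv);
    fclose(f);
    return 0;
}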
-
Ask HN: People who switched from GPT to their own models. How was it?
If you don't care about the details of how those model servers work, then something that abstracts out the whole process like LM Studio or Ollama is all you need.
However, if you want to get into the weeds of how this actually works, I recommend you look up model quantization and some libraries like ggml[1] that actually do that for you.
[1] https://github.com/ggerganov/ggml
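For a rough intuition of what quantization buys you, here is a toy sketch of blockwise 8-bit quantization in plain C. It is in the spirit of ggml's Q8_0 idea, but the struct layout and names are illustrative assumptions, not ggml's actual API or on-disk format:

/* Toy sketch of blockwise 8-bit quantization: each block of 32
 * weights stores one FP32 scale plus 32 int8 values. Illustrative
 * only -- not ggml's real types or file layout. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define QBLOCK 32

typedef struct {
    float  scale;       /* per-block scale factor       */
    int8_t q[QBLOCK];   /* quantized weights, -127..127 */
} block_q8;

static void quantize_block(const float *x, block_q8 *out) {
    float amax = 0.0f;  /* absolute maximum in the block */
    for (int i = 0; i < QBLOCK; i++)
        amax = fmaxf(amax, fabsf(x[i]));
    out->scale = amax / 127.0f;
    const float inv = out->scale != 0.0f ? 1.0f / out->scale : 0.0f;
    for (int i = 0; i < QBLOCK; i++)
        out->q[i] = (int8_t)roundf(x[i] * inv);
}

static void dequantize_block(const block_q8 *in, float *x) {
    for (int i = 0; i < QBLOCK; i++)
        x[i] = (float)in->q[i] * in->scale;   /* lossy reconstruction */
}

int main(void) {
    float w[QBLOCK], r[QBLOCK];
    for (int i = 0; i < QBLOCK; i++)
        w[i] = sinf((float)i);                /* stand-in "weights" */
    block_q8 b;
    quantize_block(w, &b);
    dequantize_block(&b, r);
    printf("w[3] = %f, reconstructed = %f\n", w[3], r[3]);
    return 0;
}

In this toy layout a block costs 4 + 32 bytes for 32 weights, i.e. 9 bits per weight instead of 32, which is the basic reason quantized models fit in consumer RAM/VRAM.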
- GGUF File Format
-
Google just shipped libggml from llama-cpp into its Android AICore
Because the library is called ggml, but it supports gguf.
-
Q-Transformer
Apparently this guy, like a bunch of others such as https://github.com/ggerganov/ggml, is implementing transformers from papers for people who want them. Pretty cool.
-
[P] Inference Vision Transformer (ViT) in plain C/C++ with ggml
You can access it here: https://github.com/staghado/vit.cpp It has been added to the ggml library on GitHub: https://github.com/ggerganov/ggml
-
Falcon 180B Released
https://github.com/ggerganov/ggml
One note is that prompt ingestion is extremely slow on CPU compared to GPU. So short prompts are fine (as tokens can be streamed once the prompt is ingested), but long prompts feel extremely sluggish.
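A back-of-envelope illustration of why that matters: nothing can stream until the entire prompt has been ingested, so time-to-first-token grows linearly with prompt length. The prefill rate below is an assumed placeholder, not a measurement:

/* Back-of-envelope model of the behavior described above: time to
 * first token = prompt length / prefill rate, after which tokens
 * stream at the generation rate. The rate is made up for
 * illustration, not measured. */
#include <stdio.h>

int main(void) {
    const double cpu_prefill_tok_s = 20.0;   /* assumed, for illustration */
    for (int prompt_len = 32; prompt_len <= 2048; prompt_len *= 4)
        printf("prompt of %4d tokens -> ~%5.1f s before the first token\n",
               prompt_len, (double)prompt_len / cpu_prefill_tok_s);
    return 0;
}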
-
Stable Diffusion in pure C/C++
I did a quick run under a profiler, and on my AVX2 laptop the slowest part (>50%) was matrix multiplication (sgemm).
In the current version of GGML, if OpenBLAS is enabled, they convert matrices to FP32 before running sgemm.
If OpenBLAS is disabled, on the AVX2 platform they convert FP16 to FP32 on every FMA operation, which is even worse (due to the repetition). After that, both ggml_vec_dot_f16 and ggml_vec_dot_f32 took first place in the profiler.
Source: https://github.com/ggerganov/ggml/blob/master/src/ggml.c#L10...
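To make the difference concrete, here is a sketch of the two code paths described above. half_to_float and the naive triple loops are stand-ins for illustration, not ggml's actual kernels or sgemm:

/* Two ways to multiply FP16 matrices with an FP32 kernel.
 * BLAS path: convert A (MxK) and B (KxN) once -- O(M*K + K*N)
 * conversions -- then run the FP32 kernel. Non-BLAS path: convert
 * inside the inner loop, repeating the same conversion O(M*N*K)
 * times, which is the repetition the comment describes. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint16_t fp16_t;   /* IEEE 754 binary16 bit pattern */

static float half_to_float(fp16_t h) {
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t exp  = (h >> 10) & 0x1f;
    uint32_t mant = h & 0x3ff;
    uint32_t bits;
    if (exp == 31)                       /* inf / NaN */
        bits = sign | 0x7f800000u | (mant << 13);
    else if (exp != 0)                   /* normal: rebias exponent */
        bits = sign | ((exp + 112) << 23) | (mant << 13);
    else if (mant != 0) {                /* subnormal: mant * 2^-24 */
        float sub = (float)mant / 16777216.0f;
        memcpy(&bits, &sub, sizeof bits);
        bits |= sign;
    } else                               /* signed zero */
        bits = sign;
    float out;
    memcpy(&out, &bits, sizeof out);
    return out;
}

/* Slow path: FP16 -> FP32 conversion redone for every FMA. */
static void matmul_convert_per_fma(const fp16_t *A, const fp16_t *B,
                                   float *C, int M, int N, int K) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++) {
            float s = 0.0f;
            for (int k = 0; k < K; k++)
                s += half_to_float(A[i*K + k]) * half_to_float(B[k*N + j]);
            C[i*N + j] = s;
        }
}

/* BLAS-style path: convert each matrix exactly once up front, then
 * run the pure-FP32 kernel (or hand Af/Bf to sgemm). */
static void matmul_convert_upfront(const fp16_t *A, const fp16_t *B,
                                   float *C, int M, int N, int K,
                                   float *Af, float *Bf) {
    for (int i = 0; i < M*K; i++) Af[i] = half_to_float(A[i]);
    for (int i = 0; i < K*N; i++) Bf[i] = half_to_float(B[i]);
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++) {
            float s = 0.0f;
            for (int k = 0; k < K; k++)
                s += Af[i*K + k] * Bf[k*N + j];
            C[i*N + j] = s;
        }
}

int main(void) {
    enum { M = 2, N = 2, K = 2 };
    fp16_t A[M*K] = { 0x3c00, 0x4000, 0x4200, 0x4400 };  /* 1,2,3,4  */
    fp16_t B[K*N] = { 0x3c00, 0x0000, 0x0000, 0x3c00 };  /* identity */
    float C1[M*N], C2[M*N], Af[M*K], Bf[K*N];
    matmul_convert_per_fma(A, B, C1, M, N, K);
    matmul_convert_upfront(A, B, C2, M, N, K, Af, Bf);
    printf("C1[0]=%g C2[0]=%g (paths agree; only conversion count differs)\n",
           C1[0], C2[0]);
    return 0;
}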
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/
This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend, supports GPU acceleration, and handles LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.
I just saw a slick new tool, https://ollama.ai/, that lets you install llama2-7b with a single `ollama run llama2` command. It has a very simple one-click installer for Apple Silicon Macs (but you need to build from source for anything else atm). It looks like it only supports llamas OOTB, but it also seems to use llama.cpp (via a Go adapter) on the backend. It seemed to be CPU-only on my MBA, but I didn't poke around too much and it's brand new, so we'll see.
For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama
For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/
I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/
What are some alternatives?
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
llama.cpp - LLM inference in C/C++
llm-humaneval-benchmarks
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
airoboros - Customizable implementation of the self-instruct paper.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
can-ai-code - Self-evaluating interview for AI coders
llm - An ecosystem of Rust libraries for working with large language models