WizardLM
can-ai-code
| | WizardLM | can-ai-code |
|---|---|---|
| Mentions | 38 | 30 |
| Stars | 7,531 | 446 |
| Growth | - | - |
| Activity | 9.4 | 9.5 |
| Last commit | 8 months ago | 2 days ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WizardLM
- FLaNK AI - April 22, 2024
- Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
This is interesting work, and a good contribution, but there is no need to mislead people.
[1] https://github.com/nlpxucan/WizardLM
- Continue with LocalAI: An alternative to GitHub's Copilot that runs everything locally
If you pair this with the latest WizardCoder models, which perform noticeably better than the standard Salesforce Codegen2 and Codegen2.5, you have a pretty solid alternative to GitHub Copilot that runs completely locally.
- WizardCoder context?
- The world's most powerful AI model suddenly got 'lazier' and 'dumber.' A radical redesign of OpenAI's GPT-4 could be behind the decline in performance.
- Official WizardLM-13B-V1.1 Released! Train with Only 1K Data! Can Achieve 86.32% on AlpacaEval!
(We will update the demo links in our GitHub.)
- GPT-4 API general availability
In terms of speed, we're talking about 140t/s for 7B models, and 40t/s for 33B models on a 3090/4090 now.[1] (1 token ~= 0.75 word) It's quite zippy. llama.cpp performs close on Nvidia GPUs now (but they don't have a handy chart) and you can get decent performance on 13B models on M1/M2 Macs.
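As a quick sanity check on those figures, here is a trivial sketch applying the comment's own 1 token ≈ 0.75 word rule of thumb (the throughput numbers come from the comment, nothing here is measured):

```python
# Convert token throughput to approximate words/s (1 token ~= 0.75 word).
WORDS_PER_TOKEN = 0.75

for label, tokens_per_s in [("7B on 3090/4090", 140), ("33B on 3090/4090", 40)]:
    words_per_s = tokens_per_s * WORDS_PER_TOKEN
    print(f"{label}: {tokens_per_s} tok/s ~= {words_per_s:.0f} words/s")
# 7B: ~105 words/s, 33B: ~30 words/s -- both well above human reading speed
```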
You can take a look at a list of evals here: https://llm-tracker.info/books/evals/page/list-of-evals - for general usage, I think home-rolled evals like llm-jeopardy [2] and local-llm-comparison [3] by hobbyists are more useful than most of the benchmark rankings.
That being said, personally I mostly use GPT-4 for code assistance, so that's what I'm most interested in, and the latest code assistants are scoring quite well: https://github.com/abacaj/code-eval - a recent replit-3b fine-tune tops the human-eval results for open models (as a point of reference, GPT-3.5 gets 60.4 on pass@1 and 68.9 on pass@10 [4]). I've only just started playing around with it since the replit model tooling is not as good as Llama's (doc here: https://llm-tracker.info/books/howto-guides/page/replit-mode...).
I'm interested in potentially applying reflexion or some of the other techniques that have been tried to even further increase coding abilities. (InterCode in particular has caught my eye https://intercode-benchmark.github.io/)
[1] https://github.com/turboderp/exllama#results-so-far
[2] https://github.com/aigoopy/llm-jeopardy
[3] https://github.com/Troyanovsky/Local-LLM-comparison/tree/mai...
[4] https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
- WizardLM-13B-V1.0-Uncensored
You talking about this? https://github.com/nlpxucan/WizardLM
- What 7B LLM to use
The smallest model that is close to competent at code is WizardCoder 15B: https://github.com/nlpxucan/WizardLM/
- WizardCoder: Empowering Code Large Language Models with Evol-Instruct (16-Jun-2023) - https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder
can-ai-code
- Ask HN: Code Llama 70B on a dedicated server
You can run a Q4 quant of a 70B model in about 40GB of RAM (+ context). Your single-user (batch size 1, bs=1) inference speed will be basically memory-bottlenecked, so on a dual-channel dedicated box you'd expect somewhere around 1 token/s. And that's just generation; prefill/prompt processing will take even longer (as your chat history grows) on CPU. So it falls into the realm of the technically possible, but not real-world use.
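The ~1 token/s estimate follows from the memory-bandwidth bound: at bs=1, every generated token has to stream essentially all of the model's weights through RAM once. A minimal sketch, assuming the ~40GB Q4 70B model from the comment and a dual-channel DDR4-3200 box (the ~50 GB/s bandwidth figure is an assumption, not from the comment):

```python
# bs=1 decode-speed ceiling for a memory-bandwidth-bound model:
# each generated token requires one full pass over the weights.
model_size_gb = 40        # Q4 quant of a 70B model (per the comment)
mem_bandwidth_gb_s = 50   # assumed dual-channel DDR4-3200 peak; real-world is lower

tokens_per_s = mem_bandwidth_gb_s / model_size_gb
print(f"upper bound: ~{tokens_per_s:.2f} tokens/s")  # ~1.25 tok/s, i.e. about 1 tok/s in practice
```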
If you're looking specifically for CodeLlama 70B, Artificial Analysis https://artificialanalysis.ai/models/codellama-instruct-70b/... lists Perplexity, Together.ai, Deep Infra, and Fireworks as potential hosts, with Together.ai and Deepinfra at about $0.9/1M tokens, with about 30 tokens/s and about 300ms latency (time to first token).
For those looking specifically for local coding models: I keep a list of LLM coding evals here: https://llm-tracker.info/evals/Code-Evaluation
On the EvalPlus Leaderboard, there are about 10 open models that rank higher than CodeLlama 70B, all of them smaller: https://evalplus.github.io/leaderboard.html
A few other evals worth cross-referencing (to counter contamination and overfitting):
* CRUXEval Leaderboard https://crux-eval.github.io/leaderboard.html
* CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
* Big Code Models Leaderboard https://huggingface.co/spaces/bigcode/bigcode-models-leaderb...
From the various leaderboards, deepseek-ai/deepseek-coder-33b-instruct still looks like the best-performing open model (it has a very liberal ethical license), followed by ise-uiuc/Magicoder-S-DS-6.7B (a deepseek-coder-6.7b-base fine-tune). The former can be run as a Q4 quant on a single 24GB GPU (a used 3090 should run you about $700 atm), and the latter, if it works for you, will run 4x faster and fit on even cheaper/weaker GPUs.
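For context on why the 33B model fits on a 24GB card, a back-of-the-envelope VRAM check (the bits-per-weight value is an assumption; it varies by quant scheme):

```python
# Rough weight-memory estimate for a 33B model at 4-bit quantization.
params = 33e9
bits_per_weight = 4.5  # assumed effective rate for a Q4-style quant

weights_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~19 GB, leaving headroom for KV cache on a 24GB card
```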
There are always new developments, but two worth pointing out:
OpenCodeInterpreter - a new system, fine-tuned from the DeepSeek code models, that uses execution feedback and outperforms the ChatGPT-4 Code Interpreter: https://opencodeinterpreter.github.io/
StarCoder2-15B just dropped and also looks competitive. Announcement and relevant links: https://huggingface.co/blog/starcoder2
- Meta AI releases Code Llama 70B
This is a completely fair but open-ended question. Not to be a typical HN user, but when you say SOTA local, the real question is which benchmarks you actually care about for evaluation: size, operability, complexity, explainability, etc.
Working out which copilot models perform best has been a deep exercise for me; it has really made me examine my own coding style, what I find important, and what I look out for when investigating models and evaluating interview candidates.
I think the three benchmarks & leaderboards most people go to are:
https://huggingface.co/spaces/bigcode/bigcode-models-leaderb... - the most widely understood, broad language-capability leaderboard, relying on well-understood evaluations and benchmarks.
https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... - Also comprehensive, but primarily assesses Python and JavaScript.
https://evalplus.github.io/leaderboard.html - which I think is a better take for comparing models you intend to run locally, as you can evaluate performance, operability, and size in one visualisation.
Best of luck and I would love to know which models & benchmarks you choose and why.
- Stable Code 3B: Coding on the Edge
Here is a leaderboard of some models:
https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
Don't know how biased this leaderboard is, but I guess you could just give some of them a try and see for yourself.
- Mistral has an even more powerful model in the prototype phase
- Can AI Code? - https://huggingface.co/spaces/mike-ravkine/can-ai-code-results
- Assessing LLMs for code generation
Check out https://github.com/the-crypt-keeper/can-ai-code for some ideas. I'd love to see more shootouts like this, especially if they were spread across a few different languages.
- Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2
Very cool, this looks like a combination of chatbot-ui and llama-cpp-python? A similar project I've been using is https://github.com/serge-chat/serge. Nous-Hermes-Llama2-13b is my daily driver and scores high on coding evaluations (https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...).
- How Is LLaMa.cpp Possible?
I have several sets of quant comparisons posted on my HF spaces; the caveat is that my prompts are all "English to code": https://huggingface.co/spaces/mike-ravkine/can-ai-code-compa...
The dropdown at the top selects which comparison: Falcon compares GGML, Vicuna compares bitsandbytes. I have some more comparisons planned; feel free to open an issue if you'd like to see something specific: https://github.com/the-crypt-keeper/can-ai-code
- Ask HN: Who is using small OS LLMs in production?
Yeah, it seemed suspiciously high for HumanEval, and it only ranks 14th for JS and 7th for Python on other benchmarks now: https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
WizardCoder is a bit of a problem: it's not Llama 1/2-based but its own 15B model, so support for it in anything practical is nearly nonexistent. WizardLM v1.2 looks like it may be worth checking out.
- Recent updates on the LLM Explorer (15,000+ LLMs listed)
There are at least 4 different types of quants floating around HF (bitsandbytes, GGML, GPTQ and AWQ), so I don't know if a "GGML" column makes sense vs a more abstract way of linking quants to their base models. I am doing this and it's fucking awful: https://github.com/the-crypt-keeper/can-ai-code/blob/main/models/models.yaml
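For illustration, one more abstract shape that linkage could take: record quants as variants under their base model rather than as per-format columns. This is a hypothetical sketch, not the actual models.yaml schema, and the repo names are examples only:

```python
# Hypothetical schema sketch: quants grouped under their base model.
models = {
    "deepseek-coder-33b-instruct": {
        "base_repo": "deepseek-ai/deepseek-coder-33b-instruct",
        "quants": [
            {"format": "GGUF", "repo": "TheBloke/deepseek-coder-33B-instruct-GGUF"},
            {"format": "AWQ", "repo": "TheBloke/deepseek-coder-33B-instruct-AWQ"},
        ],
    },
}

# A new quant format is just another list entry, not a new column.
for name, info in models.items():
    for quant in info["quants"]:
        print(f"{name} <- {quant['format']}: {quant['repo']}")
```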
- Did anyone try to benchmark LLMs for coding against each other and against proprietary ones like Copilot X?
Ah I meant this one but I see now it's WIP.
What are some alternatives?
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
llm-humaneval-benchmarks
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
openchat - OpenChat: Advancing Open-source Language Models with Imperfect Data
airoboros - Customizable implementation of the self-instruct paper.
Local-LLM-Comparison-Colab-UI - Compare the performance of different LLM that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
llm-mlc - LLM plugin for running models using MLC
chat-ui - Open source codebase powering the HuggingChat app
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.