can-ai-code Alternatives
Similar projects and alternatives to can-ai-code
-
text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
-
WizardLM
Discontinued. Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder, and WizardMath
-
Local-LLM-Comparison-Colab-UI
Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run it yourself with the Colab WebUI.
-
llama-gpt
A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
-
landmark-attention-qlora
Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA
can-ai-code reviews and mentions
-
Ask HN: Code Llama 70B on a dedicated server
You can run a Q4 quant of a 70B model in about 40GB of RAM (+context). Your single-user (batch size 1, bs=1) inference speed will be essentially memory-bandwidth bottlenecked, so on a dual-channel dedicated box you'd expect somewhere around 1 token/s. That's inference; prefill/prompt processing will take even longer (as your chat history grows) on CPU. So it falls into the realm of the technically possible, but not for real-world use.
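To see where that ~1 token/s figure comes from: at bs=1, each generated token has to stream essentially all of the model weights through memory once, so the token rate is bounded by memory bandwidth divided by model size. A minimal sketch; the ~51 GB/s bandwidth figure is my assumption for dual-channel DDR4-3200, not from the comment:

```python
# bs=1 CPU inference is memory-bandwidth bound: every generated token reads
# (roughly) all model weights once, so tokens/s <= bandwidth / model size.

model_size_gb = 40          # Q4 quant of a 70B model, per the estimate above
mem_bandwidth_gbps = 51.2   # assumption: dual-channel DDR4-3200, 2 x 25.6 GB/s

tokens_per_sec = mem_bandwidth_gbps / model_size_gb
print(f"~{tokens_per_sec:.1f} tokens/s upper bound")  # ~1.3 tokens/s
```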
If you're looking specifically for CodeLlama 70B, Artificial Analysis https://artificialanalysis.ai/models/codellama-instruct-70b/... lists Perplexity, Together.ai, Deep Infra, and Fireworks as potential hosts; Together.ai and Deep Infra come in at about $0.9/1M tokens, with about 30 tokens/s and about 300ms latency (time to first token).
For those looking specifically for local coding models, I keep a list of LLM coding evals here: https://llm-tracker.info/evals/Code-Evaluation
On the EvalPlus Leaderboard, there are about 10 open models that rank higher than CodeLlama 70B, all of them smaller models: https://evalplus.github.io/leaderboard.html
A few other evals worth cross-referencing (to counter contamination and overfitting):
* CRUXEval Leaderboard https://crux-eval.github.io/leaderboard.html
* CanAiCode Leaderboard https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
* Big Code Models Leaderboard https://huggingface.co/spaces/bigcode/bigcode-models-leaderb...
From the various leaderboards, deepseek-ai/deepseek-coder-33b-instruct still looks like the best-performing open model (it has a very liberal ethical license), followed by ise-uiuc/Magicoder-S-DS-6.7B (a deepseek-coder-6.7b-base fine-tune). The former can be run as a Q4 quant on a single 24GB GPU (a used 3090 should run you about $700 atm), and the latter, if it works for you, will run about 4X faster and fit on even cheaper/weaker GPUs.
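As a quick sanity check on the 24GB claim: weight memory is roughly parameters times bits-per-weight divided by 8. A rough sketch; the ~4.5 bits/weight figure is my assumption for Q4 quants including quantization overhead:

```python
# VRAM estimate for a Q4 quant: params * bits-per-weight / 8, plus headroom
# for the KV cache (context) and runtime buffers.

params_billion = 33      # deepseek-coder-33b-instruct
bits_per_weight = 4.5    # assumption: Q4 quants average ~4.5 bits/weight

weights_gb = params_billion * bits_per_weight / 8
print(f"~{weights_gb:.1f} GB of weights")  # ~18.6 GB, fits a 24 GB GPU with room for context
```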
There are always recent developments, but two are worth pointing out:
OpenCodeInterpreter: a new system, fine-tuned from the DeepSeek code models, that uses execution feedback and outperforms the ChatGPT-4 Code Interpreter: https://opencodeinterpreter.github.io/
StarCoder2-15B just dropped and also looks competitive. Announcement and relevant links: https://huggingface.co/blog/starcoder2
-
Meta AI releases Code Llama 70B
This is a completely fair but open question. Not to be a typical HN user, but when you say SOTA local, the question is really which benchmarks you care about for evaluation: size, operability, complexity, explainability, etc.
Working out which copilot models perform best has been a deep exercise for me; it has really made me evaluate my own coding style, what I find important, and what I look out for when investigating models and evaluating interview candidates.
I think the three benchmarks & leaderboards most people go to are:
https://huggingface.co/spaces/bigcode/bigcode-models-leaderb... - which is the most understood, broad language-capability leaderboard, relying on well-understood evaluations and benchmarks.
https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul... - Also comprehensive, but primarily assesses Python and JavaScript.
https://evalplus.github.io/leaderboard.html - which I think is a better take for comparing models you intend to run locally, as you can evaluate performance, operability, and size in one visualisation.
Best of luck and I would love to know which models & benchmarks you choose and why.
-
Stable Code 3B: Coding on the Edge
Here is a leaderboard of some models:
https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
I don't know how biased this leaderboard is, but I guess you could just give some of them a try and see for yourself.
-
Mistral has an even more powerful model in the prototype phase
- Can AI Code? - https://huggingface.co/spaces/mike-ravkine/can-ai-code-results
-
Assessing LLMs for code generation
Check out https://github.com/the-crypt-keeper/can-ai-code for some ideas. I'd love to see more shootouts like this, especially if they were spread out among a few different languages.
-
Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2
Very cool, this looks like a combination of chatbot-ui and llama-cpp-python? A similar project I've been using is https://github.com/serge-chat/serge. Nous-Hermes-Llama2-13b is my daily driver and scores high on coding evaluations (https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...).
-
How Is LLaMa.cpp Possible?
I have several sets of quant comparisons posted on my HF spaces, the caveat is my prompts are all "English to code": https://huggingface.co/spaces/mike-ravkine/can-ai-code-compa...
The dropdown at the top selects which comparison: Falcon compares GGML quants, Vicuna compares bitsandbytes. I have some more comparisons planned; feel free to open an issue if you'd like to see something specific: https://github.com/the-crypt-keeper/can-ai-code
-
Ask HN: Who is using small OS LLMs in production?
Yeah, it seemed suspiciously high for HumanEval, and it only ranks 14th for JS and 7th for Python on other benchmarks now: https://huggingface.co/spaces/mike-ravkine/can-ai-code-resul...
WizardCoder is a bit of a problem since it's not Llama 1/2-based but is its own 15B model, and as such support for it in anything practical is near nonexistent. WizardLM v1.2 looks like it may be worth checking out.
-
Recent updates on the LLM Explorer (15,000+ LLMs listed)
There are at least 4 different types of quants floating around HF (bitsandbytes, GGML, GPTQ, and AWQ), so I don't know if a "GGML" column makes sense vs a more abstract way of linking quants to their base models. I am doing this and it's fucking awful: https://github.com/the-crypt-keeper/can-ai-code/blob/main/models/models.yaml
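For illustration, one abstract way to link quants back to their base models rather than hard-coding a column per format. This is a hypothetical sketch, not the actual models.yaml schema, and the repo names are just examples:

```python
# Hypothetical mapping: quantized HF repo -> (base model, quant format).
# A UI could then group all quants under their base model instead of
# maintaining a separate "GGML" column, "GPTQ" column, and so on.

QUANTS = {
    "TheBloke/WizardCoder-15B-GGML": ("WizardLM/WizardCoder-15B-V1.0", "GGML"),
    "TheBloke/WizardCoder-15B-GPTQ": ("WizardLM/WizardCoder-15B-V1.0", "GPTQ"),
}

def quants_for(base: str) -> list[tuple[str, str]]:
    """Return all known quantized variants of a given base model."""
    return [(repo, fmt) for repo, (b, fmt) in QUANTS.items() if b == base]

print(quants_for("WizardLM/WizardCoder-15B-V1.0"))
```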
-
Did anyone try to benchmark LLMs for coding against each other and against proprietary ones like Copilot X?
Ah, I meant this one, but I see now it's a WIP.
Stats
the-crypt-keeper/can-ai-code is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of can-ai-code is Python.