refactor-benchmark vs llama-cpp-python

| | refactor-benchmark | llama-cpp-python |
|---|---|---|
| Mentions | 2 | 56 |
| Stars | 23 | 6,850 |
| Growth | - | - |
| Activity | 5.9 | 9.8 |
| Latest Commit | 4 months ago | 2 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
refactor-benchmark
- GPT-4 Turbo with Vision is a step backwards for coding
FWIW, I agree with you that each model has its own personality and that models may do better or worse on different kinds of coding tasks. Aider leans into both of these concepts.
The GPT-4 Turbo models have a lazy coding personality, and I spent months of effort figuring out how to both measure and reduce that laziness. This resulted in aider supporting "unified diffs" as a code editing format to reduce such laziness by 3X [0] and the aider refactoring benchmark as a way to quantify these benefits [1].
The benchmark results I just shared about GPT-4 Turbo with Vision cover both smaller, toy coding problems [2] as well as larger edits to larger source files [3]. The new model slightly underperforms on smaller coding tasks, and significantly underperforms on the larger edits where laziness is often a culprit.
[0] https://aider.chat/2023/12/21/unified-diffs.html
[1] https://github.com/paul-gauthier/refactor-benchmark
[2] https://aider.chat/2024/04/09/gpt-4-turbo.html#code-editing-...
[3] https://aider.chat/2024/04/09/gpt-4-turbo.html#lazy-coding
- OpenAI: Memory and New Controls for ChatGPT
1-2 sentences: Rather than writing code, GPT-4 Turbo often inserts comments like "... finish implementing function here ...". I made a benchmark that provokes and quantifies that behavior.
1-2 paragraphs:
I found that I could provoke lazy coding by giving GPT-4 Turbo refactoring tasks, where I ask it to refactor a large method out of a large class. I analyzed nine popular open source Python repos, found 89 such methods that were conceptually easy to refactor, and built them into a benchmark [0].
GPT succeeds on a task if it can remove the method from its original class and add it to the top level of the file with only small changes to the size of the abstract syntax tree (AST). By measuring the size of the AST, we can infer that GPT didn't replace a bunch of code with a comment like "... insert original method here...". I also gathered other laziness metrics, like counting the number of new comments that contained "...", which correlated well with the AST size test.
[0] https://github.com/paul-gauthier/refactor-benchmark
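To make the AST-size idea concrete, here is a minimal sketch of such a check (not the benchmark's actual harness; the function names and the 10% tolerance are assumptions):

```python
import ast

def ast_size(source: str) -> int:
    """Count the nodes in a file's abstract syntax tree."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

def looks_lazy(original: str, refactored: str, tolerance: float = 0.10) -> bool:
    """Heuristic laziness check: a correct refactor should roughly
    preserve AST size; a large shrink suggests code was replaced with
    a comment. The 10% tolerance is an assumption, not the benchmark's
    actual setting."""
    before, after = ast_size(original), ast_size(refactored)
    shrunk_too_much = after < before * (1 - tolerance)
    # Secondary signal: "..." placeholder comments in the edited file.
    elided_comments = any(
        "..." in line
        for line in refactored.splitlines()
        if line.lstrip().startswith("#")
    )
    return shrunk_too_much or elided_comments
```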
llama-cpp-python
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
There's a Python binding for llama.cpp which is actively maintained and has worked well for me: https://github.com/abetlen/llama-cpp-python
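For reference, basic usage looks roughly like this (a minimal sketch; the model path is a placeholder and the sampling parameters are illustrative, not recommendations):

```python
from llama_cpp import Llama

# Load a local GGUF model; the path below is a placeholder for your own file.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

# Simple completion-style call.
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```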
- FLaNK AI for 11 March 2024
- OpenAI: Memory and New Controls for ChatGPT
I'll share the core bit, since it took a while to figure out the right format; my main script is a hot mess using embeddings with SentenceTransformer, so I won't share that yet. E.g., last night I did a PR for llama-cpp-python that shows how Phi might be used with JSON, only for the author to write almost exactly the same code at pretty much the same time. https://github.com/abetlen/llama-cpp-python/pull/1184
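For anyone attempting the same thing, JSON-constrained output with llama-cpp-python looks roughly like this (a sketch, not the code from that PR; the model path and prompt are placeholders):

```python
from llama_cpp import Llama

# Placeholder path to a local Phi GGUF file.
llm = Llama(model_path="./models/phi-2.Q4_K_M.gguf", n_ctx=2048)

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant that answers in JSON."},
        {"role": "user", "content": "List three planets and their diameters in km."},
    ],
    # Constrains decoding so the model can only emit valid JSON.
    response_format={"type": "json_object"},
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```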
- TinyLlama LLM: A Step-by-Step Guide to Implementing the 1.1B Model on Google Colab
Python Bindings for llama.cpp
- Mistral-8x7B-Chat
- Running Mistral LLM on Apple Silicon Using Apple's MLX Framework Is Much Faster
If the model could be made to work with llama.cpp, then https://github.com/abetlen/llama-cpp-python might be more compact. llama.cpp only supports a limited list of model architectures, though.
- Run ChatGPT-like LLMs on your laptop in 3 lines of code
- Code Llama, a state-of-the-art large language model for coding
https://github.com/abetlen/llama-cpp-python has a web server mode that replicates OpenAI's API, IIRC, and the readme shows it has Docker builds already.
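Roughly, using that server means launching it against a local GGUF file and then pointing a standard OpenAI client at it (a sketch; the install extra and launch command follow the project's readme, while the model path, port, and prompt are placeholders):

```python
# Start the OpenAI-compatible server first (shell commands shown as comments):
#   pip install 'llama-cpp-python[server]'
#   python -m llama_cpp.server --model ./models/codellama-7b.Q4_K_M.gguf --port 8000

from openai import OpenAI

# The API key is unused by the local server, but the client requires a string.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # typically ignored by the single-model server
    messages=[{"role": "user", "content": "Write a hello-world in Python."}],
)
print(resp.choices[0].message.content)
```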
- Meta: Code Llama, an AI Tool for Coding
LocalAI https://localai.io/ and LMStudio https://lmstudio.ai/ both have fairly complete OpenAI compatibility layers. llama-cpp-python has a FastAPI server as well: https://github.com/abetlen/llama-cpp-python/blob/main/llama_... (as of this moment it hasn't merged GGUF update yet though)
- First steps with llama
I went with Python, llama-cpp-python, since my goal is just to get a small project up and running locally.
What are some alternatives?
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware, with no GPU required. Runs gguf, transformers, diffusers, and many more model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.
intel-extension-for-pytorch - A Python package that extends official PyTorch to easily obtain extra performance on Intel platforms
llama.cpp - LLM inference in C/C++
text-generation-inference - Large Language Model Text Generation Inference
mlc-llm - Universal LLM Deployment Engine with ML Compilation
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
KoboldAI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
localLLM_guidance - Local LLM ReAct Agent with Guidance
continue - ⏩ Continue enables you to create your own AI code assistant inside your IDE. Keep your developers in flow with open-source VS Code and JetBrains extensions
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment