Refactor-benchmark Alternatives
refactor-benchmark reviews and mentions
-
GPT-4 Turbo with Vision is a step backwards for coding
FWIW, I agree with you that each model has its own personality and that models may do better or worse on different kinds of coding tasks. Aider leans into both of these concepts.
The GPT-4 Turbo models have a lazy coding personality, and I spent months of effort figuring out how to both measure and reduce that laziness. This work resulted in aider supporting "unified diffs" as a code editing format, which reduces such laziness by 3X [0], and in the aider refactoring benchmark, which quantifies these benefits [1]. (A brief illustration of the unified diff format follows the links below.)
The benchmark results I just shared about GPT-4 Turbo with Vision cover both smaller, toy coding problems [2] and larger edits to larger source files [3]. The new model slightly underperforms on the smaller coding tasks and significantly underperforms on the larger edits, where laziness is often the culprit.
[0] https://aider.chat/2023/12/21/unified-diffs.html
[1] https://github.com/paul-gauthier/refactor-benchmark
[2] https://aider.chat/2024/04/09/gpt-4-turbo.html#code-editing-...
[3] https://aider.chat/2024/04/09/gpt-4-turbo.html#lazy-coding
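For readers unfamiliar with the format: below is a minimal, hypothetical sketch using Python's standard difflib module to show what a unified-diff edit looks like. The fib example and the placeholder comment are illustrative; this is not aider's implementation.

```python
import difflib

# Original file contents, and a "lazy" edit where the model replaced a
# working function body with a placeholder comment (both hypothetical).
original = """\
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
""".splitlines(keepends=True)

edited = """\
def fib(n):
    # ... finish implementing function here ...
    pass
""".splitlines(keepends=True)

# A unified diff expresses the edit as compact hunks of context lines,
# removals (-), and additions (+), rather than a full-file rewrite.
print("".join(difflib.unified_diff(
    original, edited, fromfile="a/fib.py", tofile="b/fib.py")))
```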
-
OpenAI: Memory and New Controls for ChatGPT
1-2 sentences: Rather than writing code, GPT-4 Turbo often inserts comments like "... finish implementing function here ...". I made a benchmark that provokes and quantifies that behavior.
1-2 paragraphs:
I found that I could provoke lazy coding by giving GPT-4 Turbo refactoring tasks, asking it to refactor a large method out of a large class. I analyzed 9 popular open source Python repos, found 89 such methods that were conceptually easy to refactor, and built them into a benchmark [0].
GPT succeeds on a task if it can remove the method from its original class and add it to the top level of the file while keeping the size of the abstract syntax tree appropriately close to the original. By comparing AST sizes, we can infer that GPT didn't replace a bunch of code with a comment like "... insert original method here ...". I also gathered other laziness metrics, like counting the number of new comments that contained "...", which correlated well with the AST size test. (A sketch of both checks appears after the link below.)
[0] https://github.com/paul-gauthier/refactor-benchmark
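Both checks are easy to approximate with Python's standard library. The sketch below is a hypothetical reconstruction, not the benchmark's actual code: the file names and the 10% tolerance are assumptions chosen for illustration.

```python
import ast
import io
import tokenize

def ast_size(source: str) -> int:
    """Count the nodes in a module's abstract syntax tree."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

def ellipsis_comments(source: str) -> int:
    """Count comments containing '...', a proxy for lazy placeholders."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return sum(1 for tok in tokens
               if tok.type == tokenize.COMMENT and "..." in tok.string)

# Hypothetical file names: a refactor that merely moves a method should
# roughly preserve AST size, while a lazy edit shrinks it sharply.
before = open("original.py").read()
after = open("refactored.py").read()

if ast_size(after) < 0.9 * ast_size(before):  # illustrative 10% tolerance
    print("Likely lazy edit: the AST shrank substantially")
if ellipsis_comments(after) > ellipsis_comments(before):
    print("New '...' placeholder comments were introduced")
```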
Stats
paul-gauthier/refactor-benchmark is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of refactor-benchmark is Python.