evals vs aider

| | evals | aider |
|---|---|---|
| Mentions | 49 | 65 |
| Stars | 14,048 | 10,084 |
| Growth | 3.3% | - |
| Activity | 9.3 | 9.9 |
| Latest commit | 12 days ago | 5 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
evals
-
Show HN: Times faster LLM evaluation with Bayesian optimization
Fair question.
Evaluation refers to the phase after training where you check whether the training actually worked.
Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at the evaluation step. Evaluation can be slow (it might even be slower than training if you're finetuning on a small domain-specific subset)!
So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation. However, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them exploit the fact that many evaluation queries are similar; they all evaluate every given query. That's where this project might come in handy.
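To make the "similar queries" idea concrete, here is a minimal sketch of my own (an illustration of the general approach, not the linked project's actual Bayesian optimization): embed the eval prompts, cluster them, score the model on one representative per cluster, and weight by cluster size. `run_model_and_score` is a hypothetical placeholder for your real model call plus grading.

```python
# Sketch: estimate eval accuracy from a representative subset of queries.
# NOTE: an illustration of the "similar queries" idea, not the linked
# project's actual Bayesian optimization. run_model_and_score() is a
# hypothetical placeholder for your real model call + grading (0.0-1.0).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def estimate_accuracy(prompts, run_model_and_score, n_clusters=10):
    # Embedding every prompt is cheap compared to LLM calls.
    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(prompts)

    # Group similar prompts together.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)

    weighted = 0.0
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # Score only the prompt closest to the cluster centroid...
        dists = np.linalg.norm(
            embeddings[members] - km.cluster_centers_[c], axis=1)
        rep = members[np.argmin(dists)]
        # ...and let its score stand in for the whole cluster.
        weighted += run_model_and_score(prompts[rep]) * len(members)
    return weighted / len(prompts)
```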
- I asked 60 LLMs a set of 20 questions
-
Ask HN: How are you improving your use of LLMs in production?
OpenAI open sourced their evals framework. You can use it to evaluate different models but also your entire prompt chain setup. https://github.com/openai/evals
They also have a registry of evals built in.
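For a sense of what writing one involves: a basic match-style eval is just a JSONL file of chat-formatted samples plus a registry YAML entry. Here is a minimal sketch based on the format in the repo's docs (the eval name `arithmetic-check` is made up for illustration; check the repo for the current format):

```python
# Sketch: a minimal "Match" eval for openai/evals. The eval name
# "arithmetic-check" is made up for illustration.
import json, os

os.makedirs("evals/registry/data/arithmetic-check", exist_ok=True)
samples = [
    {"input": [{"role": "system", "content": "Answer with only the number."},
               {"role": "user", "content": f"{a} + {b} ="}],
     "ideal": str(a + b)}
    for a, b in [(2, 2), (13, 8), (40, 2)]
]
with open("evals/registry/data/arithmetic-check/samples.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")

# Registry entry (evals/registry/evals/arithmetic-check.yaml):
#   arithmetic-check:
#     id: arithmetic-check.dev.v0
#     metrics: [accuracy]
#   arithmetic-check.dev.v0:
#     class: evals.elsuite.basic.match:Match
#     args:
#       samples_jsonl: arithmetic-check/samples.jsonl
#
# Run it against a model:
#   oaieval gpt-3.5-turbo arithmetic-check
```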
-
SuperAlignment
"What if" is all these "existential risk" conversations ever are.
Where is your evidence that we're approaching human level AGI, let alone SuperIntelligence? Because ChatGPT can (sometimes) approximate sophisticated conversation and deep knowledge?
How about some evidence that ChatGPT isn't even close? Just clone and run OpenAI's own evals repo https://github.com/openai/evals on the GPT-4 API.
It performs terribly on novel logic puzzles and exercises that a clever child could learn to do in an afternoon (there are some good chess evals, and I submitted one asking it to simulate a Forth machine).
-
What is that new "Alpha" tab in ChatGPT Plus? Are limits gone for standard GPT-4???
Ah well, I think you just got lucky then, I did the same with the survey. I'll be compulsively checking mine all day today lol. People on Reddit like to say that if you wrote an eval (basically a performance test run against GPT models using code), then OpenAI is more likely to favor you when they're releasing new features. If you didn't know, then I guess that answers that.
-
OpenAI Function calling and API updates
You can get GPT-4 access by submitting an eval, if it gets merged (https://github.com/openai/evals). Here's the one that got me access [1].
Although from the blog post it looks like they're planning to open up to everyone soon, so that may happen before you get through the evals backlog.
1: https://github.com/openai/evals/pull/778
- GitHub - openai/evals: Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
- There have been a lot of threads and comments around the models in ChatGPT and the API outputs getting much worse in the last few weeks. This is a huge reason why we open sourced https://github.com/openai/evals . You can write an eval and test the quality over time. No guesswork!
-
Spend time on openai evals - Community - OpenAI Developer Forum
Source: GitHub - openai/evals: Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
- Is it worth it to critique the dialogue chatgpt4 generates? I’m hoping the feedback I provide can somehow help it in future models. …Waste of time?
aider
-
I Spent 24 Hours with GitHub Copilot Workspaces
My open source tool aider [0] has long offered an "AI pair programming" workflow. Aider's UX is similar but not identical to Copilot Workspaces.
Aider is more of a collaborative chat, where you work with the LLM interactively asking for a sequence of changes to your git repo. The changes can be non-trivial, modifying a group of files in a coordinated way. So much more than just the original copilot "autocomplete".
Workspaces seems more agentic, a bit like Devin. You need to do a bunch of up-front work to (fully) specify the requirements. Then the agent goes off and (hopefully) builds what you want. You need to fully understand what you want to build up front, and you need to describe it unambiguously to the agent. Also, even with a perfect request, agents often go down wrong paths and waste a lot of time and tokens doing the wrong thing.
That's not how I code personally. My process is more iterative, where I explore the problem and solution spaces as I build.
The other difference between aider and Workspaces is that currently aider is a terminal CLI tool, although I just released a basic browser UI [1] the other day, which makes it more approachable for folks who are not fully comfortable on the command line.
[0] https://github.com/paul-gauthier/aider
[1] https://aider.chat/2024/05/02/browser.html
-
Agents of Change: Navigating the Rise of AI Agents in 2024
Aider was developed by Paul Gauthier and focuses on giving developers a pair programming experience directly in the terminal. This command-line tool edits code in real time based on user prompts. As of this writing it only supports OpenAI's API, but it can write, edit, and refine code across multiple languages, including Python, JavaScript, and HTML. Developers can use Aider for code generation, debugging, and understanding complex projects.
-
2markdown – Transform Websites into Markdown
I built a similar thing in Python using Playwright and Pandoc [0]. It's used by aider's `/web` command, which lets you paste a markdown version of any webpage into your AI coding chat. This helps if you want to include docs for an obscure or non-public package/API/etc with the LLM while coding.
I really value dependencies which are easy for all users to install, cross-platform. Playwright is nice because it has a simple way to install its dependencies on most platforms. And the `pypandoc` module provides a seamless install of pandoc across platforms.
The result turns most web pages into nice markdown without requiring users to solve some painful platform specific chromium dependency nightmare.
[0] https://github.com/paul-gauthier/aider/blob/main/aider/scrap...
-
Aider: AI pair programming in your terminal
Thanks for trying aider, and sorry to hear you had trouble getting the hang of it. It might be worth looking through some of the tips on the aider GitHub page [0].
In particular, this is one of the most important tips: Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
Not sure if this was a factor in your attempts? I'd be happy to help if you'd like to open a GitHub issue [1] or jump into our Discord [2].
[0] https://github.com/paul-gauthier/aider#tips
[1] https://github.com/paul-gauthier/aider/issues/new/choose
[2] https://discord.gg/Tv2uQnR88V
-
Ask HN: If you've used GPT-4-Turbo and Claude Opus, which do you prefer?
Have you tried something like Agentic’s Glide? (They announced it this week here on HN)
They use GPT, but they might be able to configure it to use Claude.
Another tool to check out could be aider https://github.com/paul-gauthier/aider
-
Launch HN: Glide (YC W19) – AI-assisted technical design docs
Are you aware of the work on https://github.com/paul-gauthier/aider? What's your take on generating code diffs directly instead of code editing instructions?
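For readers wondering what "generating code diffs directly" can look like in practice: one common shape is having the LLM quote the exact existing code to replace, then applying that edit mechanically. A minimal sketch of the applying side (illustrative only, not aider's exact edit format):

```python
# Sketch: apply an LLM-emitted search/replace edit to a file.
# Illustrative only -- not aider's exact edit format.
from pathlib import Path

def apply_edit(path: str, search: str, replace: str) -> None:
    text = Path(path).read_text()
    if search not in text:
        # Requiring an exact quote of the existing code is what makes
        # direct diffs verifiable: a hallucinated edit simply fails
        # instead of being silently misapplied.
        raise ValueError(f"search block not found in {path}")
    Path(path).write_text(text.replace(search, replace, 1))

# Tiny demo: create a file, then apply an edit to it.
Path("hello.py").write_text('print("hello")\n')
apply_edit(
    "hello.py",
    search='print("hello")\n',
    replace='print("hello, world")\n',
)
```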
-
A Man in Seat 61
He should add AI to his site!
Not really - the site is great as-is and there's nothing wrong with this approach. It looks like it works really well for Mr. 61.
But I'd imagine it'd be pretty helpful to write tools that leverage LLMs to help maintain the site: a combination of search, AI rewriting, and review of the individual edits (e.g. through selective git adds).
I'm imagining a tool like https://github.com/paul-gauthier/aider (which I haven't tried yet, but it looks useful for this kind of effort).
- Ask HN: What is the, currently, best Programming LLM (copilot) subscriptions?
-
Web Scraping in Python – The Complete Guide
I recently used [0] Playwright for Python and [1] pypandoc to build a scraper that fetches a webpage and turns the content into sane markdown so that it can be passed into an AI coding chat [2].
They are both very gentle dependencies to add to a project. Both packages contain built-in or scriptable methods to install their underlying platform-specific binary dependencies. This means you don't need to ask end users to use some complex, platform-specific package manager to install Playwright and Pandoc.
Playwright lets you scrape pages that rely on JS. Pandoc is great at turning HTML into sensible markdown. Below is an excerpt of the OpenAI pricing docs [3] that have been scraped to markdown [4] in this manner.
[0] https://playwright.dev/python/docs/intro
[1] https://github.com/JessicaTegner/pypandoc
[2] https://github.com/paul-gauthier/aider
[3] https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turb...
[4] https://gist.githubusercontent.com/paul-gauthier/95a1434a28d...
## GPT-4 and GPT-4 Turbo
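A condensed sketch of that pipeline (simplified relative to aider's actual scraper module, which handles many more edge cases):

```python
# Sketch: fetch a JS-rendered page with Playwright and convert the
# HTML to markdown with pypandoc. Simplified relative to aider's
# actual scraper module.
import pypandoc
from playwright.sync_api import sync_playwright

def page_to_markdown(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)          # JS runs, unlike a plain HTTP GET
        html = page.content()   # the rendered DOM, not the raw source
        browser.close()
    # Pandoc does the heavy lifting of HTML -> sane markdown.
    return pypandoc.convert_text(html, to="markdown", format="html")

# One-time setup (both scriptable, cross-platform):
#   python -m playwright install chromium
#   python -c "import pypandoc; pypandoc.download_pandoc()"
print(page_to_markdown("https://example.com")[:500])
```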
-
DeepSeek Coder: Let the Code Write Itself
Thanks for trying aider, and sorry to hear you had trouble getting the hang of it. It might be worth looking through some of the tips on the aider github page:
https://github.com/paul-gauthier/aider#tips
In particular, this is one of the most important tips: Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
Not sure if this was a factor in your attempts? But it's best not to ask for a big sweeping change all at once. It's hard to unambiguously and completely specify what you want, and it's also harder for GPT to succeed at bigger changes in one bite.
What are some alternatives?
gpt4-pdf-chatbot-langchain - GPT4 & LangChain Chatbot for large PDF docs
gpt-engineer - Specify what you want it to build, the AI asks for clarification, and then builds it.
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
gpt-pilot - The first real AI developer
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
llama-cpp-python - Python bindings for llama.cpp
gpt4free - The official gpt4free repository | various collection of powerful language models
ollama-ui - Simple HTML UI for Ollama
clownfish - Constrained Decoding for LLMs against JSON Schema
tabby - Self-hosted AI coding assistant
BIG-bench - Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
continue - ⏩ Open-source VS Code and JetBrains extensions that enable you to easily create your own modular AI software development system