aider vs jsonformer

| | aider | jsonformer |
|---|---|---|
| Mentions | 61 | 25 |
| Stars | 9,450 | 3,793 |
| Growth | - | - |
| Activity | 9.9 | 5.4 |
| Latest commit | 7 days ago | 2 months ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aider
-
Aider: AI pair programming in your terminal
Thanks for trying aider, and sorry to hear you had trouble getting the hang of it. It might be worth looking through some of the tips on the aider GitHub page [0].
In particular, this is one of the most important tips: Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
Not sure if this was a factor in your attempts? I'd be happy to help you if you'd like to open a GitHub issue [1] or jump into our Discord [2].
[0] https://github.com/paul-gauthier/aider#tips
[1] https://github.com/paul-gauthier/aider/issues/new/choose
[2] https://discord.gg/Tv2uQnR88V
-
Ask HN: If you've used GPT-4-Turbo and Claude Opus, which do you prefer?
Have you tried something like Agentic’s Glide? (They announced it this week here on HN)
They use GPT, but they might be able to configure it to use Claude.
Another tool to check out could be aider https://github.com/paul-gauthier/aider
-
Launch HN: Glide (YC W19) – AI-assisted technical design docs
Are you aware of the work on https://github.com/paul-gauthier/aider? What's your take on generating code diffs directly instead of code editing instructions?
-
A Man in Seat 61
He should add AI to his site!
Not really - the site is great as-is and there's nothing wrong with this approach. It looks like it works really well for Mr. 61.
But I'd imagine it'd be pretty helpful to write tools that help with maintaining the site which do leverage LLMs. Do a combination of search, AI rewriting, and review of the individual edits (e.g. through selective git adds).
I'm imagining a tool like https://github.com/paul-gauthier/aider (which I haven't tried yet, but it looks useful for this kind of effort).
- Ask HN: What is the, currently, best Programming LLM (copilot) subscriptions?
-
Web Scraping in Python – The Complete Guide
I recently used [0] Playwright for Python and [1] pypandoc to build a scraper that fetches a webpage and turns the content into sane markdown so that it can be passed into an AI coding chat [2].
They are both very gentle dependencies to add to a project. Both packages contain built-in or scriptable methods to install their underlying platform-specific binary dependencies. This means you don't need to ask end users to use some complex, platform-specific package manager to install playwright and pandoc.
Playwright lets you scrape pages that rely on JS. Pandoc is great at turning HTML into sensible markdown. Below is an excerpt of the OpenAI pricing docs [3] that have been scraped to markdown [4] in this manner.
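The pipeline described above might be sketched roughly like this (function names are my own, not from the linked project; assumes the playwright and pypandoc packages, plus their browser and pandoc binaries, are installed):

```python
def fetch_html(url: str) -> str:
    """Render a JS-heavy page in a headless browser and return its final HTML."""
    # Lazy import so the sketch only requires playwright when actually fetching.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        html = page.content()  # HTML after JS has run
        browser.close()
    return html


def html_to_markdown(html: str) -> str:
    """Convert HTML into GitHub-flavored markdown via pandoc."""
    import pypandoc

    return pypandoc.convert_text(html, "gfm", format="html")


# Usage (requires network access, a Playwright browser, and pandoc):
#   md = html_to_markdown(fetch_html("https://platform.openai.com/docs/models"))
```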
[0] https://playwright.dev/python/docs/intro
[1] https://github.com/JessicaTegner/pypandoc
[2] https://github.com/paul-gauthier/aider
[3] https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turb...
[4] https://gist.githubusercontent.com/paul-gauthier/95a1434a28d...
## GPT-4 and GPT-4 Turbo
-
DeepSeek Coder: Let the Code Write Itself
Thanks for trying aider, and sorry to hear you had trouble getting the hang of it. It might be worth looking through some of the tips on the aider GitHub page:
https://github.com/paul-gauthier/aider#tips
In particular, this is one of the most important tips: Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
Not sure if this was a factor in your attempts? But it's best not to ask for a big sweeping change all at once. It's hard to unambiguously and completely specify what you want, and it's also harder for GPT to succeed at bigger changes in one bite.
-
Voxos.ai – An Open-Source Desktop Voice Assistant
How does Voxos help avoid copying & pasting code into your IDE? I had a look around the code base and don't see any indication that it allows GPT to directly edit your source files. But maybe I am missing it?
I'm asking because this is a major focus of my open source AI coding project aider [0]. I always like to see how other projects approach the challenge of letting GPT edit existing code. Most recently aider adopted unified diffs as the GPT 4 Turbo code editing format [1].
[0] https://github.com/paul-gauthier/aider
[1] https://aider.chat/docs/unified-diffs.html
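As an illustration of the format itself (not aider's actual implementation), Python's stdlib difflib can produce the kind of unified diff that the LLM is asked to emit:

```python
import difflib

before = [
    "def greet(name):\n",
    "    print('Hello ' + name)\n",
]
after = [
    "def greet(name):\n",
    "    print(f'Hello {name}')\n",
]

# A unified diff: @@ hunk headers plus -/+ lines, the same general
# format aider asks GPT-4 Turbo to emit when editing files.
diff = "".join(
    difflib.unified_diff(before, after, fromfile="a/greet.py", tofile="b/greet.py")
)
print(diff)
```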
-
LLMs and Programming in the first days of 2024
There is a bit of a learning curve in figuring out the most effective ways to collaboratively code with GPT, either through aider or other UXs. My best piece of advice is taken from aider's tips list and applies broadly to coding with LLMs:
Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
https://github.com/paul-gauthier/aider#tips
- Tell HN: My Favorite Tools
jsonformer
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
-
Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
- Tools like jsonformer https://github.com/1rgs/jsonformer are not possible with OpenAIs API.
-
Show HN: LLMs can generate valid JSON 100% of the time
How does this compare in terms of latency, cost, and effectiveness to jsonformer? https://github.com/1rgs/jsonformer
-
Ask HN: Explain how size of input changes ChatGPT performance
You're correct in your interpretation of how the model works w.r.t. returning tokens one at a time. The model returns one token, and the entire context window shifts right by one to account for it when generating the next one.
As for model performance at different context sizes, it seems a bit complicated. From what I understand, even if models are tweaked (for example using the SuperHOT RoPE hack or sparse attention) to be able to use longer contexts, they still have to be fine-tuned on inputs of this increased length to actually utilize it, but performance seems to degrade regardless as input length increases.
For your question about fine-tuning models to respond with only "yes" or "no", I recommend looking into how the jsonformer library works: https://github.com/1rgs/jsonformer . Essentially, you still let the model generate many tokens for the next position, and only accept the ones that satisfy certain criteria (such as the token for "yes" and the token for "no").
You can do this with the OpenAI API too, using tiktoken https://twitter.com/AAAzzam/status/1669753722828730378?t=d_W... . Be careful though, as results will be different on different selections of tokens: "YES", "Yes", "yes", etc. are all different tokens to the best of my knowledge.
- A framework to securely use LLMs in companies – Part 1: Overview of Risks
-
LLMs for Schema Augmentation
From here, we just need to continue generating tokens until we get to a closing quote. This technique was borrowed from Jsonformer, which uses a similar approach to induce LLMs to generate structured output. Continuing to do so for each property using Replit's code LLM gives the following output:
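A toy version of that generate-until-closing-quote loop (the "model" here is a scripted stand-in that emits a fixed token stream; a real implementation would sample each token from the LLM):

```python
def complete_string_value(prefix: str, next_token) -> str:
    """Keep appending model-proposed tokens until a closing quote appears."""
    out = prefix
    while True:
        tok = next_token(out)
        if '"' in tok:
            # Keep only the part up to the quote and stop: the string value
            # is complete, and the caller resumes building the rest of the JSON.
            out += tok[: tok.index('"')]
            return out
        out += tok

# Scripted "model" that emits a fixed token stream for the demo.
stream = iter(['Alice', ' Smith', '"', ', '])
value = complete_string_value('{"name": "', lambda _ctx: next(stream))
print(value)  # {"name": "Alice Smith
```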
-
Doesn't a 4090 massively overpower a 3090 for running local LLMs?
https://github.com/1rgs/jsonformer or https://github.com/microsoft/guidance may help get better results, but I ended up with a bit more of a custom solution.
-
“Sam altman won't tell you that GPT-4 has 220B parameters and is 16-way mixture model with 8 sets of weights”
I think function calling is just JSONformer idk: https://github.com/1rgs/jsonformer
- Inference Speed vs. Quality Hacks?
-
Best bet for parseable output?
jsonformer: https://github.com/1rgs/jsonformer
What are some alternatives?
gpt-engineer - Specify what you want it to build, the AI asks for clarification, and then builds it.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
gpt-pilot - The first real AI developer
clownfish - Constrained Decoding for LLMs against JSON Schema
llama-cpp-python - Python bindings for llama.cpp
outlines - Structured Text Generation
ollama-ui - Simple HTML UI for Ollama
gpt-json - Structured and typehinted GPT responses in Python
tabby - Self-hosted AI coding assistant
jikkou - The Open source Resource as Code framework for Apache Kafka
continue - ⏩ Open-source VS Code and JetBrains extensions that enable you to easily create your own modular AI software development system
evadb - Database system for AI-powered apps