| | star-history | gpt-pilot-db-analysis-tool |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 7 | 14 |
| Growth | - | - |
| Activity | 4.4 | 7.6 |
| Last commit | 4 months ago | 4 months ago |
| Language | JavaScript | JavaScript |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
star-history
-
What I learned in 6 months of working on a CodeGen dev tool, GPT Pilot
⏳ Time spent: 6 hours 🪙 OpenAI tokens spent: ~400k ($12) 💾 GitHub repo
gpt-pilot-db-analysis-tool
-
What we learned in 6 months of working on an AI Developer
Hm.
It’s easy to look at https://github.com/Pythagora-io/gpt-pilot-db-analysis-tool/b... and go… so, this new tool means you took two days to write this?
long stare
Why did you bother?
…but, this both hits the nail on the head and misses the point at the same time.
On the one hand, this is foundational tech, prototyping a new way of doing things. It’s not going to be faster than doing it yourself.
On the other hand, we already know that GPT4 level models can do trivial tasks.
Over and over and over, people claim coding tools can massively improve productivity, and then try to demo that by building a trivial system.
…but building trivial systems is not the problem that needs solving.
The problem that needs solving is building large complex systems with dynamically adjusting requirements.
The examples and blog post seem to miss this even as an idea.
While I applaud, in general, efforts to explore this space, tackling the easy problems seems like it doesn’t significantly advance the state of play.
Here are some concrete things that would be more valuable, but are significantly technically harder:
- Use tests. Make it write tests. Make humans write tests. Do not accept generated code that fails the tests.
- Focus on refactoring; it’s a known issue that models struggle to refactor code. Breaking your existing code base into tiny files isn’t the answer.
- Focus on documenting the behaviour of existing code and incrementally migrating to new behaviour.
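The test-gating idea in the first bullet can be sketched as a small acceptance loop: generated candidates are run against human-written tests, and anything that fails is rejected and regenerated. This is a hypothetical illustration, not GPT Pilot's actual pipeline; the names `accept_candidate` and `generate_until_green`, and the callable `generate`, are all invented for the sketch.

```python
# Sketch of a test-gated code-generation loop (hypothetical; not GPT
# Pilot's real API). Candidates that fail the human-written tests are
# rejected, and the generator is asked for another attempt.

def accept_candidate(candidate_src: str, tests) -> bool:
    """Execute a generated candidate and run the test suite against it.

    Any exception (syntax error, failed assertion, missing symbol)
    counts as rejection -- generated code never merges on faith.
    """
    namespace = {}
    try:
        exec(candidate_src, namespace)   # load the candidate's definitions
        for test in tests:
            test(namespace)              # each test raises on failure
        return True
    except Exception:
        return False


def generate_until_green(generate, tests, max_attempts=3):
    """Keep asking the generator until a candidate passes every test.

    `generate` is any callable taking the attempt index and returning
    candidate source. Returns None if no attempt passes, surfacing the
    failure instead of shipping broken code.
    """
    for attempt in range(max_attempts):
        candidate = generate(attempt)
        if accept_candidate(candidate, tests):
            return candidate
    return None
```

In a real tool the `exec` call would be a sandboxed subprocess running the project's actual test suite, but the gating logic is the same: the tests, not the model, decide what gets accepted.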
The tool as shown is, I believe, broadly interesting, but the approach described in the blog post (making upfront decisions about everything) is a dead end.
-
What I learned in 6 months of working on a CodeGen dev tool, GPT Pilot
⏳ Time spent: ~2 days 🪙 OpenAI tokens spent: ~1.2M ($38) 💾 GitHub repo