plandex

Open source AI coding agent. Designed for large projects and real world tasks. (by plandex-ai)

Plandex Alternatives

Similar projects and alternatives to plandex

  1. ollama

    440 plandex VS ollama

    Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, and other large language models.

  2. CodeRabbit

    CodeRabbit: AI Code Reviews for Developers. Revolutionize your code reviews with AI. CodeRabbit offers PR summaries, code walkthroughs, 1-click suggestions, and AST-based analysis. Boost productivity and code quality across all major languages with each PR.

  3. asciinema

    Terminal session recorder 📹

  4. aider

    90 plandex VS aider

    aider is AI pair programming in your terminal

  5. semantic-kernel

    Integrate cutting-edge LLM technology quickly and easily into your apps

  6. Voyager

    55 plandex VS Voyager

    An Open-Ended Embodied Agent with Large Language Models (by MineDojo)

  7. litellm

    43 plandex VS litellm

    Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]

  8. vhs

    39 plandex VS vhs

    Your CLI home video recorder 📼

  9. SaaSHub

    SaaSHub - Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives

  10. openrouter-runner

    Inference engine powering open source models on OpenRouter

  11. llama3

    27 plandex VS llama3

    The official Meta Llama 3 GitHub site

  12. lighthouse

    A framework for serving GraphQL from Laravel (by nuwave)

  13. EdgeChains

    15 plandex VS EdgeChains

    EdgeChains.js is a full-stack GenAI library: front-end, backend, APIs, prompt management, and distributed computing. All core prompts & chains are managed declaratively in jsonnet (not hidden in classes).

  14. promptfoo

    5 plandex VS promptfoo

    Discontinued. Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. [Moved to: https://github.com/promptfoo/promptfoo] (by typpo)

  15. torchtune

    8 plandex VS torchtune

    PyTorch native post-training library

  16. DeepSeek-Coder

    9 plandex VS DeepSeek-Coder

    DeepSeek Coder: Let the Code Write Itself

  17. crewAI

    12 plandex VS crewAI

    Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.

  18. OLMo

    7 plandex VS OLMo

    Modeling, training, eval, and inference code for OLMo

  19. soulshack

    soulshack, an IRC chatbot. OpenAI/Ollama API compatible. Easy shell tooling.

  20. r2ai

    1 plandex VS r2ai

    local language model for radare2

  21. swarms

    4 plandex VS swarms

    The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework. Website: https://swarms.ai

  22. CopilotChat.nvim

    Chat with GitHub Copilot in Neovim

NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a better plandex alternative or higher similarity.


plandex reviews and mentions

Posts with mentions or reviews of plandex. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-08-22.
  • Coconut by Meta AI – Better LLM Reasoning with Chain of Continuous Thought?
    1 project | news.ycombinator.com | 31 Dec 2024
  • Ask HN: In 2024, is SWE a sustainable career?
    1 project | news.ycombinator.com | 7 Oct 2024
    I'm working on an agent-based AI coding tool[1] that is trying to push the limits on the size/complexity of tasks that can be automated by LLMs, so I think about this often. I'm also using the full gamut of AI tools for development (I have 5 subscriptions and counting).

    My opinion is that skilled engineers won't be replaced for a long, long time, if ever. While AI codegen will keep getting better and better, getting the best result out of it fundamentally requires knowing what to ask for and, beyond that, understanding exactly what is generated. This is because, for any given prompt, there are hundreds/thousands/millions/billions of viable paths to fulfilling it. There are no "correct answers" in a large software project. Rather there is an endlessly branching web of micro-decisions with associated tradeoffs.

    I think the job of engineers will gradually shift from writing code directly to navigating this web of tradeoffs.

    1 - https://plandex.ai

  • I'm Tired of Fixing Customers' AI Generated Code
    2 projects | news.ycombinator.com | 22 Aug 2024
    One thing I've found in doing a lot of coding with LLMs is that you're often better off updating the initial prompt and starting fresh rather than asking for fixes.

    Having mistakes in context seems to 'contaminate' the results and you keep getting more problems even when you're specifically asking for a fix.

    It does make some sense as LLMs are generally known to respond much better to positive examples than negative examples. If an LLM sees the wrong way, it can't help being influenced by it, even if your prompt says very sternly not to do it that way. So you're usually better off re-framing what you want in positive terms.

    I actually built an AI coding tool to help enable the workflow of backing up and re-prompting: https://github.com/plandex-ai/plandex
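
    As a rough illustration of that "back up and re-prompt" workflow, here is a minimal Go sketch. It is not Plandex's actual implementation; callModel, askForFix, and rePrompt are hypothetical names standing in for whatever chat-completion client and helpers you use.

      package main

      import "fmt"

      // Message is one turn in a chat-style conversation.
      type Message struct {
          Role    string // "system", "user", or "assistant"
          Content string
      }

      // callModel is a hypothetical stand-in for a chat-completion client call.
      func callModel(msgs []Message) string {
          return fmt.Sprintf("(model reply to %d messages)", len(msgs))
      }

      // askForFix appends the broken output plus a fix request to the existing
      // conversation -- the mistake stays in context and keeps influencing results.
      func askForFix(history []Message, brokenOutput, fixRequest string) string {
          history = append(history,
              Message{Role: "assistant", Content: brokenOutput},
              Message{Role: "user", Content: fixRequest},
          )
          return callModel(history)
      }

      // rePrompt instead revises the initial prompt and starts a fresh conversation,
      // restating the requirement in positive terms with no failed attempt in context.
      func rePrompt(systemPrompt, revisedPrompt string) string {
          return callModel([]Message{
              {Role: "system", Content: systemPrompt},
              {Role: "user", Content: revisedPrompt},
          })
      }

      func main() {
          fmt.Println(rePrompt(
              "You are a careful coding assistant.",
              "Write a function that parses ISO-8601 dates and returns an error for invalid input.",
          ))
      }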

  • Fine-tuning now available for GPT-4o
    3 projects | news.ycombinator.com | 20 Aug 2024
  • AI agents but they're working in big tech
    2 projects | news.ycombinator.com | 6 Aug 2024
    > Where you specify a top-level objective, it plans out those objectives, it selects a completion metric so that it knows when to finish, and iterates/reiterates over the output until completion?

    I built Plandex[1], which works roughly like this. The goal (so far) is not to take you from an initial prompt to a 100% working solution in one go, but to provide tools that help you iterate your way to a 90-95% solution. You can then fill in the gaps yourself.

    I think the idea of a fully autonomous AI engineer is currently mostly hype. Making that the target is good for marketing, but in practice it leads to lots of useless tire-spinning and wasted tokens. It's not a good idea, for example, to have the LLM try to debug its own output by default. It might, on a case-by-case basis, be a good idea to feed an error back to the LLM, but just as often it will be faster for the developer to do the debugging themselves.

    1 - https://plandex.ai
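
    For a sense of what "iterate toward 90-95% rather than auto-debug by default" can look like in code, here is a rough Go sketch under assumed names; generate, runChecks, and the autoDebug flag are all hypothetical and not Plandex's API.

      package main

      import "fmt"

      // generate and runChecks are hypothetical stand-ins for the codegen call
      // and the project's build/test step.
      func generate(prompt string) string { return "// generated code for: " + prompt }

      func runChecks(code string) (errOutput string, ok bool) { return "", true }

      // iterate runs a capped generate/check loop. Check failures are only fed
      // back to the model when autoDebug is set; otherwise the loop stops and
      // hands the error to the developer, who is often faster at diagnosing it.
      func iterate(prompt string, maxAttempts int, autoDebug bool) (string, error) {
          current := prompt
          for i := 0; i < maxAttempts; i++ {
              code := generate(current)
              errOutput, ok := runChecks(code)
              if ok {
                  return code, nil
              }
              if !autoDebug {
                  return code, fmt.Errorf("checks failed, stopping for manual review:\n%s", errOutput)
              }
              current = prompt + "\n\nThe previous attempt failed with:\n" + errOutput
          }
          return "", fmt.Errorf("gave up after %d attempts", maxAttempts)
      }

      func main() {
          code, err := iterate("add a retry wrapper around the HTTP client", 3, false)
          fmt.Println(code, err)
      }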

  • Ask HN: Do you use AI for writing?
    1 project | news.ycombinator.com | 5 Jul 2024
    I'm working on a docs site for the AI coding tool I built [1], and I had to turn off GH Copilot for markdown files.

    As massive as the productivity boost from AI tools is for coding (> 2x for me, conservatively), with current capabilities I find it's a net negative for writing prose, even technical prose.

    The problem is: I like to think I'm a better writer than an LLM, but writing is hard. Every paragraph, every sentence requires a small shot of mental energy to get right. And what the LLM suggests is never bad. It's always like, "yeah, that could work." And that's the problem. It's good enough to be seductive. To make me want to skip that little bit of effort and auto-complete the sentence, auto-complete the paragraph.

    But the end result when I do that is missing something. It's grammatically correct and substantively correct. It's fine. But it doesn't grab the reader and pull them through. It's text that remains text and keeps the reader at a distance.

    The core problem, I guess, is the lack of a human voice. There's some kind of essential weirdness that is missing. This generally isn't a problem for code. In most cases, code that is boring and generic and anodyne and does the job it's supposed to do is good code.

    It will be interesting to see how this changes as LLMs continue to progress. Is this a fundamental limitation of the technology or a minor hurdle that will be quickly overcome?

    If I could write docs with AI that would genuinely pull the reader in and hold attention better than my own writing, I'd be happy to do so. It's not sentimental for me. But for now, Copilot will stay disabled for markdown files.

    1 - https://github.com/plandex-ai/plandex

  • We no longer use LangChain for building our AI agents
    10 projects | news.ycombinator.com | 20 Jun 2024
    I haven't used LangChain, but my sense is that much of what it's really helping people with is stream handling and async control flow. While there are libraries that make it easier, I think doing this stuff right in Python can feel like swimming against the current given its history as a primarily synchronous, single-threaded runtime.

    I built an agent-based AI coding tool in Go (https://github.com/plandex-ai/plandex) and I've been very happy with that choice. While there's much less of an ecosystem of LLM-related libraries and frameworks, Go's concurrency primitives make it straightforward to implement whatever I need, and I never have to worry about leaky or awkward abstractions.
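
    To show the kind of concurrency ergonomics being referred to, here is a small self-contained Go sketch (a simulated token stream, not Plandex's code): one goroutine produces tokens onto a channel, the caller ranges over it, and cancellation propagates through a context.

      package main

      import (
          "context"
          "fmt"
          "time"
      )

      // streamTokens simulates a model response arriving token by token. In a real
      // client the goroutine would read server-sent events and forward each chunk;
      // here it just emits a fixed slice with a small artificial delay.
      func streamTokens(ctx context.Context, tokens []string) <-chan string {
          out := make(chan string)
          go func() {
              defer close(out)
              for _, tok := range tokens {
                  select {
                  case out <- tok:
                      time.Sleep(10 * time.Millisecond) // simulated network latency
                  case <-ctx.Done():
                      return // caller cancelled or timed out; stop producing
                  }
              }
          }()
          return out
      }

      func main() {
          ctx, cancel := context.WithTimeout(context.Background(), time.Second)
          defer cancel()

          // Consuming the stream is just a range over the channel.
          for tok := range streamTokens(ctx, []string{"func", " main", "()", " {}", "\n"}) {
              fmt.Print(tok)
          }
      }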

  • Plandex 1.1.0 – AI driven development in the terminal. Now multi-modal.
    1 project | news.ycombinator.com | 11 Jun 2024
  • Systematically Improving Your RAG
    1 project | news.ycombinator.com | 22 May 2024
    This all seems pretty sensible. Another area that would be nice to see addressed is how to balance latency/cost/performance when data is frequently updated. I'm building a terminal-based AI coding tool[1] and have been thinking about how to bring RAG into the picture, as it clearly could add value, but the tradeoffs are tricky to get right.

    The options, as far as I can tell, are:

    - Re-embed lazily as needed at prompt-time. This should be the cheapest as it minimizes the number of embedding calls, but it's the most expensive in terms of latency.

    - Re-embed eagerly after updates (perhaps with some delay and throttling to avoid rapid-fire updates). Great for latency, but can get very expensive.

    - Some combination of the above two options. This seems to be what many IDE-based AI tools like GH Copilot are doing. An issue with this approach is that it's hard to ever know for sure what's updated and what isn't, and what exactly is getting added to context at any given time.

    I'm leaning toward the first option (lazy on-demand embedding) and letting the user decide whether the latency cost is worth it for their task vs. just manually selecting the exact context they want to load.

    1 - https://github.com/plandex-ai/plandex
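
    As a minimal sketch of the lazy, on-demand option (assuming a hypothetical embed function in place of a real embeddings API): hash each file's content and only pay for a new embedding call at prompt time when the hash has changed.

      package main

      import (
          "crypto/sha256"
          "fmt"
      )

      // embed is a hypothetical stand-in for an embeddings API call.
      func embed(text string) []float32 { return []float32{float32(len(text))} }

      type cached struct {
          hash [32]byte
          vec  []float32
      }

      // Index re-embeds lazily: file updates cost nothing, and at prompt time a
      // file is only re-embedded if its content hash differs from the cached one.
      type Index struct {
          entries map[string]cached // keyed by file path
      }

      func NewIndex() *Index { return &Index{entries: make(map[string]cached)} }

      func (ix *Index) Vector(path, content string) []float32 {
          h := sha256.Sum256([]byte(content))
          if e, ok := ix.entries[path]; ok && e.hash == h {
              return e.vec // unchanged since the last prompt: no embedding call
          }
          vec := embed(content)
          ix.entries[path] = cached{hash: h, vec: vec}
          return vec
      }

      func main() {
          ix := NewIndex()
          fmt.Println(ix.Vector("main.go", "package main")) // embeds
          fmt.Println(ix.Vector("main.go", "package main")) // cache hit, no call
      }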

  • Ask HN: What's with the Gatekeeping in Open Source?
    1 project | news.ycombinator.com | 2 May 2024

Stats

Basic plandex repo stats
Mentions: 25
Stars: 11,252
Activity: 9.1
Last commit: 9 days ago

