plandex VS litellm

Compare plandex vs litellm and see how they differ.

plandex

An AI coding engine for building complex, real-world software with LLMs (by plandex-ai)

litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs) (by BerriAI)
                 plandex                                     litellm
Mentions         16                                          28
Stars            9,433                                       8,907
Growth           98.7%                                       21.7%
Activity         9.8                                         10.0
Last commit      8 days ago                                  about 9 hours ago
Language         Go                                          Python
License          GNU Affero General Public License v3.0      GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

plandex

Posts with mentions or reviews of plandex. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-01.
  • Ask HN: What's with the Gatekeeping in Open Source?
    1 project | news.ycombinator.com | 2 May 2024
    Today I tried to post my open source project on the /r/opensource subreddit. It's an AGPL 3.0-licensed, terminal-based AI coding tool that defaults to OpenAI, but can also be used with other models, including open source models.

    The subreddit's rules in the sidebar state that a project must be open source under the definition on Wikipedia (https://en.wikipedia.org/wiki/Open_source) and also that limited and responsible self-promotion is ok.

    My post was automatically blocked, seemingly triggered by the mere mention of "OpenAI". The auto-message stated that "ChatGPT wrappers" were not allowed on the subreddit.

    I messaged the mods to tell them about the mistake, since my project plainly was not a "ChatGPT wrapper". One of them replied saying only "Working as intended" and that because my project uses OpenAI models by default, it isn't welcome in the subreddit.

    I asked why projects using OpenAI in particular are penalized (despite this being mentioned nowhere in the rules on the sidebar), considering that there are many posts for projects interfacing with MacOS, Windows, AWS, GitHub, and countless other closed source technologies. I received no answer to this question. I was only told that any project "advertising" OpenAI was "against the spirit of FOSS" and therefore did not belong on the subreddit. The mod also continued derisively referring to my project as a "ChatGPT wrapper" and "OpenAI plugin" despite my earlier explanation. I was also called "egocentric" for wanting to share my project.

    It made me sad that a subreddit with over 200k members that seems to have a lot of cool discussions going on is being moderated like this. What's with all the gatekeeping? Why are people so interested in excluding the "wrong" type of open source projects? As far as I'm concerned, if you have an open source license and people can run your code, then your project is open source.

    Am I right to be miffed by this or does the moderator have a point? Have you experienced this kind of thing with your own projects? How have you dealt with it?

    This is my project, by the way: https://github.com/plandex-ai/plandex

  • Fixing a real-world bug with AI using Claude 3 Opus with Plandex [video]
    1 project | news.ycombinator.com | 2 May 2024
    In this video, I use the latest 0.9.0 release of Plandex - https://github.com/plandex-ai/plandex - an open source, terminal-based tool for building more complex software with LLMs. This release includes many options for using models beyond OpenAI's.

    I used Claude 3 Opus via OpenRouter.ai to fix a tricky state-management bug in a new feature I'm working on for Plandex.

  • How do people create those sleek looking demos for startups?
    6 projects | news.ycombinator.com | 1 May 2024
    I built the demo video for https://plandex.ai myself using CleanShot X (https://cleanshot.com/), Adobe Premiere Pro, an effect I bought in Adobe's marketplace, some AppleScript automation, and music from SoundStripe (https://soundstripe.com/).

    It was my first time using all these tools. It took me a couple of days to make the video. Premiere is a bit of a beast, but by just asking ChatGPT how to do everything, I was able to get up to speed with it pretty fast.

  • GitHub Copilot Workspace: Welcome to the Copilot-native developer environment
    4 projects | news.ycombinator.com | 29 Apr 2024
    > plandex went into some kind of loops a couple of times so I stopped using it for now.

    Hey, Plandex creator here. I just pushed a release today that includes fixes for exactly this kind of problem - https://github.com/plandex-ai/plandex/releases/tag/cli%2Fv0.... -- Plandex now has a much better 'working memory' that helps it not to go into loops, repeat steps it's already done, or give up too early.

    I'd love to hear whether it's working better for you now.

  • Meta Llama 3
    10 projects | news.ycombinator.com | 18 Apr 2024
    I'm building Plandex (https://github.com/plandex-ai/plandex), which currently uses the OpenAI API--I'm working on support for Anthropic and OSS models right now and hoping I can ship it later today.

    You can self-host it so that data is only going to the model provider (i.e. OpenAI) and nowhere else, and it gives you fine-grained control of context, so you can pick and choose exactly which files you want to load in. It's not going to pull in anything in the background that you don't want uploaded.

    There's a contributor working on integration with local models and making some progress, so that will likely be an option in the future as well, but for now it should at least be a pretty big improvement for you compared to the copy-paste heavy ChatGPT workflow.

  • Anthropic launches Tool Use (function calling)
    3 projects | news.ycombinator.com | 5 Apr 2024
    I'm looking forward to trying this out with Plandex[1] (a terminal-based AI coding tool I recently launched that can build large features and whole projects).

    Plandex does rely on OpenAI's streaming function calls for its build progress indicators, so the lack of streaming is a bit unfortunate. But great to hear that it will be included in GA.

    I've been getting a lot of requests to support Claude, as well as open source models. A humble suggestion for folks working on models: focus on full compatibility with the OpenAI API as soon as you can, including function calls and streaming function calls. Full support for function calls is crucial for building advanced functionality.

    1 - https://github.com/plandex-ai/plandex
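    For readers unfamiliar with the feature being discussed, here is a rough sketch of what an OpenAI-format streaming function (tool) call looks like with the openai Python client; the write_file tool is hypothetical, purely for illustration:

      # Sketch of OpenAI-format streaming tool calls (openai v1-style client).
      # Assumes OPENAI_API_KEY is set; the write_file tool is hypothetical.
      from openai import OpenAI

      client = OpenAI()

      tools = [{
          "type": "function",
          "function": {
              "name": "write_file",
              "description": "Write content to a file",
              "parameters": {
                  "type": "object",
                  "properties": {
                      "path": {"type": "string"},
                      "content": {"type": "string"},
                  },
                  "required": ["path", "content"],
              },
          },
      }]

      stream = client.chat.completions.create(
          model="gpt-4",
          messages=[{"role": "user", "content": "Create hello.py"}],
          tools=tools,
          stream=True,
      )

      # Tool-call arguments arrive as incremental JSON fragments, which is what
      # makes streaming progress indicators like Plandex's possible.
      for chunk in stream:
          if chunk.choices and chunk.choices[0].delta.tool_calls:
              fragment = chunk.choices[0].delta.tool_calls[0].function.arguments or ""
              print(fragment, end="", flush=True)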

  • Show HN: Plandex – an AI coding engine for complex tasks
    9 projects | news.ycombinator.com | 3 Apr 2024
    The server does quite a bit. Most of the features are covered here: https://github.com/plandex-ai/plandex/blob/main/guides/USAGE...

    I actually did start out with just the CLI running locally, but it reached a point I needed a database and thus a client-server model to get it all working smoothly. I also want to add sharing and collaboration features in the future, and those also require a client-server model.

  • Discovering Devin, Devika, and OpenDevin
    1 project | news.ycombinator.com | 2 Apr 2024
    In my opinion, an "AI software engineer" is the wrong target for the current generation of models, though it's obviously good for generating buzz.

    I'm working on an open source, OpenAI-based tool that uses agents to build complex software (https://github.com/plandex-ai/plandex), and I have found that the results are generally much better when you target 80-90% of a task and then finish it up rather than burning lots of time and tokens on trying to get the LLM to do the entire thing.

    Results are also better, in my experience, when the developer remains in the driver's seat--frequently stopping, revising prompts and context, and then retrying rather than expecting to kick back and let the model do everything.

  • How well can LLMs write COBOL?
    1 project | news.ycombinator.com | 31 Mar 2024
    This is interesting. I'm working on an OpenAI-based tool for coding tasks that are too complex for ChatGPT - https://github.com/plandex-ai/plandex.

    It's working quite well for me, but it definitely needs some time spent on benchmarking and ironing out edge cases.

    I'm especially curious how it will do on more "obscure" languages. Not that COBOL is obscure, exactly--I suppose there's probably quite a bit of it in GPT-4's training data considering how pervasive it is in some domains. In any case, I'll try out this benchmark and see how it goes.

  • Ask HN: Is anybody getting value from AI Agents? How so?
    4 projects | news.ycombinator.com | 31 Mar 2024
    I'm working on an agent-based tool for software development. I'm getting quite a lot of value out of it. The intention is to minimize copy-pasting and work on complex, multi-file features that are too large for ChatGPT, Copilot, and other AI development tools I've tried.

    https://github.com/plandex-ai/plandex

    It's working quite well though I am still working out some kinks.

    I think the key to agents that really work is understanding the limitations of the models and working around them rather than trying to do everything with the LLM.

    In the context of software development, imo we are currently at the stage of developer-AI symbiosis and probably will be for some time. We aren't yet at the stage where it makes sense to try to get an agent to code and debug complex tasks end-to-end. Trying to do this is a recipe for burning lots of tokens and spending more time than it would take to build something yourself. But if you follow the 80/20 rule and get the AI to do the bulk of the work, intervening frequently to keep it on track and then polishing it at the end, huge productivity gains are definitely in reach.

litellm

Posts with mentions or reviews of litellm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-05.
  • Anthropic launches Tool Use (function calling)
    3 projects | news.ycombinator.com | 5 Apr 2024
    There are a few libs that already abstract this away, for example:

    - https://github.com/BerriAI/litellm

    - https://jxnl.github.io/instructor/

    - langchain

    It's not hard for me to imagine a future where there is something like the CNCF for AI models, tools, and infra.

  • Ask HN: Python Meta-Client for OpenAI, Anthropic, Gemini LLM and other API-s?
    1 project | news.ycombinator.com | 7 Mar 2024
    Hey, are you just looking for litellm - https://github.com/BerriAI/litellm

    Context - I'm the repo maintainer.
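    To make the suggestion concrete, here is a minimal sketch of litellm's unified interface: one call shape, with the provider selected purely by the model string (it assumes the matching API keys are set as environment variables):

      # Same OpenAI-style call, three providers.
      # Assumes OPENAI_API_KEY, ANTHROPIC_API_KEY, and GEMINI_API_KEY are set.
      from litellm import completion

      messages = [{"role": "user", "content": "Summarize the CAP theorem in one line."}]

      for model in ("gpt-3.5-turbo", "claude-3-opus-20240229", "gemini/gemini-pro"):
          response = completion(model=model, messages=messages)
          # Responses come back in the OpenAI shape regardless of provider.
          print(model, "->", response.choices[0].message.content)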

  • Voxos.ai – An Open-Source Desktop Voice Assistant
    7 projects | news.ycombinator.com | 19 Jan 2024
    It should be possible using LiteLLM and a patch or a proxy.

    https://github.com/BerriAI/litellm
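    The proxy route looks roughly like this: start LiteLLM's OpenAI-compatible proxy (e.g. litellm --model ollama/llama2), then point any OpenAI client at it. The port below is an assumption; use whatever the proxy prints at startup:

      # Minimal sketch: any OpenAI-compatible client can talk to the proxy.
      # The base_url/port is an assumption -- check the proxy's startup output.
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:4000", api_key="not-needed")

      response = client.chat.completions.create(
          model="ollama/llama2",
          messages=[{"role": "user", "content": "Hello from behind the proxy"}],
      )
      print(response.choices[0].message.content)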

  • Show HN: Talk to any ArXiv paper just by changing the URL
    5 projects | news.ycombinator.com | 20 Dec 2023
  • Integrate LLM Frameworks
    5 projects | dev.to | 10 Dec 2023
    This article will demonstrate how txtai can integrate with llama.cpp, LiteLLM and custom generation methods. For custom generation, we'll show how to run inference with a Mamba model.
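    Based on that article, the integration looks roughly like the sketch below; the model string and method argument are assumptions about txtai's LLM pipeline, so treat this as illustrative rather than authoritative:

      # Rough sketch of txtai routing generation through LiteLLM (assumed API;
      # see the linked article for the authoritative version).
      from txtai.pipeline import LLM

      # A litellm-style model string selects the backend; "ollama/..." assumes
      # a local Ollama server is running.
      llm = LLM("ollama/orca-mini", method="litellm")

      print(llm("Answer in one sentence: what is txtai?"))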
  • Is there any open source app to load a model and expose API like OpenAI?
    5 projects | /r/LocalLLaMA | 9 Dec 2023
    I use this with ollama and it works perfectly: https://github.com/BerriAI/litellm
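    For reference, the direct library call for that setup looks something like this (it assumes ollama serve is running locally with the llama2 model pulled):

      # Sketch: calling a local Ollama model through litellm's OpenAI-style API.
      # http://localhost:11434 is Ollama's default address.
      from litellm import completion

      response = completion(
          model="ollama/llama2",
          messages=[{"role": "user", "content": "Why is the sky blue?"}],
          api_base="http://localhost:11434",
      )
      print(response.choices[0].message.content)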
  • OpenAI Switch Kit: Swap OpenAI with any open-source model
    5 projects | news.ycombinator.com | 6 Dec 2023
    Another abstraction layer library is: https://github.com/BerriAI/litellm

    For me the killer feature of a library like this would be if it implemented function calling. Even if it was for a very restricted grammar - like the traditional ReAct prompt:

      Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types:
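    One way to picture what the commenter is asking for: the restricted ReAct grammar expressed as an OpenAI-format tool definition, which an abstraction layer could then emulate on models without native function calling. The act tool below is illustrative, not a litellm API:

      # Sketch: the ReAct step grammar as an OpenAI-format tool, forced via
      # tool_choice so every response is a structured Thought/Action pair.
      from litellm import completion

      react_tool = {
          "type": "function",
          "function": {
              "name": "act",
              "description": "Take one ReAct step",
              "parameters": {
                  "type": "object",
                  "properties": {
                      "thought": {"type": "string"},
                      "action": {"type": "string", "enum": ["search", "lookup", "finish"]},
                      "action_input": {"type": "string"},
                  },
                  "required": ["thought", "action", "action_input"],
              },
          },
      }

      response = completion(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": "Who wrote The Mythical Man-Month?"}],
          tools=[react_tool],
          tool_choice={"type": "function", "function": {"name": "act"}},
      )
      print(response.choices[0].message.tool_calls[0].function.arguments)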
  • LibreChat
    9 projects | news.ycombinator.com | 2 Dec 2023
  • LM Studio – Discover, download, and run local LLMs
    17 projects | news.ycombinator.com | 22 Nov 2023
  • Please!!! Help me!!!! Open Interpreter. Chatgpt-4. Mac, Terminals.
    1 project | /r/OPENINTERPRETER | 21 Nov 2023
    Welcome to Open Interpreter.

    ▌ OpenAI API key not found

    To use GPT-4 (recommended) please provide an OpenAI API key. To use Code-Llama (free but less capable) press enter.

    OpenAI API key: [the API key I inputted]

    Tip: To save this key for later, run export OPENAI_API_KEY=your_api_key on Mac/Linux or setx OPENAI_API_KEY your_api_key on Windows.

    ▌ Model set to GPT-4

    Open Interpreter will require approval before running code. Use interpreter -y to bypass this. Press CTRL-C to exit.

    > export OPENAI_API_KEY=your_api_key

    Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
    LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

    Traceback (most recent call last):
      File "/Library/Frameworks/Python.framework/Versions/3.12/bin/interpreter", line 8, in <module>
        sys.exit(cli())
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 22, in cli
        cli(self)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/cli/cli.py", line 254, in cli
        interpreter.chat()
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 76, in chat
        for _ in self._streaming_chat(message=message, display=display):
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 97, in _streaming_chat
        yield from terminal_interface(self, message)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/terminal_interface/terminal_interface.py", line 62, in terminal_interface
        for chunk in interpreter.chat(message, display=False, stream=True):
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 105, in _streaming_chat
        yield from self._respond()
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/core.py", line 131, in _respond
        yield from respond(self)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/core/respond.py", line 61, in respond
        for chunk in interpreter._llm(messages_for_llm):
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/interpreter/llm/setup_openai_coding_llm.py", line 94, in coding_llm
        response = litellm.completion(**params)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 792, in wrapper
        raise e
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 751, in wrapper
        result = original_function(*args, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 53, in wrapper
        result = future.result(timeout=local_timeout_duration)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 456, in result
        return self.__get_result()
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
        raise self._exception
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/timeout.py", line 42, in async_func
        return func(*args, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 1183, in completion
        raise exception_type(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2959, in exception_type
        raise e
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/utils.py", line 2355, in exception_type
        raise original_exception
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 441, in completion
        raise e
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/litellm/main.py", line 423, in completion
        response = openai.ChatCompletion.create(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/chat_completion.py", line 25, in create
        return super().create(*args, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
        response, _, api_key = requestor.request(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 299, in request
        resp, got_stream = self._interpret_response(result, stream)
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 710, in _interpret_response
        self._interpret_response_line(
      File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
        raise self.handle_error_response(
    openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.

What are some alternatives?

When comparing plandex and litellm you can also consider the following projects:

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

r2ai - local language model for radare2

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

soulshack - soulshack, an irc chatbot: because real people are overrated. (gpt-4)

LocalAI - The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It allows you to generate text, audio, video, and images, with voice cloning capabilities.

rbenv - Manage your app's Ruby environment

dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

promptfoo - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. [Moved to: https://github.com/promptfoo/promptfoo]

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

openrouter-runner - Inference engine powering open source models on OpenRouter

libsql - libSQL is a fork of SQLite that is both Open Source, and Open Contributions.