| | plandex | ollama |
|---|---|---|
| Mentions | 16 | 210 |
| Stars | 9,433 | 66,540 |
| Growth | 98.7% | 23.9% |
| Activity | 9.8 | 9.9 |
| Latest commit | 8 days ago | 2 days ago |
| Language | Go | Go |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
plandex
-
Ask HN: What's with the Gatekeeping in Open Source?
Today I tried to post my open source project on the /r/opensource subreddit. It's an AGPL 3.0-licensed, terminal-based AI coding tool that defaults to OpenAI, but can also be used with other models, including open source models.
The subreddit's rules in the sidebar state that a project must be open source under the definition on Wikipedia (https://en.wikipedia.org/wiki/Open_source) and also that limited and responsible self-promotion is ok.
My post was automatically blocked, seemingly by the mere mention of "OpenAI". The auto-message stated that "ChatGPT wrappers" were not allowed on the subreddit.
I messaged the mods to tell them about the mistake, since my project plainly was not a "ChatGPT wrapper". One of them replied saying only "Working as intended" and that because my project uses OpenAI models by default, that it isn't welcome in the subreddit.
I asked why projects using OpenAI in particular are penalized (despite this being mentioned nowhere in the rules on the sidebar), considering that there are many posts for projects interfacing with macOS, Windows, AWS, GitHub, and countless other closed source technologies. I received no answer to this question. I was only told that any project "advertising" OpenAI was "against the spirit of FOSS" and therefore did not belong on the subreddit. The mod also continued derisively referring to my project as a "ChatGPT wrapper" and "OpenAI plugin" despite my earlier explanation. I was also called "egocentric" for wanting to share my project.
It made me sad that a subreddit with over 200k members that seems to have a lot of cool discussions going on is being moderated like this. What's with all the gatekeeping? Why are people so interested in excluding the "wrong" type of open source projects? As far as I'm concerned, if you have an open source license and people can run your code, then your project is open source.
Am I right to be miffed by this or does the moderator have a point? Have you experienced this kind of thing with your own projects? How have you dealt with it?
This is my project, by the way: https://github.com/plandex-ai/plandex
-
Fixing a real-world bug with AI using Claude 3 Opus with Plandex [video]
In this video, I use the latest 0.9.0 release of Plandex - https://github.com/plandex-ai/plandex - an open source, terminal-based tool for building more complex software with LLMs. This release includes many options for using models beyond OpenAI's.
I used Claude 3 Opus via OpenRouter.ai to fix a tricky state-management bug in a new feature I'm working on for Plandex.
-
How do people create those sleek looking demos for startups?
I built the demo video for https://plandex.ai myself using CleanShot X (https://cleanshot.com/), Adobe Premiere Pro, an effect I bought in Adobe's marketplace, some AppleScript automation, and music from SoundStripe (https://soundstripe.com/).
It was my first time using all these tools. It took me a couple of days to make the video. Premiere is a bit of a beast, but by just asking ChatGPT how to do everything, I was able to get up to speed with it pretty fast.
-
GitHub Copilot Workspace: Welcome to the Copilot-native developer environment
> plandex went into some kind of loops a couple of times so I stopped using it for now.
Hey, Plandex creator here. I just pushed a release today that includes fixes for exactly this kind of problem - https://github.com/plandex-ai/plandex/releases/tag/cli%2Fv0.... -- Plandex now has a much better 'working memory' that helps it not to go into loops, repeat steps it's already done, or give up too early.
I'd love to hear whether it's working better for you now.
-
Meta Llama 3
I'm building Plandex (https://github.com/plandex-ai/plandex), which currently uses the OpenAI API--I'm working on support for Anthropic and OSS models right now and hoping I can ship it later today.
You can self-host it so that data is only going to the model provider (i.e. OpenAI) and nowhere else, and it gives you fine-grained control of context, so you can pick and choose exactly which files you want to load in. It's not going to pull in anything in the background that you don't want uploaded.
There's a contributor working on integration with local models and making some progress, so that will likely be an option in the future as well, but for now it should at least be a pretty big improvement for you compared to the copy-paste heavy ChatGPT workflow.
-
Anthropic launches Tool Use (function calling)
I'm looking forward to trying this out with Plandex[1] (a terminal-based AI coding tool I recently launched that can build large features and whole projects).
Plandex does rely on OpenAI's streaming function calls for its build progress indicators, so the lack of streaming is a bit unfortunate. But great to hear that it will be included in GA.
I've been getting a lot of requests to support Claude, as well as open source models. A humble suggestion for folks working on models: focus on full compatibility with the OpenAI API as soon as you can, including function calls and streaming function calls. Full support for function calls is crucial for building advanced functionality.
1 - https://github.com/plandex-ai/plandex
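"Streaming function calls" here means the model's function-call arguments arrive as incremental JSON fragments that the client must stitch together before parsing. A minimal sketch of that accumulation step, using hypothetical chunk data shaped like OpenAI's streamed tool-call deltas (an illustration, not Plandex's actual code):

```python
import json

# Hypothetical streamed deltas, shaped like OpenAI's chat.completion.chunk
# events: the function name arrives once, then the JSON arguments trickle
# in as string fragments that must be concatenated before parsing.
chunks = [
    {"name": "write_file", "arguments": ""},
    {"arguments": '{"path": "ma'},
    {"arguments": 'in.go", "content": "package main"}'},
]

def accumulate_tool_call(deltas):
    """Stitch streamed function-call deltas into one complete call."""
    name = ""
    args = ""
    for delta in deltas:
        if "name" in delta:
            name = delta["name"]
        args += delta.get("arguments", "")
    return name, json.loads(args)

name, args = accumulate_tool_call(chunks)
print(name, args["path"])  # -> write_file main.go
```

This is why partial support hurts: a provider that only emits complete function calls can't drive incremental progress indicators like the ones described above.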
-
Show HN: Plandex – an AI coding engine for complex tasks
The server does quite a bit. Most of the features are covered here: https://github.com/plandex-ai/plandex/blob/main/guides/USAGE...
I actually did start out with just the CLI running locally, but it reached a point where I needed a database and thus a client-server model to get it all working smoothly. I also want to add sharing and collaboration features in the future, and those also require a client-server model.
-
Discovering Devin, Devika, and OpenDevin
In my opinion, an "AI software engineer" is the wrong target for the current generation of models, though it's obviously good for generating buzz.
I'm working on an open source, OpenAI-based tool that uses agents to build complex software (https://github.com/plandex-ai/plandex), and I have found that the results are generally much better when you target 80-90% of a task and then finish it up rather than burning lots of time and tokens on trying to get the LLM to do the entire thing.
Results are also better, in my experience, when the developer remains in the driver's seat--frequently stopping, revising prompts and context, and then retrying rather than expecting to kick back and let the model do everything.
-
How well can LLMs write COBOL?
This is interesting. I'm working on an OpenAI-based tool for coding tasks that are too complex for ChatGPT - https://github.com/plandex-ai/plandex.
It's working quite well for me, but it definitely needs some time spent on benchmarking and ironing out edge cases.
I'm especially curious how it will do on more "obscure" languages. Not that COBOL is obscure exactly--I suppose there's probably quite a bit of it in GPT-4's training considering how pervasive it is in some domains. In any case, I'll try out this benchmark and see how it goes.
-
Ask HN: Is anybody getting value from AI Agents? How so?
I'm working on an agent-based tool for software development. I'm getting quite a lot of value out of it. The intention is to minimize copy-pasting and work on complex, multi-file features that are too large for ChatGPT, Copilot, and other AI development tools I've tried.
https://github.com/plandex-ai/plandex
It's working quite well though I am still working out some kinks.
I think the key to agents that really work is understanding the limitations of the models and working around them rather than trying to do everything with the LLM.
In the context of software development, imo we are currently at the stage of developer-AI symbiosis and probably will be for some time. We aren't yet at the stage where it makes sense to try to get an agent to code and debug complex tasks end-to-end. Trying to do this is a recipe for burning lots of tokens and spending more time than it would take to build something yourself. But if you follow the 80/20 rule and get the AI to do the bulk of the work, intervening frequently to keep it on track and then polishing it at the end, huge productivity gains are definitely in reach.
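The intervene-frequently workflow described above can be sketched as a simple loop: the model drafts, the developer reviews, and control returns to the human when attempts run out. This is a hypothetical illustration with stubbed model and review steps, not Plandex's implementation:

```python
# Hypothetical sketch of a developer-in-the-loop agent workflow: the model
# drafts a change, the developer reviews it, and either accepts, revises
# the prompt and retries, or takes over manually.
def run_task(prompt, model, review, max_attempts=3):
    for attempt in range(max_attempts):
        draft = model(prompt)
        verdict, feedback = review(draft)
        if verdict == "accept":
            return draft
        if verdict == "revise":
            prompt = prompt + "\n" + feedback  # steer the next attempt
    return None  # hand the remaining 10-20% back to the developer

# Stubbed example: the reviewer revises once, then accepts.
attempts = iter([("revise", "use a mutex"), ("accept", "")])
result = run_task(
    "fix the race condition",
    model=lambda p: f"patch for: {p.splitlines()[-1]}",
    review=lambda d: next(attempts),
)
print(result)  # -> patch for: use a mutex
```

The key design choice is that `review` sits inside the loop rather than after it, which is what keeps the developer in the driver's seat.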
ollama
- Ollama v0.1.34 Is Out
-
Ask HN: What do you use local LLMs for?
- Basic internet search (I can start the ollama CLI faster than a browser - https://ollama.com)
- Formatting/changing text
- Troubleshooting code, esp. new frameworks/libs
- Recipes
- Data entry
- Organizing thoughts: High-level lists, comparison, classification, synonyms, jargon & nomenclature
- Learning esp. by analogy and example
RAG for:
- Website assistants (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Game NPCs (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Discord/Slack/forum bots (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Character-driven storytelling and creating art in a specific style for video game loading screens, background images, avatars, website art, etc. (https://github.com/bennyschmidt/ragdoll-studio/tree/master/r...)
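The RAG use cases above all share one core retrieval step: embed the documents, score them against the query, and prepend the best match to the prompt. A toy sketch of that step, with bag-of-words counts standing in for real embeddings (a hypothetical illustration, not ragdoll-studio's code; a real pipeline would use an embedding model served locally, e.g. via Ollama):

```python
import math
import re
from collections import Counter

docs = [
    "The inn is north of the river, past the old mill.",
    "Potions restore health; buy them from the alchemist.",
    "The blacksmith repairs swords and shields for gold.",
]

def embed(text):
    # Toy embedding: a bag-of-words Counter over lowercase tokens.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Build the augmented prompt for the local model (e.g. a game NPC).
question = "where can I buy potions?"
context = retrieve(question, docs)[0]
prompt = f"Context: {context}\n\nQuestion: {question}"
```

Swapping the toy `embed` for real embeddings is the only change needed to make this the retrieval half of the assistants and NPC bots listed above.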
- FLaNK-AIM Weekly 06 May 2024
-
Introducing Jan
Jan goes a step further by integrating with other local engines like LM Studio and ollama.
- Ollama v0.1.33
-
Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain
# install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# pull the llama3 model
ollama pull llama3
# install MLflow
pip install mlflow
-
Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
Ollama for running LLMs locally
-
Setup Llama 3 using Ollama and Open-WebUI
curl -fsSL https://ollama.com/install.sh | sh
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
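Per the linked API docs, image input works by base64-encoding the image into the `images` field of a generate request. A minimal sketch of building that payload (the model name, prompt, and placeholder bytes here are just assumptions for illustration):

```python
import base64
import json

def build_generate_request(model, prompt, image_bytes):
    # Ollama's /api/generate accepts images as a list of base64 strings
    # alongside the prompt (see docs/api.md in the ollama repo).
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }

# Placeholder bytes; a real call would read an actual PNG/JPEG from disk.
payload = build_generate_request("llava", "What is in this picture?", b"\x89PNG...")
body = json.dumps(payload)
# POST `body` to http://localhost:11434/api/generate to run it locally.
print(payload["images"][0])  # -> iVBORy4uLg==
```

Voice input, by contrast, has no field in this API, so transcription would have to happen client-side before the prompt is built.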
-
I Said Goodbye to ChatGPT and Hello to Llama 3 on Open WebUI - You Should Too
I’m a huge fan of open source models, especially the newly released Llama 3. Because of the performance of both the large 70B Llama 3 model and the smaller, self-hostable 8B Llama 3, I’ve actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that lets you use Ollama and other AI providers while keeping your chat history, prompts, and other data local to any computer you control.
What are some alternatives?
r2ai - local language model for radare2
llama.cpp - LLM inference in C/C++
soulshack - soulshack, an irc chatbot: because real people are overrated. (gpt-4)
gpt4all - gpt4all: run open-source LLMs anywhere
rbenv - Manage your app's Ruby environment
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
promptfoo - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality. [Moved to: https://github.com/promptfoo/promptfoo]
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
llama - Inference code for Llama models
openrouter-runner - Inference engine powering open source models on OpenRouter
LocalAI - The free, open source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and includes voice cloning capabilities.