| | r2ai | openrouter-runner |
|---|---|---|
| Mentions | 1 | 12 |
| Stars | 55 | 406 |
| Growth | - | 19.2% |
| Activity | 9.5 | 9.4 |
| Latest commit | 7 days ago | about 1 month ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
r2ai
-
Show HN: Plandex – an AI coding engine for complex tasks
I think Mistral-2-Pro would work really well for this, judging by the great results I've had with it on another tool-calling-heavy project [1]
[1] https://github.com/radareorg/r2ai
openrouter-runner
-
Show HN: Route your prompts to the best LLM
I've bumped into a few of these. I use https://openrouter.ai as a model abstraction, but not as a router. https://withmartian.com does the same thing but with a more enterprise feel. Also https://www.braintrustdata.com/ though it's less clear how committed they are to that feature.
That said, while I've really enjoyed the LLM abstraction (making it easy for me to test different models without changing my code), I haven't felt any desire for a router. I _do_ have some prompts that I send to gpt-3.5-turbo, and could potentially use other models, but it's kind of niche.
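The abstraction the commenter describes can be sketched roughly as follows, assuming OpenRouter's OpenAI-compatible chat-completions endpoint; the model slugs and API key shown here are illustrative, not taken from the comment:

```python
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> tuple[dict, dict]:
    """Build headers and payload for an OpenAI-compatible chat request.

    Swapping providers is just a change to the "model" field; the rest of
    the request shape stays identical, which is what makes it easy to test
    different models without changing your code.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

# The same prompt can target different models without any other code change:
h1, p1 = build_request("openai/gpt-3.5-turbo", "Summarize this text.", "sk-...")
h2, p2 = build_request("anthropic/claude-3-opus", "Summarize this text.", "sk-...")
```

Sending the request (e.g. with `requests.post(OPENROUTER_URL, headers=h1, json=p1)`) is left out so the sketch stays self-contained.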
In part this is because I try to do as much in a single prompt as I can, meaning I want to use a model that's able to handle the hardest parts of the prompt, and then the easy parts come along with it. As a result there aren't many "easy" prompts. The easy prompts are usually text fixup and routing.
My "routing" prompts are at a different level of abstraction, usually routing some input or activity to one of several prompts (each of which has its own context, and the sum of all contexts across those prompts is too large, hence the routing). I don't know if there's some meaningful crossover between these two routing concepts.
Another issue I have with LLM portability is the use of tools/functions/structured output. Opus and Gemini Pro 1.5 have kind of implemented this OK, but until recently GPT was the only halfway decent implementation of this. This seems to be an "advanced" feature, yet it's also a feature I use even more with smaller prompts, as those small prompts are often inside some larger algorithm and I don't want the fuss of text parsing and exceptions from ad hoc output.
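The "fuss of text parsing and exceptions from ad hoc output" that structured output avoids can be seen in the kind of fallback parser you otherwise end up writing. A minimal sketch (the recovery strategy here is one common approach, not anything from the comment):

```python
import json
import re

def parse_structured(text: str):
    """Try to recover a JSON object from ad-hoc model output.

    Without native structured-output support, replies often wrap the JSON
    in prose or code fences, so after a direct parse fails we fall back to
    scanning for the first {...} span before giving up.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None
```

With tool/function calling, the provider guarantees the structure and this whole layer (and its failure modes) disappears.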
But in the end I'm not price sensitive in my work, so I always come back to the newest GPT model. If I make a switch to Opus it definitely won't be to save money! And I'm probably not going to want to fiddle, but instead make a thoughtful choice and switch the default model in my code.
- Openrouter
- Integrates multiple AI APIs into a single platform
-
Collection of notebooks showcasing some fun and effective ways of using Claude
Why not use something like http://openrouter.ai? Pay as you go and you can select any model you want. Heaven!
-
World_SIM: LLM prompted to act as a sentient CLI universe simulator
teknium / Nous released Mistral finetunes (Hermes) that are quite great, and even published the datasets used for training.
But for the worldsim I think they are really using Claude (probably Haiku or Sonnet) via openrouter (https://openrouter.ai/).
-
Show HN: Plandex – an AI coding engine for complex tasks
Not affiliated with the project but you could use something like OpenRouter to give users a massive list of models to choose from with fairly minimal effort
https://openrouter.ai/
-
The Next Generation of Claude (Claude 3)
> I hate that they require a phone number
https://openrouter.ai/ lets you make one account and get API access to a bunch of different models. They also provide access to hosted versions of a bunch of open models.
Useful if you want to compare 15 different models without bothering to create 15 different accounts or download 15 x 20GB of models :)
-
The killer app of Gemini Pro 1.5 is video
You sure can! NeuroEngine[1] hosts some nice free demos of what are basically the state of the art in unfiltered models, and if you need API access, OpenRouter[2] has dozens of unfiltered models to choose from.
[1] https://www.neuroengine.ai/
[2] https://openrouter.ai/
-
OpenAI has Text to Speech Support now!
However, this needs to change, as other providers like OpenRouter may also start supporting this feature in the future.
-
How to narrow down interest and where to begin?
Great resources for me are:
- The LangChain Blog: very technical, but great graphics of complex topics. Gives you a good understanding of what is currently possible and what the hot topics are
- Product Hunt: great resource to see what others are building with AI
- Replicate and OpenRouter for custom-made / fine-tuned models
- AI Twitter
What are some alternatives?
plandex - AI driven development in your terminal. Designed for large, real-world tasks.
llm - Access large language models from the command-line
llm-claude-3 - LLM plugin for interacting with the Claude 3 family of models