llm alternatives
Similar projects and alternatives to llm
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- ollama: Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.
- guidance: (Discontinued) A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance] (by microsoft)
- serge: A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.
- aichat: All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI Tools & Agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.
- simpleaichat: Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
- llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
llm discussion
llm reviews and mentions
- An LLM Query Understanding Service
Prompting LLMs to turn search queries like "red loveseat" into structured search filters like {"item_type": "loveseat", "color": "red"} is a neat trick.
I tried Doug's prompt out on a few other LLMs:
Gemini 1.5 Flash 8B handles it well and costs about 1/1000th of a cent: https://gist.github.com/simonw/cc825bfa7f921ca9ac47d7afb6eab...
Llama 3.2 3B is a very small local model (a 2GB file) which can handle it too: https://gist.github.com/simonw/d18422ca24528cdb9e5bd77692531...
An even smaller model, the 1.1GB deepseek-r1:1.5b, thought about it at length and confidently spat out the wrong answer! https://gist.github.com/simonw/c37eca96dd6721883207c99d25aec...
All three tests run with https://llm.datasette.io using the llm-gemini or llm-ollama plugins.
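The query-understanding trick described above can be sketched in a few lines. This is a minimal illustration, not Doug's actual prompt or service: the prompt wording is my own guess at the shape, and the canned reply stands in for a real model call (which you could make via the llm CLI or its Python API).

```python
import json

# The prompt wording below is an assumption, not Doug's actual prompt.
PROMPT_TEMPLATE = """Turn the search query into a JSON object with
"item_type" and "color" keys. Reply with JSON only.

Query: {query}"""

def parse_filters(model_reply: str) -> dict:
    """Pull the first JSON object out of a model reply.

    Small local models often wrap the JSON in extra chatter, so we
    scan for the outermost braces instead of parsing the whole reply.
    """
    start = model_reply.index("{")
    end = model_reply.rindex("}") + 1
    return json.loads(model_reply[start:end])

# A canned reply standing in for a real LLM call:
canned = 'Sure! {"item_type": "loveseat", "color": "red"}'
print(parse_filters(canned))
# {'item_type': 'loveseat', 'color': 'red'}
```

The brace-scanning parse step is what makes the difference with small models like the 1.5B one mentioned above, which rarely emit clean JSON on their own.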
- I genuinely don't understand why some people are still bullish about LLMs
You mean for https://tools.simonwillison.net/colophon ?
I've used a whole bunch of techniques.
Most of the code in there is directly copied and pasted in from https://claude.ai or https://chatgpt.com - often using Claude Artifacts to try it out first.
Some changes are made in VS Code using GitHub Copilot.
I've used Claude Code for a few of them: https://docs.anthropic.com/en/docs/agents-and-tools/claude-c...
Some were done with my own https://llm.datasette.io tool - I can run a prompt through it and save the result straight to a file.
The commit messages usually link to either a "share" transcript or my own Gist showing the prompts that I used to build the tool in question.
- Gemini 2.5 Pro reasons about task feasibility
- Vibe Coding a Web Pong Game with poorcoder and Grok
autocommit - Automated AI commit message generator using the llm CLI tool (setup instructions here)
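The core idea of a tool like autocommit is simple enough to sketch: pipe the staged diff into the llm CLI with a system prompt. This is a hypothetical reconstruction, not autocommit's actual code; the prompt wording is made up, though `llm -s` (set a system prompt, read the main prompt from stdin) is how the real CLI works.

```python
import subprocess

# Hypothetical prompt, not autocommit's actual wording.
SYSTEM = "Write a one-line conventional commit message for this diff."

def build_command(system_prompt: str) -> list[str]:
    # The diff is piped to llm on stdin; -s sets the system prompt.
    return ["llm", "-s", system_prompt]

def commit_message(diff: str) -> str:
    """Run the diff through the llm CLI and return its reply."""
    result = subprocess.run(
        build_command(SYSTEM), input=diff, capture_output=True, text=True
    )
    return result.stdout.strip()
```

In practice you would feed it `git diff --cached` output and pass the result to `git commit -m`.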
- OpenAI's o1-pro now available via API
This is their first model to only be available via the new Responses API - if you have code that uses Chat Completions you'll need to upgrade to Responses in order to support this.
Could take me a while to add support for it to my LLM tool: https://github.com/simonw/llm/issues/839
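The migration mentioned above is mostly a change of request shape. Roughly, Chat Completions takes a `messages` list while the Responses API takes an `input` value; this sketch shows the two payload shapes as plain dicts (field names follow OpenAI's published APIs, but check the current docs before relying on them).

```python
# Chat Completions wraps the prompt in a list of role/content messages.
def chat_completions_payload(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The Responses API takes "input" directly (a string or a list).
def responses_payload(model: str, prompt: str) -> dict:
    return {"model": model, "input": prompt}

print(chat_completions_payload("o1-pro", "hi"))
print(responses_payload("o1-pro", "hi"))
```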
- The Awesome Power of an LLM in Your Terminal
- Prompting Large Language Models in Bash Scripts
I feel like the incumbent for running LLM prompts, including locally, on the CLI is llm: https://github.com/simonw/llm?tab=readme-ov-file#installing-...
How does this compare?
- GPT-4.5
If you want to try it out via their API you can run it through my LLM tool using uvx like this:
uvx --with 'https://github.com/simonw/llm/archive/801b08bf40788c09aed6175252876310312fe667.zip' llm -m gpt-4.5-preview 'impress me, somehow'
- My LLM codegen workflow ATM
https://github.com/simonw/llm
It is linked to in the article - a brilliant utility from Simon.
- Show HN: Repo-guide – AI-generated docs for codebase exploration and onboarding
Hey HN, I built repo-guide to make it easier to dive into and contribute to unfamiliar codebases. You can see an example of what it generates here: https://wolfmanstout.github.io/repo-guide/
Unlike most AI documentation tools that focus on Q&A, repo-guide generates comprehensive, browsable guides. It's designed to complement (not replace) human-authored documentation, with full transparency about AI generation.
Why?
* Deepening expertise: One of the best ways to advance as an engineer is to learn how libraries and systems you use are built under the hood. Many AI tools emphasize Q&A chat, which is great if you already know what questions to ask, but not as helpful when you’re trying to discover new details. By generating detailed, navigable guides, repo-guide helps surface interesting design choices you might otherwise miss.
* Onboarding contributors: If you’re new to a repo, figuring out directory layouts and dev tools can be a slog. A comprehensive auto-generated guide lets you ramp up faster.
How It Works:
* You install it via pip install repo-guide, then run repo-guide .
* It uses a bottom-up approach that examines each directory and file, generating Markdown docs you can browse locally (or deploy).
* Under the hood, it leverages Simon Willison’s LLM package (https://github.com/simonw/llm) to call LLM APIs (e.g., Gemini 2.0 Flash by default -- you can specify another model via --model).
* The system prompt encourages verbosity so you’ll see thorough coverage of internals (you can customize or shorten this via --custom-instructions).
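The bottom-up pass described above can be sketched with a toy recursion: summarize files first, then fold those summaries into each directory's doc. This is a simplified illustration, not repo-guide's actual code; `summarize` stands in for the real LLM call made via the llm package.

```python
def summarize(name: str, children: list[str]) -> str:
    # Stand-in for an LLM call; the real tool prompts a model here.
    return f"{name}: covers {', '.join(children)}" if children else name

def document_tree(tree: dict, name: str = ".") -> str:
    """tree maps directory names to subtrees and file names to None."""
    parts = []
    for child, sub in tree.items():
        if isinstance(sub, dict):   # directory: recurse bottom-up first
            parts.append(document_tree(sub, child))
        else:                       # file: leaf-level summary
            parts.append(summarize(child, []))
    return summarize(name, parts)

repo = {"src": {"main.py": None, "util.py": None}, "README.md": None}
print(document_tree(repo))
```

The key property is that every directory's doc is generated from already-summarized children, so no single prompt has to fit the whole repository.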
What’s Next?
* Future ideas include a live chatbot that references both the generated docs and code, plus auto-generated changelogs.
* This is one of my weekend projects, so maintenance might be sporadic, but I’m happy to take feedback and suggestions!
Stats
simonw/llm is an open source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of llm is Python.