llm

Access large language models from the command-line (by simonw)

llm Alternatives

Similar projects and alternatives to llm

  1. text-generation-webui

    A Gradio web UI for Large Language Models with support for multiple inference backends.

  2. llama.cpp

    881 llm VS llama.cpp

    LLM inference in C/C++

  3. ollama

    455 llm VS ollama

    Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1 and other large language models.

  4. pandoc

    443 llm VS pandoc

    Universal markup converter

  5. gpt4all

    148 llm VS gpt4all

    GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

  6. aider

    106 llm VS aider

    aider is AI pair programming in your terminal

  7. guidance

    90 llm VS guidance

    Discontinued: A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance] (by microsoft)

  8. llamafile

    61 llm VS llamafile

    Distribute and run LLMs with a single file.

  9. simonwillisonblog

    The source code behind my blog

  10. open-webui

    40 llm VS open-webui

    User-friendly AI Interface (Supports Ollama, OpenAI API, ...)

  11. serge

    40 llm VS serge

    A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.

  12. jan

    33 llm VS jan

    Jan is an open-source alternative to ChatGPT that runs 100% offline on your computer

  13. aichat

    32 llm VS aichat

    All-in-one LLM CLI tool featuring Shell Assistant, Chat-REPL, RAG, AI Tools & Agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.

  14. simpleaichat

    22 llm VS simpleaichat

    Python package for easily interfacing with chat apps, with robust features and minimal code complexity.

  15. openrouter-runner

    Inference engine powering open source models on OpenRouter

  16. guidance

    29 llm VS guidance

    A guidance language for controlling large language models.

  17. savvy-cli

    Automatically capture and surface your team's tribal knowledge

  18. langroid

    20 llm VS langroid

    Harness LLMs with Multi-Agent Programming

  19. llama-gpt

    7 llm VS llama-gpt

    A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!

  20. ad-llama

    7 llm VS ad-llama

    Structured inference with Llama 2 in your browser
NOTE: The count shown for each project combines mentions in common posts with user-suggested alternatives, so a higher count suggests a more popular or more similar llm alternative.

llm reviews and mentions

Posts with mentions or reviews of llm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2025-04-09.
  • An LLM Query Understanding Service
    2 projects | news.ycombinator.com | 9 Apr 2025
    Prompting LLMs to turn search queries like "red loveseat" into structured search filters like {"item_type": "loveseat", "color": "red"} is a neat trick.

    I tried Doug's prompt out on a few other LLMs:

    Gemini 1.5 Flash 8B handles it well and costs about 1/1000th of a cent: https://gist.github.com/simonw/cc825bfa7f921ca9ac47d7afb6eab...

    Llama 3.2 3B is a very small local model (a 2GB file) which can handle it too: https://gist.github.com/simonw/d18422ca24528cdb9e5bd77692531...

    An even smaller model, the 1.1GB deepseek-r1:1.5b, thought about it at length and confidently spat out the wrong answer! https://gist.github.com/simonw/c37eca96dd6721883207c99d25aec...

    All three tests run with https://llm.datasette.io using the llm-gemini or llm-ollama plugins.
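The pattern behind that trick is easy to sketch. Below is a minimal, hypothetical version of the query-understanding step: the prompt wording, the `fake_model` stub, and the parsing heuristics are illustrative only — a real setup would call a model through the llm CLI or its Python API instead of the stub.

```python
import json

# Illustrative prompt for turning a free-text search query into filters.
PROMPT_TEMPLATE = (
    'Turn this furniture search query into JSON with keys '
    '"item_type" and "color". Query: {query}\n'
    'Respond with JSON only.'
)

def parse_filters(model_output: str) -> dict:
    """Parse the model's JSON reply, tolerating surrounding prose or fences."""
    start = model_output.find("{")
    end = model_output.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object in model output")
    return json.loads(model_output[start : end + 1])

def fake_model(prompt: str) -> str:
    # Stub standing in for a real model call (e.g. via the llm package).
    return '{"item_type": "loveseat", "color": "red"}'

filters = parse_filters(fake_model(PROMPT_TEMPLATE.format(query="red loveseat")))
print(filters)
```

The lenient brace-scanning in `parse_filters` matters in practice: small local models often wrap their JSON in markdown fences or add a sentence of commentary, which is exactly the kind of output the deepseek-r1:1.5b test above tripped over.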

  • I genuinely don't understand why some people are still bullish about LLMs
    5 projects | news.ycombinator.com | 27 Mar 2025
    You mean for https://tools.simonwillison.net/colophon ?

    I've used a whole bunch of techniques.

    Most of the code in there is directly copied and pasted in from https://claude.ai or https://chatgpt.com - often using Claude Artifacts to try it out first.

Some changes are made in VS Code using GitHub Copilot.

    I've used Claude Code for a few of them https://docs.anthropic.com/en/docs/agents-and-tools/claude-c...

    Some were made with my own https://llm.datasette.io tool - I can run a prompt through that and save the result straight to a file.

    The commit messages usually link to either a "share" transcript or my own Gist showing the prompts that I used to build the tool in question.

  • Gemini 2.5 Pro reasons about task feasibility
    7 projects | news.ycombinator.com | 26 Mar 2025
  • Vibe Coding a Web Pong Game with poorcoder and Grok
    2 projects | dev.to | 21 Mar 2025
    autocommit - Automated AI commit message generator using the llm CLI tool
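The idea behind a tool like that is simple to sketch: read the staged diff and hand it to a model with a prompt asking for a one-line commit message. A minimal, hypothetical version (the prompt wording, truncation limit, and function names are illustrative; autocommit's actual implementation may differ):

```python
import subprocess

MAX_DIFF_CHARS = 8000  # keep the prompt within a small model's context window

def build_commit_prompt(diff: str) -> str:
    """Build a prompt asking for a one-line conventional commit message."""
    if len(diff) > MAX_DIFF_CHARS:
        diff = diff[:MAX_DIFF_CHARS] + "\n[diff truncated]"
    return (
        "Write a concise, one-line conventional commit message "
        "for this diff:\n\n" + diff
    )

def staged_diff() -> str:
    """Return the staged diff, the input an autocommit-style tool works from."""
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout

# The prompt would then be sent to a model, e.g. with the llm CLI:
#   git diff --cached | llm -s "Write a concise one-line commit message"
```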
  • OpenAI's o1-pro now available via API
    4 projects | news.ycombinator.com | 19 Mar 2025
    This is their first model to only be available via the new Responses API - if you have code that uses Chat Completions you'll need to upgrade to Responses in order to support this.

    Could take me a while to add support for it to my LLM tool: https://github.com/simonw/llm/issues/839
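The upgrade is mostly a change in request shape. Roughly, and as a sketch rather than an authoritative reference (the OpenAI field names below reflect the API as documented at the time of writing and may evolve):

```python
# Request-body shapes for the two OpenAI endpoints.

def chat_completions_body(model: str, user_text: str) -> dict:
    # POST /v1/chat/completions -- the older endpoint, keyed on "messages"
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }

def responses_body(model: str, user_text: str) -> dict:
    # POST /v1/responses -- note "input" instead of "messages"
    return {"model": model, "input": user_text}

old = chat_completions_body("gpt-4o", "hello")
new = responses_body("o1-pro", "hello")
```

Code written against Chat Completions therefore can't reach o1-pro without being ported to build the second shape, which is the work tracked in the issue above.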

  • The Awesome Power of an LLM in Your Terminal
    3 projects | dev.to | 18 Mar 2025
  • Prompting Large Language Models in Bash Scripts
    3 projects | news.ycombinator.com | 2 Mar 2025
    I feel like the incumbent for running LLM prompts on the CLI, including locally, is llm: https://github.com/simonw/llm?tab=readme-ov-file#installing-...

    How does this compare?

  • GPT-4.5
    9 projects | news.ycombinator.com | 27 Feb 2025
    If you want to try it out via their API you can run it through my LLM tool using uvx like this:

      uvx --with 'https://github.com/simonw/llm/archive/801b08bf40788c09aed6175252876310312fe667.zip' llm -m gpt-4.5-preview 'impress me, somehow'
  • My LLM codegen workflow ATM
    2 projects | news.ycombinator.com | 18 Feb 2025
    https://github.com/simonw/llm

    It is linked to in the article - a brilliant utility from Simon.

  • Show HN: Repo-guide – AI-generated docs for codebase exploration and onboarding
    2 projects | news.ycombinator.com | 18 Feb 2025
    Hey HN, I built repo-guide to make it easier to dive into and contribute to unfamiliar codebases. You can see an example of what it generates here: https://wolfmanstout.github.io/repo-guide/

    Unlike most AI documentation tools that focus on Q&A, repo-guide generates comprehensive, browsable guides. It's designed to complement (not replace) human-authored documentation, with full transparency about AI generation.

    Why?

    * Deepening expertise: One of the best ways to advance as an engineer is to learn how libraries and systems you use are built under the hood. Many AI tools emphasize Q&A chat, which is great if you already know what questions to ask, but not as helpful when you’re trying to discover new details. By generating detailed, navigable guides, repo-guide helps surface interesting design choices you might otherwise miss.

    * Onboarding contributors: If you’re new to a repo, figuring out directory layouts and dev tools can be a slog. A comprehensive auto-generated guide lets you ramp up faster.

    How It Works:

    * You install it via pip install repo-guide, then run repo-guide .

    * It uses a bottom-up approach that examines each directory and file, generating Markdown docs you can browse locally (or deploy).

    * Under the hood, it leverages Simon Willison’s LLM package (https://github.com/simonw/llm) to call LLM APIs (e.g., Gemini 2.0 Flash by default -- you can specify another model via --model).

    * The system prompt encourages verbosity so you’ll see thorough coverage of internals (you can customize or shorten this via --custom-instructions).
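    The bottom-up traversal is the interesting part: visiting leaf directories first means each parent directory's guide can incorporate the summaries already generated for its children. A hypothetical sketch of that traversal — the `summarize` callback stands in for repo-guide's LLM calls, and the names and structure are illustrative, not repo-guide's actual code:

```python
import os

def generate_guides(root: str, summarize) -> dict:
    """Generate one guide per directory, bottom-up.

    os.walk(topdown=False) yields subdirectories before their parents,
    so by the time a directory is visited, guides for all of its
    children already exist and can be fed into its own summary.
    """
    guides = {}
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        child_guides = [guides[os.path.join(dirpath, d)] for d in sorted(dirnames)]
        # summarize() stands in for an LLM call over the directory's
        # files plus its children's already-generated guides.
        guides[dirpath] = summarize(sorted(filenames), child_guides)
    return guides
```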

    What’s Next?

    * Future ideas include a live chatbot that references both the generated docs and code, plus auto-generated changelogs.

    * This is one of my weekend projects, so maintenance might be sporadic, but I’m happy to take feedback and suggestions!


Stats

Basic llm repo stats
Mentions: 59
Stars: 7,077
Activity: 9.6
Last commit: 6 days ago

simonw/llm is an open source project licensed under the Apache License 2.0, an OSI-approved license.

The primary programming language of llm is Python.


