How I Program with LLMs

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  1. ollama

    Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language models.

    You can run pretty decent models on your laptop these days, and because everything runs locally it even works in airplane mode.

    https://ollama.com/

  2. bolt.new

    Prompt, run, edit, and deploy full-stack web applications. Help Center: https://support.bolt.new/ -- Community Support: https://discord.com/invite/stackblitz

  3. zencontrol-python

    A Python implementation of the Zencontrol TPI Advanced protocol for DALI lighting.

    No, I'm writing Python. The LLM saves me from needing foreknowledge of this particular language's syntax and grammar. I'm still debugging like a "real" Python programmer, and I'm still editing and refining the code like a "real" programmer, because I am one.

    Here's the code I wrote, if you're curious: https://github.com/sjwright/zencontrol-python/

  4. gptel

    A simple LLM client for Emacs

    > (GitHub Copilot allows selecting different models, but I didn't check more carefully whether that also includes a local one, does anyone know?)

    To my knowledge, it doesn't.

    In Emacs there's gptel, which integrates different LLMs quite nicely, including a local Ollama instance.

    > gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.

    https://github.com/karthink/gptel

  5. gt4llm

    A GT package for working with LLMs
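For the gptel entry above, a local Ollama backend is registered with `gptel-make-ollama`. Here is a minimal configuration sketch; the model name `llama3.2` is only an example (use whatever model you have pulled), and the exact keywords may vary by gptel version, so check its README:

```elisp
;; Sketch: point gptel at a local Ollama server (assumes Ollama is
;; running on its default port and the model has already been pulled).
(setq gptel-model 'llama3.2                     ; example model name
      gptel-backend (gptel-make-ollama "Ollama"
                      :host "localhost:11434"   ; Ollama's default address
                      :stream t                 ; stream replies into the buffer
                      :models '(llama3.2)))     ; models available locally
```

With something like this in an init file, `M-x gptel` opens a chat buffer backed entirely by the local model, no network required.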
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.


Related posts

  • How to Learn AI from Scratch

    3 projects | dev.to | 16 Jun 2025
  • Void + Ollama + LLMs: How I Turned My Code Editor into a Full-Blown AI Workbench

    3 projects | dev.to | 26 May 2025
  • Ollama's llama.cpp licensing issue goes unanswered for over a year

    10 projects | news.ycombinator.com | 16 May 2025
  • Show HN: Clippy, 90s UI for local LLMs

    8 projects | news.ycombinator.com | 6 May 2025
  • Code Reviews with AI: a Developer Guide

    3 projects | dev.to | 18 Feb 2025
