| | llm | langroid |
|---|---|---|
| Mentions | 23 | 15 |
| Stars | 2,991 | 1,594 |
| Growth | - | 16.2% |
| Activity | 9.4 | 9.8 |
| Latest commit | 3 days ago | 4 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Mentions of llm
- FLaNK AI-April 22, 2024
- Show HN: I made a tool to clean and convert any webpage to Markdown
That's a great use case, you might be able to do this if you've got copy and paste on the command line, with https://github.com/simonw/llm in between. An alias like pdfwtf translating to "paste | llm command | copy".
- Command R+: A Scalable LLM Built for Business
I added support for this model to my LLM CLI tool via a new plugin: https://github.com/simonw/llm-command-r
So now you can do this:
pipx install llm
- The Next Generation of Claude (Claude 3)
If you're willing to use the CLI, Simon Willison's llm library[0] should do the trick.
[0] https://github.com/simonw/llm
- Show HN: I made an app to use local AI as daily driver
- Localllm lets you develop gen AI apps on local CPUs
I'm not thrilled about https://github.com/GoogleCloudPlatform/localllm/blob/main/ll... calling their Python package "llm" and installing "llm" as a CLI command, when my similar https://llm.datasette.io/ project has that namespace reserved on PyPI already: https://pypi.org/project/llm/
- FLaNK 15 Jan 2024
- Show HN: Simple Script for Enhanced LLM Interaction in Vim
- Bash One-Liners for LLMs
I've been gleefully exploring the intersection of LLMs and CLI utilities for a few months now - they are such a great fit for each other! The unix philosophy of piping things together is a perfect fit for how LLMs work.
I've mostly been exploring this with my https://llm.datasette.io/ CLI tool, but I have a few other one-off tools as well: https://github.com/simonw/blip-caption and https://github.com/simonw/ospeak
I'm puzzled that more people aren't loudly exploring this space (LLM+CLI) - it's really fun.
- Semantic Kernel
Seems nice if you're using C# or Java. It also supports Python, but for that Simon's llm library is nice because he designed it as both a library and a command-line tool: https://github.com/simonw/llm
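For the Python side, a minimal sketch of using llm as a library rather than a CLI, based on the Python API documented at https://llm.datasette.io/ (the model ID is just an example, and an API key is assumed to be configured):

```python
import llm  # pip install llm

# The same models the `llm` CLI exposes are also available from Python.
model = llm.get_model("gpt-4o-mini")  # example model ID; any installed model works
response = model.prompt(
    "Summarize the Unix philosophy in one sentence.",
    system="Answer concisely.",
)
print(response.text())
```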
Mentions of langroid
- OpenAI: Streaming is now available in the Assistants API
This was indeed true in the beginning, and I don't know if this has changed. Inserting messages with the assistant role is crucial for many reasons, such as if you want to implement caching, or otherwise edit/compress a previous assistant response for cost or other reasons.
At the time I implemented a workaround in Langroid[1]: since you can only insert a "user" role message, prepend the content with ASSISTANT: whenever you want it to be treated as an assistant message. This actually works as expected and I was able to do caching. I explained it in this forum:
https://community.openai.com/t/add-custom-roles-to-messages-...
[1] The Langroid code that adds a message with a given role, using the "assistant spoofing" trick described above:
https://github.com/langroid/langroid/blob/main/langroid/agen...
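A minimal sketch of the workaround described above, written against the openai Python client rather than Langroid itself (thread ID and content are placeholders, and the details are illustrative only):

```python
from openai import OpenAI

client = OpenAI()

def add_message(thread_id: str, role: str, content: str) -> None:
    """Insert a message into an Assistants API thread.

    The Assistants API initially accepted only role="user" for inserted
    messages, so an assistant-role message is "spoofed" by prepending
    ASSISTANT: to the content, as described in the comment above.
    """
    if role == "assistant":
        role, content = "user", f"ASSISTANT: {content}"
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role=role,
        content=content,
    )

# e.g. replay a cached assistant reply into a new thread:
# add_message(thread_id, "assistant", cached_reply)
```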
- FLaNK Stack 29 Jan 2024
- Ollama Python and JavaScript Libraries
Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching etc), it would be nice to be able to simply switch the API client to Ollama, without having to maintain a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it's not ideal (and I also recently found issues with their chat formatting for Mistral models).
For an OpenAI-compatible API, my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint. Regarding chat formatting, even ooba's Mistral formatting has issues[1], so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2]
[1] https://github.com/oobabooga/text-generation-webui/issues/53...
[2] https://github.com/langroid/langroid/blob/main/langroid/lang...
Related question - I assume Ollama auto-detects and applies the right chat formatting template for a model?
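A minimal sketch of the apply_chat_template approach pointed to in [2], using a Hugging Face tokenizer directly (the model name is just an example; this is the general technique, not the Langroid code itself):

```python
from transformers import AutoTokenizer

# Let the model's own tokenizer produce the correctly formatted prompt,
# e.g. Mistral's [INST] ... [/INST] wrapping, instead of hand-rolling it.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris."},
    {"role": "user", "content": "And of Italy?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```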
- Pushing ChatGPT's Structured Data Support to Its Limits
We (like simpleaichat from OP) leverage Pydantic to specify the desired structured output, and under the hood Langroid translates it to either the OpenAI function-calling params or (for LLMs that don't natively support fn-calling) auto-inserts appropriate instructions into the system prompt. We call this mechanism a ToolMessage:
https://github.com/langroid/langroid/blob/main/langroid/agen...
We take this idea much further — you can define a method in a ChatAgent to “handle” the tool and attach the tool to the agent. For stateless tools you can define a “handle” method in the tool itself and it gets patched into the ChatAgent as the handler for the tool.
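A generic illustration of that idea (not Langroid's actual ToolMessage code): a Pydantic v2 model doubles as the schema that is either passed as OpenAI function-calling parameters or spelled out in the system prompt for models without native function calling.

```python
import json
from pydantic import BaseModel, Field

class CityTemperature(BaseModel):
    """Structured output we want the LLM to produce."""
    city: str = Field(description="Name of the city")
    temperature_c: float = Field(description="Temperature in Celsius")

schema = CityTemperature.model_json_schema()

# Native function calling: pass the schema as an OpenAI tool definition.
openai_tool = {
    "type": "function",
    "function": {
        "name": "city_temperature",
        "description": "Report the temperature in a city",
        "parameters": schema,
    },
}

# Fallback for models without function calling: put the schema in the
# system prompt and ask for JSON that matches it.
system_prompt = (
    "When answering, respond ONLY with JSON matching this schema:\n"
    + json.dumps(schema, indent=2)
)
```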
- Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Many services/platforms are careless/disingenuous when they claim they "train" on your documents, when they actually mean they do RAG.
An under-appreciated benefit of RAG is the ability to have the LLM cite sources for its answers (which are in principle automatically/manually verifiable). You lose this citation ability when you fine-tune on your documents.
In Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers) https://github.com/langroid/langroid
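To make the citation point concrete, here is a bare-bones sketch (not Langroid code, just an illustration of the idea): number the retrieved chunks in the prompt and instruct the model to cite them, so answers can be traced back to source passages.

```python
def build_cited_prompt(question: str, chunks: list[str]) -> str:
    # Label each retrieved passage so the model can cite it as [1], [2], ...
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(chunks, start=1))
    return (
        "Answer the question using ONLY the passages below, and cite the "
        "passage numbers you used, e.g. [2].\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )
```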
- Build a search engine, not a vector DB
This resonates with the approach we’ve taken in Langroid (the Multi-Agent framework from ex-CMU/UW-Madison researchers): our DocChatAgent uses a combination of lexical and semantic retrieval, reranking and relevance extraction to improve precision and recall:
https://github.com/langroid/langroid/blob/main/langroid/agen...
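As a toy illustration of combining lexical and semantic retrieval (this is not the DocChatAgent implementation), one common approach is to run both retrievers and merge their rankings with reciprocal rank fusion before any reranking or relevance-extraction step:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document IDs into one ranking."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# lexical_hits  = bm25_index.search(query)           # hypothetical keyword/BM25 results
# semantic_hits = vector_store.search(embed(query))  # hypothetical embedding results
# candidates = reciprocal_rank_fusion([lexical_hits, semantic_hits])[:20]
```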
- HuggingChat – ChatGPT alternative with open source models
In the Langroid library (a multi-agent framework from ex-CMU/UW-Madison researchers) we have these and more. For example, here's a script that combines web search and RAG:
https://github.com/langroid/langroid/blob/main/examples/docq...
- SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
Thanks, also found Langroid: https://github.com/langroid/langroid/blob/main/README.md
- memory in ConversationalRetrievalChain removed
- [D] github repositories for ai web search agents
What are some alternatives?
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
simpleaichat - Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
modelfusion - The TypeScript library for building AI applications.
multi-gpt - A Clojure interface into the GPT API with advanced tools like conversational memory, task management, and more
autogen - A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
jehuty - Fluent API to interact with chat based GPT model
vectordb - A minimal Python package for storing and retrieving text using chunking, embeddings, and vector search.
llm-replicate - LLM plugin for models hosted on Replicate
Adala - Adala: Autonomous DAta (Labeling) Agent framework
aipl - Array-Inspired Pipeline Language
chidori - A reactive runtime for building durable AI agents