Agency Alternatives
Similar projects and alternatives to agency
-
guidance
Discontinued. A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance] (by microsoft)
-
Constrained-Text-Generation-Studio
Code repo for "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio" at the CAI2 workshop, jointly held at COLING 2022
-
panml
PanML is a high-level generative AI/ML development and analysis library designed for ease of use and fast experimentation.
-
simpleaichat
Python package for easily interfacing with chat apps, with robust features and minimal code complexity.
-
simpleAI
An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.
-
feste
Feste is a free and open-source framework allowing scalable composition of NLP tasks using a graph execution model that is optimized and executed by specialized schedulers.
agency reviews and mentions
-
Show HN: LLaMA tokenizer that runs in browser
Tokenizers seem to be a massive pain in the neck if you are just calling into an API to use your model. The algorithm itself is non-trivial, and they need pretty sizable data to function: the vocabulary and the merges, which just sit there, using memory. I'm writing https://github.com/ryszard/agency in Go, and while there's a good library for OpenAI tokenization, the best I found for the HF models was a library calling into HF's Rust implementation, which makes it horrible for distribution.
However, at some point I realized that I didn't really need the tokens, just the token count, as my most important use was implementing a Token Buffer Memory (trim messages from the beginning in such a way that you never exceed the context size in tokens). And for that it doesn't have to be exactly right, just mostly right, as long as I am OK with slightly suboptimal efficiency (keeping slightly fewer tokens than the model supports). So, I took files from Project Gutenberg and compared the token count from a proper tokenizer against the result of just calling `strings.Split`, and the ratio seems to be remarkably stable for a given model and language (multiply the length of the result of splitting on spaces by 1.55 for OpenAI and 1.7 for Claude, which leaves a tiny safety margin).
I'm not throwing shade at this project – just being able to call the tokenizer would've saved me a lot of time. But I hope that if I'm wrong about the estimates being good enough, some good person will point out the error of my ways :)
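A minimal Go sketch of the estimate described in that comment; the multipliers are the ones quoted above, and `approxTokens` is a hypothetical helper for illustration, not part of agency's API:

```go
package main

import (
	"fmt"
	"strings"
)

// approxTokens estimates a text's token count using the heuristic from
// the comment above: count the pieces produced by splitting on spaces
// and multiply by an experimentally determined, model-specific factor.
func approxTokens(text string, factor float64) int {
	words := len(strings.Split(text, " "))
	return int(float64(words) * factor)
}

func main() {
	const (
		openAIFactor = 1.55 // factor quoted above for OpenAI models
		claudeFactor = 1.7  // factor quoted above for Claude
	)
	text := "Pack my box with five dozen liquor jugs."
	fmt.Printf("OpenAI estimate: %d tokens\n", approxTokens(text, openAIFactor))
	fmt.Printf("Claude estimate: %d tokens\n", approxTokens(text, claudeFactor))
}
```

Because the factors were calibrated with a small safety margin, overestimating slightly is the intended failure mode: you may prune a message or two early, but you never blow past the context window.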
-
Understanding GPT Tokenizers
How I wish this post had appeared a few days earlier... I am writing my own library for some agent experiments (in Go, to make my life more interesting, I guess), and knowing the number of tokens is important for implementing a token buffer memory (as you approach the model's context window size, you prune enough messages from the beginning of the conversation that the whole thing stays under some given size, in tokens).
While there's a nice native library in Go for OpenAI models (https://github.com/tiktoken-go/tokenizer), the only library I found for Hugging Face models (and Claude; they published their tokenizer spec in the same JSON format) calls into HF's Rust implementation, which makes it challenging as a dependency in Go. What is more, any tokenizer needs to keep some representation of its vocabulary in memory.
So, in the end I removed the true tokenizers and ended up using an approximate version (just split on spaces and multiply by a factor I determined experimentally for the models I use, with the real tokenizer as the reference and a little extra for safety). If it turns out someone needs the real thing, they can always provide their own token counter. I was actually rather happy with this result: I have fewer dependencies and use less memory. But to get there I needed to do a deep dive to understand BPE tokenizers :)
(The library, if anyone is interested: https://github.com/ryszard/agency.)
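A minimal sketch of the token buffer memory idea from that comment, assuming a hypothetical Message type and a pluggable token counter (an illustration under those assumptions, not agency's actual API):

```go
package memory

// Message is a hypothetical stand-in for a chat message.
type Message struct {
	Role    string
	Content string
}

// trimToBudget drops the oldest messages until the conversation's
// estimated token count fits within budget. countTokens can be a real
// tokenizer or the split-and-multiply approximation described above.
func trimToBudget(msgs []Message, budget int, countTokens func(string) int) []Message {
	total := 0
	for _, m := range msgs {
		total += countTokens(m.Content)
	}
	for len(msgs) > 0 && total > budget {
		total -= countTokens(msgs[0].Content)
		msgs = msgs[1:]
	}
	return msgs
}
```

A caller would run this before each request, passing as the budget the model's context size minus whatever room is needed for the reply.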
-
[P] I got fed up with LangChain, so I made a simple open-source alternative for building Python AI apps as easy and intuitive as possible.
I completely agree about LangChain being brittle; what I really hate is that it's really hard to make sense of what is going on by reading the code. I was similarly frustrated and rolled my own thing in Go (shameless plug): https://github.com/ryszard/agency
- 🏢🤖Agency - An Idiomatic Go Interface for the OpenAI API🚀
- Agency - An Idiomatic Go Interface for the OpenAI API (request for feedback)
Stats
ryszard/agency is an open-source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of agency is Go.