| | simpleAI | guidance |
|---|---|---|
| Mentions | 11 | 89 |
| Stars | 323 | 12,248 |
| Growth | - | - |
| Activity | 7.3 | 9.5 |
| Latest commit | 12 months ago | 9 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
simpleAI
- [P] I got fed up with LangChain, so I made a simple open-source alternative for building Python AI apps as easy and intuitive as possible.
Not related to my own project SimpleAI, despite the name, but it looks like we can easily make the two work together, to keep it "simple". Nice work!
- Run and create custom ChatGPT-like bots with OpenChat
Using this as an opportunity to mention my own related project, perhaps it can end up on your nice list one day. :)
https://github.com/lhenault/SimpleAI
- [D] OpenAI API vs. Open Source Self hosted for AI Startups
- StableLM released
You could have a look at a project I’ve been working on, SimpleAI, which does exactly this by replicating the OpenAI endpoints (you can then use their JS client for integration). Adding StableLM should be straightforward; I plan to add it to the examples in the coming days once I have a bit of time.
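Since the endpoints mirror the OpenAI API, pointing a client at a self-hosted server is mostly a matter of swapping the base URL. A minimal sketch, assuming a hypothetical local server at `http://localhost:8080/v1` and a made-up model name; only the request shape follows the OpenAI convention:

```python
import json

def build_chat_request(base_url: str, model: str, messages: list) -> tuple:
    """Build the URL and JSON body for an OpenAI-style /chat/completions call."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return url, body

# Hypothetical local endpoint and model name:
url, body = build_chat_request(
    "http://localhost:8080/v1",
    "stablelm-tuned-alpha-7b",
    [{"role": "user", "content": "Hello!"}],
)

# The actual call (not run here) would then be e.g.:
#   from urllib.request import Request, urlopen
#   resp = urlopen(Request(url, data=body,
#                          headers={"Content-Type": "application/json"}))
```

An official OpenAI client would do the same thing under the hood once its base URL is overridden to point at the self-hosted server.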
- [P] LoopGPT: A Modular Auto-GPT Framework
I’ve built SimpleAI with exactly these kinds of use cases in mind. It should let you support any model with minimal or no changes to your project. Good job and good luck with LoopGPT, it looks nice!
- Using the API in Node
You could give this a shot: https://github.com/lhenault/simpleAI
- [D] Would a Tesla M40 provide cheap inference acceleration for self-hosted LLMs?
I don't know if this applies to your use case, but it would probably work if you are looking for an LLM to help with programming. I haven't really played around with it, but it may work for general LLM tasks; it doesn't have a web UI though.
- Alpaca, LLaMa, Vicuna [D]
As for llama.cpp specifically, you can indeed add any model: it's just a matter of writing a bit of glue code and declaring the model in your models.toml config. It's quite straightforward thanks to some provided tools for Python (see here for instance). For any other language, it's a matter of integrating it through the gRPC interface (which shouldn't be too hard for llama.cpp if you're comfortable in C++). I'm planning to add support for REST model backends at some point too.
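As a rough sketch of what declaring a model could look like: the entry below is illustrative only, modeled on the shape the project README used at the time, so the field names and values are assumptions and the current schema may differ.

```toml
# Hypothetical models.toml entry; check the SimpleAI repo for the real schema.
[llama-7B-4b]
    [llama-7B-4b.metadata]
        owned_by    = 'Meta'
        permission  = []
        description = 'llama.cpp model, 7B parameters, 4-bit quantization'
    [llama-7B-4b.network]
        # Where the gRPC model server for this backend is listening.
        url  = 'localhost:50051'
        type = 'gRPC'
```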
- [D] Is there currently anything comparable to the OpenAI API?
Shameless plug, but I’ve recently been working on SimpleAI, a project replicating the main endpoints of the OpenAI API, allowing you to seamlessly switch from their API to your own, as it’s compatible with the OpenAI client.
- [P] SimpleAI: A self-hosted alternative to OpenAI API
I wanted to share with you SimpleAI, a self-hosted alternative to the OpenAI API.
guidance
- Guidance: A guidance language for controlling large language models
This IS Microsoft Guidance, they seem to have spun off a separate GitHub organization for it.
https://github.com/microsoft/guidance redirects to https://github.com/guidance-ai/guidance now.
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
- Llama: Add Grammar-Based Sampling
... and it sets the value of "armor" to "leather" so that you can use that value later in your code if you wish to. Guidance is pretty powerful, but I find the grammar hard to work with. I think the idea of being able to upload a bit of code or a context-free grammar to guide the model is super smart.
https://github.com/microsoft/guidance/blob/d2c5e3cbb730e337b...
- Introducing TypeChat from Microsoft
Here's one thing I don't get.
Why all the rigamarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, trying to beg the model to pretty please give me the syntax I'm asking for...
...when you can guarantee a valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.
This is what Guidance does already, also from Microsoft: https://github.com/microsoft/guidance
But OpenAI apparently does not expose the full scores of all tokens; it only exposes the highest-scoring token. Which is so odd, because if you run models locally, using Guidance is trivial, and you can guarantee your JSON is correct every time. It's faster to generate, too!
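The token-masking idea described above can be sketched in a few lines: instead of the global argmax over token scores, take the argmax over only the tokens a JSON grammar allows at the current position. The vocabulary and scores below are made up for illustration.

```python
# Toy illustration of format-constrained decoding.

def constrained_argmax(scores: dict, allowed: set) -> str:
    """Highest-scoring token among those the format allows."""
    return max(allowed, key=lambda tok: scores[tok])

# Made-up token scores at one decoding step, right after emitting '{"armor": '
scores = {"maybe": 2.1, '"leather"': 1.7, "[": 0.4}

# Unconstrained greedy decoding picks the global argmax, "maybe",
# which is invalid JSON in this position.
greedy = max(scores, key=lambda tok: scores[tok])  # "maybe"

# A JSON grammar knows only a value (string, number, object, array, ...)
# may follow ':', so the bare word "maybe" is masked out.
allowed = {'"leather"', "["}
picked = constrained_argmax(scores, allowed)  # '"leather"'
```

In a real sampler the mask would be recomputed at every step from the grammar state, but the per-step selection is exactly this restricted argmax.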
- Accessing Llama 2 from the command-line with the LLM-replicate plugin
Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local via plugins?
I've been meaning to ask you: have you seen or used MS Guidance[0] 'language' at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in the llm CLI, but there's a lot about Guidance that seems incredibly useful for local inference (token healing and acceleration especially).
[0]https://github.com/microsoft/guidance
- AutoChain, lightweight and testable alternative to LangChain
LangChain is just too much. Personal solutions are great, until you need to compare metrics or methodologies of prompt generation. Then the onus is on the n parties sharing their resources to ensure that all of them used the same templates, generated the same way, with the only difference being the models the prompts were run on.
So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.
- Structured Output from LLMs (Without Reprompting!)
I am unclear on the status of the project, but here is the conversation that seems to be tracking it: https://github.com/microsoft/guidance/discussions/201
- /r/guidance is now a subreddit for Guidance, Microsoft's template language for controlling language models!
Let's have a subreddit about Guidance!
- Is there a UI that can limit LLM tokens to a preset list?
- Any suggestions for an open source model for parsing real estate listings?
You should look at guidance for getting an LLM to fill out a template. Define the output data structure and provide the real estate listing in the context (see the JSON template example here: https://github.com/microsoft/guidance).
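A sketch of what such a template could look like, modeled on the JSON example in the guidance README: the field names are made up for a listing, and the handlebars-style syntax below is from older guidance versions, so check the current docs before using it.

```handlebars
The following real estate listing:
{{listing}}
Parsed as JSON:
{
    "price_usd": {{gen 'price' pattern='[0-9]+'}},
    "bedrooms": {{gen 'bedrooms' pattern='[0-9]+'}},
    "description": "{{gen 'description' stop='"'}}"
}
```

The literal braces, keys, and quotes are emitted verbatim, and the model only generates the `{{gen ...}}` holes, so the output structure is fixed by construction.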
What are some alternatives?
OpenChat - LLMs custom-chatbots console ⚡
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
dalai - The simplest way to run LLaMA on your local machine
lmql - A language for constraint-guided and efficient LLM programming.
AlpacaDataCleaned - Alpaca dataset from Stanford, cleaned and curated
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
gptcli - ChatGPT in command line with OpenAI API (gpt-3.5-turbo/gpt-4/gpt-4-32k)
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
StableLM - StableLM: Stability AI Language Models
llama-cpp-python - Python bindings for llama.cpp
loopgpt - Modular Auto-GPT Framework
langchainrb - Build LLM-powered applications in Ruby