langchainrb
guidance
| | langchainrb | guidance |
|---|---|---|
| Mentions | 16 | 89 |
| Stars | 1,050 | 12,248 |
| Stars growth (monthly) | 15.4% | - |
| Activity | 9.5 | 9.5 |
| Latest commit | 5 days ago | 9 months ago |
| Language | Ruby | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langchainrb
- Langchain.rb
- First 15 Open Source Advent projects
8. LangChain RB | Github | tutorial
- Create AI Agents in Ruby: Implementing the ReAct Approach
- Lost on LangChain: Can someone help with the Question Answer concept?
So I hooked up the Ruby on Rails langchainrb gem (https://github.com/andreibondarev/langchainrb) and it seems like the approach is to store the plain-text entries as metadata on Pinecone. I definitely DO NOT want to do this, as the data is private and secure in my own DB.
- ruby and ML/AI chatgpt
langchain
- Anyone willing to share their experience with Boxcar.ai?
I would suggest taking a look at Langchain.rb as well. Disclosure: I'm the core maintainer.
- Emerging Architectures for LLM Applications
Is the emerging architecture made out to be more complicated than what most of the companies are currently building? Perhaps! But this is most likely the general direction where things will start trending towards as the auxiliary ecosystem matures.
Shameless plug: for fellow Rubyists, we're building an orchestration layer for building LLM applications, inspired by the original, called Langchain.rb: https://github.com/andreibondarev/langchainrb
- Building an app around a LLM, Rails + Python or just Python?
I'm the author of Langchain.rb.
- 5 things I wish I knew before building a GPT agent for log analysis
@dliteful23 I loved your super detailed lessons-learned article! I'm the author of Langchain.rb, I would love to hear what you think of it if you get a chance to check it out. If there's anything that you'd like to see in the framework, please do let us know and we'll make sure to build it out if it aligns with the vision.
- LangChain: The Missing Manual
We’re building “Langchain for Ruby” under the current working name of “Langchain.rb”: https://github.com/andreibondarev/langchainrb
People who have contributed to the project thus far each have at least a decade of experience programming in Ruby. We're trying our best to build an abstraction layer on top of all of the common emerging AI/ML techniques, tools, and providers. We're also focused on building the excellent developer experience that Ruby developers love and have come to expect.
Unlike the Python project, we'd like to avoid the deeply nested class structures that, as has been pointed out here countless times, make the code incredibly difficult to trace and extend.
We’ve been pondering the question “what does Rails for Machine Learning look like?”, and we’re taking a stab at answering it.
We’re hyper-focused on the open source community and the developer community at large. All feedback/ideas/contributions/criticism are welcome and encouraged!
guidance
- Guidance: A guidance language for controlling large language models
This IS Microsoft Guidance; they seem to have spun off a separate GitHub organization for it.
https://github.com/microsoft/guidance redirects to https://github.com/guidance-ai/guidance now.
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
- Llama: Add Grammar-Based Sampling
... and it sets the value of "armor" to "leather" so that you can use that value later in your code if you wish to. Guidance is pretty powerful, but I find the grammar hard to work with. I think the idea of being able to upload a bit of code or a context-free grammar to guide the model is super smart.
https://github.com/microsoft/guidance/blob/d2c5e3cbb730e337b...
- Introducing TypeChat from Microsoft
Here's one thing I don't get.
Why all the rigamarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, trying to beg the model to pretty please give me the syntax I'm asking for...
...when you can guarantee a valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.
This is what Guidance does already, also from Microsoft: https://github.com/microsoft/guidance
But OpenAI apparently does not expose the full scores of all tokens, it only exposes the highest-scoring token. Which is so odd, because if you run models locally, using Guidance is trivial, and you can guarantee your json is correct every time. It's faster to generate, too!
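The constrained-decoding idea described in this comment can be sketched in a few lines. This is a toy illustration of the principle only, not Guidance's actual implementation: a crude brace/quote-balance check stands in for a real JSON grammar, and the token scores are made up.

```python
# Toy sketch of constraint-guided sampling: instead of greedily taking the
# single highest-scoring token, take the highest-scoring token that keeps
# the output a valid *prefix* of the requested format.

def is_valid_json_prefix(text: str) -> bool:
    """Cheap stand-in for a grammar: could `text` still grow into valid JSON?
    Only brace depth and string quoting are tracked, as an illustration."""
    depth = 0
    in_string = False
    for ch in text:
        if ch == '"':
            in_string = not in_string
        elif not in_string:
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth < 0:  # closed a brace that was never opened
                    return False
    return True

def pick_token(prefix: str, scored_tokens: dict[str, float]) -> str:
    """Highest-scoring candidate token whose addition keeps the prefix valid."""
    for token, _score in sorted(scored_tokens.items(), key=lambda kv: -kv[1]):
        if is_valid_json_prefix(prefix + token):
            return token
    raise ValueError("no token satisfies the constraint")

# The plain greedy choice would be "}" (score 0.9), but that would close a
# brace that was never opened; the constrained pick falls back to "{".
print(pick_token("", {"}": 0.9, "{": 0.6, "x": 0.1}))  # -> {
```

A real implementation filters the model's full logit vector against a grammar automaton at every decoding step, which is why it needs access to all token scores rather than just the top one.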
- Accessing Llama 2 from the command-line with the LLM-replicate plugin
Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local via plugins?
I've been meaning to ask you, have you seen/used MS Guidance[0] 'language' at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in llm cli but there's a lot about Guidance that seems incredibly useful to local inference [token healing and acceleration especially].
[0]https://github.com/microsoft/guidance
- AutoChain, lightweight and testable alternative to LangChain
LangChain is just too much. Personal solutions are great until you need to compare metrics or methodologies of prompt generation. Then the onus is on the n parties sharing their resources to ensure that all of them used the same templates, generated the same way, with the only diff being the models the prompts were run on.
So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.
- Structured Output from LLMs (Without Reprompting!)
I am unclear on the status of the project, but here is the conversation that seems to be tracking it: https://github.com/microsoft/guidance/discussions/201
- /r/guidance is now a subreddit for Guidance, Microsoft's template language for controlling language models!
Let's have a subreddit about Guidance!
- Is there a UI that can limit LLM tokens to a preset list?
- Any suggestions for an open source model for parsing real estate listings?
You should look at guidance to have an LLM fill out a template. Define the output data structure and provide the real estate listing in the context (see the JSON template example here: https://github.com/microsoft/guidance)
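The "JSON template" pattern the comment points at interleaves literal JSON with generation slots. A minimal sketch in guidance's pre-1.0 handlebars-style template language (the field names and prompt wording are illustrative assumptions, not taken from the thread):

```
Extract the key facts from this real estate listing as JSON.

Listing: {{listing}}

{
  "price": "{{gen 'price' stop='"'}}",
  "bedrooms": "{{gen 'bedrooms' stop='"'}}",
  "address": "{{gen 'address' stop='"'}}"
}
```

The model generates text only inside the {{gen ...}} slots; the braces, keys, and quotes around them are emitted verbatim, so the surrounding JSON structure is guaranteed regardless of what the model produces.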
What are some alternatives?
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
ruby-openai - OpenAI API + Ruby! 🤖❤️ Now with Assistants, Threads, Messages, Runs and Text to Speech 🍾
lmql - A language for constraint-guided and efficient LLM programming.
hnsqlite - hnsqlite integrates hnswlib and sqlite for simple text embedding search
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
machine-learning-with-ruby - Curated list: Resources for machine learning in Ruby
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
llama-cpp-python - Python bindings for llama.cpp
guidance - A guidance language for controlling large language models.
llama.cpp - LLM inference in C/C++