guidance
NeMo-Guardrails
Our great sponsors
guidance | NeMo-Guardrails | |
---|---|---|
22 | 13 | |
16,895 | 3,216 | |
5.3% | 9.8% | |
9.8 | 9.9 | |
2 days ago | 2 days ago | |
Jupyter Notebook | Python | |
MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guidance
-
Show HN: Prompts as (WASM) Programs
> The most obvious usage of this is forcing a model to output valid JSON
Isn't this something that Outlines [0], Guidance [1] and others [2] already solve much more elegantly?
0. https://github.com/outlines-dev/outlines
-
Show HN: Fructose, LLM calls as strongly typed functions
Why do you have Guidance in caps?
https://github.com/guidance-ai/guidance
or ...
https://huggingface.co/docs/text-generation-inference/concep...
or ... ?
A quick glance through these, they don't seem yet to call json_object on OpenAI with the word JSON in the prompt, which works wonders with the 0125 models.
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
-
Prompting LLMs to constrain output
have been experimenting with guidance and lmql. a bit too early to give any well formed opinions but really do like the idea of constraining llm output.
-
New: LangChain templates – fastest way to build a production-ready LLM app
AutoGen (https://github.com/microsoft/autogen) is orthogonal: it's designed for agents to converse with each other.
The original comparison to LangChain from Microsoft was Guidance (https://github.com/guidance-ai/guidance) which appears to have shifted development a bit. I haven't had much experience with it but from the examples it still seems like needless overhead.
-
Is supervised learning dead for computer vision?
Thanks for your comment.
I did not know about "Betteridge's law of headlines", quite interesting. Thanks for sharing :)
You raise some interesting points.
1) Safety: It is true that LVMs and LLMs have unknown biases and could potentially create unsafe content. However, this is not necessarily unique to them, for example, Google had the same problem with their supervised learning model https://www.theverge.com/2018/1/12/16882408/google-racist-go.... It all depends on the original data. I believe we need systems on top of our models to ensure safety. It is also possible to restrict the output domain of our models (https://github.com/guidance-ai/guidance). Instead of allowing our LVMs to output any words, we could restrict it to only being able to answer "red, green, blue..." when giving the color of a car.
2) Cost: You are right right now LVMs are quite expensive to run. As you said are a great way to go to market faster but they cannot run on low-cost hardware for the moment. However, they could help with training those smaller models. Indeed, with see in the NLP domain that a lot of smaller models are trained on data created with GPT models. You can still distill the knowledge of your LVMs into a custom smaller model that can run on embedded devices. The advantage is that you can use your LVMs to generate data when it is scarce and use it as a fallback when your smaller device is uncertain of the answer.
3) Labelling data: I don't think labeling data is necessarily cheap. First, you have to collect the data, depending on the frequency of your events could take months of monitoring if you want to build a large-scale dataset. Lastly, not all labeling is necessarily cheap. I worked at a semiconductor company and labeled data was scarce as it required expert knowledge and could only be done by experienced employees. Indeed not all labelling can be done externally.
However, both approaches are indeed complementary and I think systems that will work the best will rely on both.
Thanks again for the thought-provoking discussion. I hope this answer some of the concerns you raised
-
Show HN: Elelem – TypeScript LLMs with tracing, retries, and type safety
I've had a bit of trouble getting function calling to work with cases that aren't just extracting some data from the input. The format is correct but it was harder to get the correct data if it wasn't a simple extraction.
Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.
Failed validations will retry, but from what I've seen JSONSchema + generated JSON examples are decently reliable in practice for gpt-3.5-turbo and extremely reliable on gpt-4.
-
Show HN: Magentic – Use LLMs as simple Python functions
Right now it just works with OpenAI chat models (gpt-3.5-turbo, gpt-4) but if there's interest I plan to extend it to have several backends. These would probably each be an existing library that implements generating structured output like https://github.com/outlines-dev/outlines or https://github.com/guidance-ai/guidance. If you have ideas how this should be done let me know - on a github issue would be great to make it visible to others.
NeMo-Guardrails
-
Run and create custom ChatGPT-like bots with OpenChat
- https://github.com/NVIDIA/NeMo-Guardrails/
Yes, this is feasible.
Look into https://github.com/NVIDIA/NeMo-Guardrails and specifically to your question there are "topical rails" to ensure the conversation stays on a set of topics you greenlighted.
Also takes care of jailbreaks and allows custom conversation flow templates.
- LangChain: The Missing Manual
-
The Dual LLM pattern for building AI assistants that can resist prompt injection
Here's "jailbreak detection", in the NeMo-Guardrails project from Nvidia:
https://github.com/NVIDIA/NeMo-Guardrails/blob/327da8a42d5f8...
I.e. they ask the llm if the prompt will break the llm. (I believe that more data /some evaluation on how well this performs is intended to be released. Probably fair to call this stuff "not battle tested".)
-
How To Setup a Model With Guardrails?
I have been playing around with some models locally and creating a discord bot as a fun side project, and I wanted to setup some guardrails on inputs / outputs of the bot to make sure that it isn't violating any ethical boundaries. I was going to use Nvidia's Nemo guardrails, but they only support openai currently. Are there any other good ways to control inputs?
-
RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
Thanks, I hadn't seen those. I did find https://github.com/NVIDIA/NeMo-Guardrails earlier but haven't looked into it yet.
I'm not sure it solves the problem of restricting the information it uses though. For example, as a proof of concept for a customer, I tried providing information from a vector database as context, but GPT would still answer questions that were not provided in that context. It would base its answers on information that was already crawled from the customer website and in the model. That is concerning because the website might get updated but you can't update the model yourself (among other reasons).
-
Should LangChain be used in Prod?
you can use guard rails with langchain - https://github.com/NVIDIA/NeMo-Guardrails
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
langchain - 🦜🔗 Build context-aware reasoning applications
langchainrb - Build LLM-backed Ruby applications
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
outlines - Structured Text Generation
localLLM_langchain - Local LLM Agent with Langchain
pgvector - Open-source vector similarity search for Postgres