NeMo-Guardrails
guidance
NeMo-Guardrails | guidance | |
---|---|---|
13 | 23 | |
3,373 | 17,357 | |
4.7% | 2.7% | |
9.9 | 9.8 | |
5 days ago | 6 days ago | |
Python | Jupyter Notebook | |
GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
NeMo-Guardrails
- NeMO Guardrails from Nvidia
-
Run and create custom ChatGPT-like bots with OpenChat
- https://github.com/NVIDIA/NeMo-Guardrails/
- LangChain: The Missing Manual
-
The Dual LLM pattern for building AI assistants that can resist prompt injection
Here's "jailbreak detection", in the NeMo-Guardrails project from Nvidia:
https://github.com/NVIDIA/NeMo-Guardrails/blob/327da8a42d5f8...
I.e. they ask the llm if the prompt will break the llm. (I believe that more data /some evaluation on how well this performs is intended to be released. Probably fair to call this stuff "not battle tested".)
-
How To Setup a Model With Guardrails?
I have been playing around with some models locally and creating a discord bot as a fun side project, and I wanted to setup some guardrails on inputs / outputs of the bot to make sure that it isn't violating any ethical boundaries. I was going to use Nvidia's Nemo guardrails, but they only support openai currently. Are there any other good ways to control inputs?
-
RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
Thanks, I hadn't seen those. I did find https://github.com/NVIDIA/NeMo-Guardrails earlier but haven't looked into it yet.
I'm not sure it solves the problem of restricting the information it uses though. For example, as a proof of concept for a customer, I tried providing information from a vector database as context, but GPT would still answer questions that were not provided in that context. It would base its answers on information that was already crawled from the customer website and in the model. That is concerning because the website might get updated but you can't update the model yourself (among other reasons).
- How do we prevent prompt injection in a GPT API app?
- Nvidia NeMo Guardrails – open-source guardrails to conversational systems
-
Should LangChain be used in Prod?
you can use guard rails with langchain - https://github.com/NVIDIA/NeMo-Guardrails
guidance
-
Anthropic's Haiku Beats GPT-4 Turbo in Tool Use
[1]: https://github.com/guidance-ai/guidance/tree/main
-
Show HN: Prompts as (WASM) Programs
> The most obvious usage of this is forcing a model to output valid JSON
Isn't this something that Outlines [0], Guidance [1] and others [2] already solve much more elegantly?
0. https://github.com/outlines-dev/outlines
1. https://github.com/guidance-ai/guidance
2. https://github.com/sgl-project/sglang
- Show HN: Fructose, LLM calls as strongly typed functions
-
LiteLlama-460M-1T has 460M parameters trained with 1T tokens
Or combine it with something like llama.cpp's grammer or microsoft's guidance-ai[0] (which I prefer) which would allow adding some react-style prompting and external tools. As others have mentioned, instruct tuning would help too.
[0] https://github.com/guidance-ai/guidance
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
-
Prompting LLMs to constrain output
have been experimenting with guidance and lmql. a bit too early to give any well formed opinions but really do like the idea of constraining llm output.
- Guidance is back 🥳
- New: LangChain templates – fastest way to build a production-ready LLM app
-
Is supervised learning dead for computer vision?
Thanks for your comment.
I did not know about "Betteridge's law of headlines", quite interesting. Thanks for sharing :)
You raise some interesting points.
1) Safety: It is true that LVMs and LLMs have unknown biases and could potentially create unsafe content. However, this is not necessarily unique to them, for example, Google had the same problem with their supervised learning model https://www.theverge.com/2018/1/12/16882408/google-racist-go.... It all depends on the original data. I believe we need systems on top of our models to ensure safety. It is also possible to restrict the output domain of our models (https://github.com/guidance-ai/guidance). Instead of allowing our LVMs to output any words, we could restrict it to only being able to answer "red, green, blue..." when giving the color of a car.
2) Cost: You are right right now LVMs are quite expensive to run. As you said are a great way to go to market faster but they cannot run on low-cost hardware for the moment. However, they could help with training those smaller models. Indeed, with see in the NLP domain that a lot of smaller models are trained on data created with GPT models. You can still distill the knowledge of your LVMs into a custom smaller model that can run on embedded devices. The advantage is that you can use your LVMs to generate data when it is scarce and use it as a fallback when your smaller device is uncertain of the answer.
3) Labelling data: I don't think labeling data is necessarily cheap. First, you have to collect the data, depending on the frequency of your events could take months of monitoring if you want to build a large-scale dataset. Lastly, not all labeling is necessarily cheap. I worked at a semiconductor company and labeled data was scarce as it required expert knowledge and could only be done by experienced employees. Indeed not all labelling can be done externally.
However, both approaches are indeed complementary and I think systems that will work the best will rely on both.
Thanks again for the thought-provoking discussion. I hope this answer some of the concerns you raised
-
Show HN: Elelem – TypeScript LLMs with tracing, retries, and type safety
I've had a bit of trouble getting function calling to work with cases that aren't just extracting some data from the input. The format is correct but it was harder to get the correct data if it wasn't a simple extraction.
Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.
Failed validations will retry, but from what I've seen JSONSchema + generated JSON examples are decently reliable in practice for gpt-3.5-turbo and extremely reliable on gpt-4.
What are some alternatives?
guidance - A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance]
lmql - A language for constraint-guided and efficient LLM programming.
langchainrb - Build LLM-powered applications in Ruby
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
langchain - 🦜🔗 Build context-aware reasoning applications
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
outlines - Structured Text Generation
pgvector - Open-source vector similarity search for Postgres
localLLM_langchain - Local LLM Agent with Langchain