| | guardrails | Promptify |
|---|---|---|
| Mentions | 13 | 29 |
| Stars | 3,361 | 3,046 |
| Growth | 6.4% | 2.3% |
| Activity | 9.9 | 8.5 |
| Latest commit | 6 days ago | about 2 months ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guardrails
- Guardrails AI
- Does anyone have an example of a langchain based customer facing agent like a cashier/waitress?
- Is there a UI that can limit LLM tokens to a preset list?
- A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
You're absolutely correct, and I agree that there's potentially a risk of quality loss. But likewise, since these are all intrinsically linked, it may be possible to leverage strength by combining these tasks. I'm unaware of a paper reviewing the reliability and/or performance of LLMs in this specific scenario. If you find any, do share :) With regards to generating JSON responses - there are simple ways to nudge the model and even validate it, using libraries such as https://github.com/promptslab/Promptify, https://github.com/eyurtsev/kor and https://github.com/ShreyaR/guardrails
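The nudge-parse-validate-retry loop that libraries like guardrails, kor, and Promptify implement can be sketched in plain Python. In this minimal sketch, `call_llm` and its canned replies are stand-ins for a real LLM API call, not any library's actual interface:

```python
import json

# Hypothetical stand-in for a real LLM call; it fails once with truncated
# output, then returns valid JSON, to exercise the retry path.
_responses = iter([
    'Sure! Here is the JSON: {"name": "Ada"',          # chatty + truncated
    '{"name": "Ada", "occupation": "mathematician"}',  # valid
])

def call_llm(prompt: str) -> str:
    return next(_responses)

def get_json(prompt: str, required_keys: set, max_retries: int = 3) -> dict:
    """Prompt for JSON, parse, validate keys, and re-prompt on failure."""
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as err:
            # Nudge: feed the error back and ask for JSON only.
            prompt += f"\nYour last reply was not valid JSON ({err}). Reply with JSON only."
            continue
        missing = required_keys - data.keys()
        if missing:
            prompt += f"\nYour last reply was missing keys: {sorted(missing)}."
            continue
        return data
    raise ValueError("no valid JSON after retries")

result = get_json("Return a JSON object with keys name and occupation.",
                  {"name", "occupation"})
print(result["name"])  # Ada
```

The dedicated libraries add schema languages, type coercion, and richer re-asking strategies on top of this same loop.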
- Ask HN: People who were laid off or quit recently, how are you doing?
- Ask HN: AI to study my DSL and then output it?
There are a couple different approaches:
- Use multi-shot prompting with something like guardrails to try prompting a commercial model until it works. [1]
- Use a local model with a final layer that steers token selection towards syntactically valid tokens [2]
[1] https://github.com/ShreyaR/guardrails
[2] "Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema" @ https://github.com/newhouseb/clownfish.
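The second approach can be illustrated with a toy: at each decoding step, mask out candidate tokens whose addition would make the output an invalid prefix of the target grammar, then pick the best remaining token. This is a minimal sketch with a hand-rolled grammar (JSON-style arrays of integers) and mock per-step scores standing in for a model's logits; it is not clownfish's implementation:

```python
import re

# Target grammar: arrays of integers such as [12,7].
FULL = re.compile(r"\[\d+(,\d+)*\]")

def is_valid_prefix(s: str) -> bool:
    """True if s is a prefix of some string matching FULL."""
    state = "start"              # expect '['
    for ch in s:
        if state == "start":
            if ch != "[":
                return False
            state = "digit"      # expect the first digit of a number
        elif state == "digit":
            if not ch.isdigit():
                return False
            state = "number"     # inside a number
        elif state == "number":
            if ch.isdigit():
                pass             # number continues
            elif ch == ",":
                state = "digit"  # next number must start
            elif ch == "]":
                state = "done"
            else:
                return False
        else:                    # "done": nothing may follow ']'
            return False
    return True

# Mock per-step token scores; note the top-scoring token is sometimes
# grammatically invalid and gets masked out.
mock_steps = [
    {"hello": 5.0, "[": 3.0},
    {"]": 4.0, "1": 2.0},
    {"2": 3.0, ",": 1.0},
    {",": 2.0, "]": 1.0},
    {"7": 9.0},
    {"]": 9.0, ",": 1.0},
]

out = ""
for scores in mock_steps:
    allowed = {t: s for t, s in scores.items() if is_valid_prefix(out + t)}
    if not allowed:
        break
    out += max(allowed, key=allowed.get)

print(out)  # [12,7]
```

Real constrained-decoding systems track parser or automaton state incrementally rather than re-scanning the prefix, but the masking idea is the same.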
- Introducing 🤖 Megabots - State-of-the-art, production-ready full-stack LLM apps made mega-easy with LangChain and FastAPI
👍 validate and correct the outputs of LLMs using guardrails
- For consistent output from vicuna 13b
- [D] Is all the talk about what GPT can do on Twitter and Reddit exaggerated or fairly accurate?
not vouching for it, but I know this is at least a thing that exists and I like the general idea: https://github.com/shreyar/guardrails
- Introducing Agents in Haystack: Make LLMs resolve complex tasks
Promptify
- Promptify 2.0: More Structured, More Powerful LLMs with Prompt-Optimization, Prompt-Engineering, and Structured Json Parsing with GPT-n Models! 🚀
First up, a huge thank-you for making Promptify a hit with over 2.3k stars on GitHub! 🌟
- A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
You're absolutely correct, and I agree that there's potentially a risk of quality loss. But likewise, since these are all intrinsically linked, it may be possible to leverage strength by combining these tasks. I'm unaware of a paper reviewing the reliability and/or performance of LLMs in this specific scenario. If you find any, do share :) With regards to generating JSON responses - there are simple ways to nudge the model and even validate it, using libraries such as https://github.com/promptslab/Promptify, https://github.com/eyurtsev/kor and https://github.com/ShreyaR/guardrails
- Promptify: Prompt Engineering Library
- A Python module to generate optimized prompts, perform prompt engineering, and solve different NLP problems using GPT-n (GPT-3, ChatGPT) based models, returning structured Python objects for easy parsing
Examples: https://github.com/promptslab/Promptify/tree/main/examples
- Promptify - Prompt Engineering for Named Entity Recognition (NER)
In this blog, we are going to look at how Promptify can be used together with LLMs (Large Language Models) to perform named entity recognition (NER).
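Independently of Promptify's own API, the underlying pattern is a prompt template that asks the model for a parseable Python literal, plus safe parsing of the reply. This is an illustrative sketch: `TEMPLATE`, `call_llm`, and the `'E'`/`'T'` keys are assumptions for the example, not Promptify's actual interface:

```python
import ast

# Hypothetical prompt template in the spirit of Promptify's NER examples:
# ask for a Python-literal list of entity dicts. The doubled braces are
# escapes for str.format, so the model sees literal {...}.
TEMPLATE = (
    "Extract named entities from the text below. "
    "Reply ONLY with a Python list of dicts, each {{'E': entity, 'T': type}}.\n"
    "Text: {text}"
)

def call_llm(prompt: str) -> str:
    # Stub standing in for a real GPT-3/ChatGPT call.
    return "[{'E': 'Marie Curie', 'T': 'PERSON'}, {'E': 'Paris', 'T': 'LOC'}]"

def ner(text: str) -> list[dict]:
    raw = call_llm(TEMPLATE.format(text=text))
    # ast.literal_eval parses Python literals without executing code.
    return ast.literal_eval(raw)

entities = ner("Marie Curie moved to Paris in 1891.")
print(entities[0]["E"])  # Marie Curie
```

Returning a structured Python object rather than free text is what makes the result easy to feed into downstream code.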
- [D] What ML dev tools do you wish you'd discovered earlier?
Check Promptify for LLM https://github.com/promptslab/Promptify
- [P] Extracting Causal Chains from Text Using Language Models
Awesome project! I am working on something similar using Promptify (extending this issue -> https://github.com/promptslab/Promptify/issues/3)
- Classification using prompt or fine tuning?
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
finetuner - :dart: Task-oriented embedding tuning for BERT, CLIP, etc.
GPTCache - Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
causal-chains - Library for creating causal chains using language models.
JARVIS - JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
kor - LLM(😽)
truss - Assertions micro-library for Clojure/Script
llm-api-starterkit - Beginner-friendly repository for launching your first LLM API with Python, LangChain and FastAPI, using local models or the OpenAI API.
dynamic-gpt-ui - Dynamic UI generation with GPT-3 (OpenAI)
Learn_Prompting - Prompt Engineering, Generative AI, and LLM Guide by Learn Prompting | Join our discord for the largest Prompt Engineering learning community
ghostwheel - Hassle-free inline clojure.spec with semi-automatic generative testing and side effect detection
LLM-Prompt-Library - Advanced Code and Text Manipulation Prompts for Various LLMs. Suitable for GPT-4, Claude, Llama3, Gemini, and other high-performing open-source LLMs.