Promptify
Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output. Join our Discord for Prompt-Engineering, LLMs and other latest research (by promptslab)
guardrails
Adding guardrails to large language models. (by guardrails-ai)
| | Promptify | guardrails |
|---|---|---|
| Mentions | 29 | 13 |
| Stars | 3,020 | 3,284 |
| Growth | 3.8% | 9.8% |
| Activity | 8.5 | 9.9 |
| Last commit | about 1 month ago | 6 days ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Promptify
Posts with mentions or reviews of Promptify. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-13.
- Promptify 2.0: More Structured, More Powerful LLMs with Prompt-Optimization, Prompt-Engineering, and Structured Json Parsing with GPT-n Models! 🚀
First up, a huge thank you for making Promptify a hit with over 2.3k stars on GitHub! 🌟
- A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
You're absolutely correct, and I agree there's a risk of quality loss. But since these tasks are all intrinsically linked, it may be possible to leverage their combined strengths. I'm unaware of a paper reviewing the reliability or performance of LLMs in this specific scenario; if you find any, do share :) As for generating JSON responses, there are simple ways to nudge the model and even validate its output, using libraries such as https://github.com/promptslab/Promptify, https://github.com/eyurtsev/kor and https://github.com/ShreyaR/guardrails
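The "nudge and validate" idea from the comment above can be sketched without any of those libraries: put the schema in the prompt, parse the reply as JSON, and check the expected keys. The `complete` function below is a hypothetical stand-in for a real LLM call, not part of any of the linked projects.

```python
import json

# Hypothetical stand-in for a real LLM completion call; in practice this
# would hit the OpenAI API (or similar) with the given prompt.
def complete(prompt: str) -> str:
    return '{"name": "Ada Lovelace", "occupation": "mathematician"}'

# The "nudge": an explicit schema instruction prepended to the prompt.
SCHEMA_HINT = (
    'Respond ONLY with a JSON object containing the keys '
    '"name" and "occupation". No prose, no markdown fences.'
)

def extract_person(text: str) -> dict:
    """Nudge the model toward JSON, then validate the keys it returned."""
    raw = complete(f"{SCHEMA_HINT}\n\nText: {text}")
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    missing = {"name", "occupation"} - data.keys()
    if missing:
        raise ValueError(f"model omitted keys: {missing}")
    return data

person = extract_person("Ada Lovelace was a mathematician.")
print(person["name"])  # Ada Lovelace
```

Libraries like Promptify, kor and guardrails wrap this pattern with templated prompts and richer validation, but the core loop is the same: constrain the prompt, parse, then verify.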
- Promptify: Prompt Engineering Library
- A Python module to generate optimized prompts, handle prompt engineering, and solve different NLP problems using GPT-n (GPT-3, ChatGPT) based models, returning structured Python objects for easy parsing
Examples: https://github.com/promptslab/Promptify/tree/main/examples
- Promptify - Prompt Engineering for Named Entity Recognition (NER)
In this blog, we try to understand how Promptify can be used with LLMs (large language models) to perform named entity recognition (NER).
- [D] What ML dev tools do you wish you'd discovered earlier?
Check out Promptify for LLMs: https://github.com/promptslab/Promptify
- [P] Extracting Causal Chains from Text Using Language Models
Awesome project! I am working on something similar using Promptify (extending this PR -> https://github.com/promptslab/Promptify/issues/3)
- Classification using prompt or fine tuning?
guardrails
Posts with mentions or reviews of guardrails. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-10.
- Guardrails AI
- Does anyone have an example of a langchain based customer facing agent like a cashier/waitress?
- Is there a UI that can limit LLM tokens to a preset list?
- A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
You're absolutely correct, and I agree there's a risk of quality loss. But since these tasks are all intrinsically linked, it may be possible to leverage their combined strengths. I'm unaware of a paper reviewing the reliability or performance of LLMs in this specific scenario; if you find any, do share :) As for generating JSON responses, there are simple ways to nudge the model and even validate its output, using libraries such as https://github.com/promptslab/Promptify, https://github.com/eyurtsev/kor and https://github.com/ShreyaR/guardrails
- Ask HN: People who were laid off or quit recently, how are you doing?
- Ask HN: AI to study my DSL and then output it?
There are a couple of different approaches:
- Use multi-shot prompting with something like guardrails, re-prompting a commercial model until it works. [1]
- Use a local model with a final layer that steers token selection toward syntactically valid tokens. [2]
[1] https://github.com/ShreyaR/guardrails
[2] "Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema" @ https://github.com/newhouseb/clownfish
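The first approach above, multi-shot prompting until the output validates, reduces to a small retry loop: parse the reply, and on failure feed the error back into the next prompt. This is a generic sketch of that pattern, not the guardrails API; `FlakyModel` is a hypothetical stub that fails once before producing valid JSON.

```python
import json

# Hypothetical model stub: returns invalid JSON on the first call, then a
# valid reply. In practice this would be a call to a commercial model's API.
class FlakyModel:
    def __init__(self):
        self.calls = 0

    def complete(self, prompt: str) -> str:
        self.calls += 1
        if self.calls == 1:
            return "Sure! Here is the JSON: {'bad': quotes}"
        return '{"expr": "x + 1", "valid": true}'

def prompt_until_valid(model, base_prompt: str, max_shots: int = 3) -> dict:
    """Re-prompt, appending the parse error, until the reply is valid JSON."""
    prompt = base_prompt
    last_err = None
    for _ in range(max_shots):
        reply = model.complete(prompt)
        try:
            return json.loads(reply)
        except ValueError as err:  # json.JSONDecodeError subclasses ValueError
            last_err = err
            prompt = (f"{base_prompt}\n\nYour previous reply failed to parse "
                      f"({err}). Reply with valid JSON only.")
    raise RuntimeError(f"no valid output after {max_shots} shots: {last_err}")

result = prompt_until_valid(FlakyModel(), "Emit the DSL node as JSON.")
```

guardrails layers schema-aware validation and re-asking on top of this idea; the second approach (steering token selection, as in clownfish) avoids the retries entirely by making invalid tokens unreachable during decoding.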
- Introducing: 🤖 Megabots - State-of-the-art, production ready full-stack LLM apps made mega-easy with LangChain and FastAPI
👍 validate and correct the outputs of LLMs using guardrails
- For consistent output from vicuna 13b
- [D] Is all the talk about what GPT can do on Twitter and Reddit exaggerated or fairly accurate?
not vouching for it, but I know this is at least a thing that exists and I like the general idea: https://github.com/shreyar/guardrails
- Introducing Agents in Haystack: Make LLMs resolve complex tasks