guardrails vs empirical-philosophy
| | guardrails | empirical-philosophy |
| --- | --- | --- |
| Mentions | 13 | 9 |
| Stars | 3,284 | 141 |
| Stars growth | 9.8% | - |
| Activity | 9.9 | 2.5 |
| Latest commit | 6 days ago | 12 months ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guardrails
- Guardrails AI
- Does anyone have an example of a langchain based customer facing agent like a cashier/waitress?
- Is there a UI that can limit LLM tokens to a preset list?
- A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
You're absolutely correct, and I agree that there's potentially a risk of quality loss. But likewise, since these are all intrinsically linked, it may be possible to leverage strength by combining these tasks. I'm unaware of a paper reviewing the reliability and/or performance of LLMs in this specific scenario. If you find any, do share :) With regards to generating JSON responses - there are simple ways to nudge the model and even validate it, using libraries such as https://github.com/promptslab/Promptify, https://github.com/eyurtsev/kor and https://github.com/ShreyaR/guardrails
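The nudge-and-validate idea the comment describes can be sketched without committing to any of the linked libraries' actual APIs: parse the model's reply as JSON, check it against a minimal expectation, and raise a descriptive error so the caller can re-prompt. Everything below is illustrative; `validate_reply` is not a function from Promptify, kor, or guardrails.

```python
import json

def validate_reply(reply: str, required_keys: set[str]) -> dict:
    """Parse a model reply as JSON and check the expected keys are present.

    Raises ValueError (json.JSONDecodeError is a subclass) when the reply
    is unusable, so a caller can re-prompt with the error message appended.
    """
    data = json.loads(reply)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

# A well-formed reply passes validation and comes back as a dict.
ok = validate_reply('{"name": "Ada", "age": 36}', {"name", "age"})
```

A malformed or incomplete reply raises, and feeding that error text back into the next prompt is the "nudge" these libraries automate.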
- Ask HN: People who were laid off or quit recently, how are you doing?
- Ask HN: AI to study my DSL and then output it?
There are a couple different approaches:
- Use multi-shot prompting with something like guardrails to re-prompt a commercial model until the output is valid. [1]
- Use a local model with a final layer that steers token selection towards syntactically valid tokens [2]
[1] https://github.com/ShreyaR/guardrails
[2] "Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema" @ https://github.com/newhouseb/clownfish.
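The second approach can be sketched in a few lines: before sampling, mask out every logit whose token would be syntactically invalid at the current position. In a real system the `valid_ids` set would come from a parser of the DSL; this is an illustrative sketch, not clownfish's actual code.

```python
import math

def mask_logits(logits: list[float], valid_ids: set[int]) -> list[float]:
    """Steer token selection by setting invalid tokens' logits to -inf,
    so softmax assigns them zero probability."""
    return [x if i in valid_ids else -math.inf for i, x in enumerate(logits)]

def greedy_pick(logits: list[float]) -> int:
    """Greedy decoding: take the highest-scoring token id."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Token 2 scores highest overall, but only tokens {0, 1} are
# syntactically valid here, so the pick falls back to token 1.
picked = greedy_pick(mask_logits([0.1, 0.5, 2.0], {0, 1}))
```

The same masking composes with temperature sampling or beam search: the invalid tokens simply carry zero probability mass.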
- Introducing 🤖 Megabots - State-of-the-art, production-ready full-stack LLM apps made mega-easy with LangChain and FastAPI
👍 validate and correct the outputs of LLMs using guardrails
- For consistent output from vicuna 13b
- [D] Is all the talk about what GPT can do on Twitter and Reddit exaggerated or fairly accurate?
not vouching for it, but I know this is at least a thing that exists and I like the general idea: https://github.com/shreyar/guardrails
- Introducing Agents in Haystack: Make LLMs resolve complex tasks
empirical-philosophy
- Google “We Have No Moat, and Neither Does OpenAI”
One way that I've been framing this in my head (and in an application I'm building) is that gpt-3 will be useful for analytic tasks whereas gpt-4 will be required for synthetic tasks. I'm using "analytic" and "synthetic" in the same way as in this writeup https://github.com/williamcotton/empirical-philosophy/blob/m...
- How ReAct Prompting Works in Detail
- Ask HN: People who were laid off or quit recently, how are you doing?
Hey Simon! I've been digging your writings on LLMs lately.
I've been having some decent luck with some of the approaches that I've discussed in the following articles and projects:
From Prompt Alchemy to Prompt Engineering: An Introduction to Analytic Augmentation: https://github.com/williamcotton/empirical-philosophy/blob/m...
https://www.williamcotton.com/articles/writing-web-applicati...
https://github.com/williamcotton/transynthetical-engine
I'd love to hear your thoughts on the matter!
- We need to tell people ChatGPT will lie to them, not debate linguistics
You’re not actually doing any research.
Here is my research: https://github.com/williamcotton/empirical-philosophy/blob/m...
It is clear that analytic augmentations will result in more factual information.
Your claims are unfounded and untested.
- ChatGPT and Wolfram Is Insane
Take a look at
https://github.com/williamcotton/empirical-philosophy/blob/m...
https://langchain.readthedocs.io/en/latest/
They can be taught!
- Prompt Engineering Guide: Guides, papers, and resources for prompt engineering
I've been developing a methodology around prompt engineering that I have found very useful:
https://github.com/williamcotton/empirical-philosophy/blob/m...
A few more edits and it's ready for me to submit to HN and then get literally no further attention!
- Professor writes history essays with ChatGPT and has students correct them
That's not a rebuttal of the claim that Bing is more accurate.
A proper rebuttal would involve empirical evidence that Bing is no more accurate than other LLM tools that do not add analytical augmentations such as search results to their prompts.
Based on empirical evidence, I find that analytical augmentations do indeed result in more accurate results:
https://github.com/williamcotton/empirical-philosophy/blob/m...
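The kind of analytical augmentation being argued about here — prepending retrieved or computed facts to the prompt so the model answers from evidence rather than parametric memory — can be sketched generically. This is a minimal illustration of the pattern, not code from the empirical-philosophy repo; how the facts are retrieved (search, a calculator, a database) is left out.

```python
def augment_prompt(question: str, retrieved_facts: list[str]) -> str:
    """Build a prompt that grounds the model in supplied evidence
    instead of asking it to answer from memory alone."""
    context = "\n".join(f"- {fact}" for fact in retrieved_facts)
    return (
        "Answer using ONLY the facts below; say 'unknown' otherwise.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\n"
    )

prompt = augment_prompt(
    "When did the Apollo 11 mission land on the Moon?",
    ["Apollo 11 landed on the Moon on July 20, 1969."],
)
```

The "say 'unknown'" instruction is the part that trades coverage for factuality: without an augmented fact in context, the model is steered away from confabulating an answer.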
What are some alternatives?
lmql - A language for constraint-guided and efficient LLM programming.
magma-chat - Ruby on Rails 7-based ChatGPT Bot Platform
GPTCache - Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
pal - PaL: Program-Aided Language Models (ICML 2023)
JARVIS - JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
datasette-chatgpt-plugin - A Datasette plugin that turns a Datasette instance into a ChatGPT plugin
dynamic-gpt-ui - Dynamic UI generation with GPT-3 (OpenAI)
transynthetical-engine - Applied methods of analytical augmentation to build tools using large-language models.
truss - Assertions micro-library for Clojure/Script
stable-diffusion-webui - Stable Diffusion web UI
ghostwheel - Hassle-free inline clojure.spec with semi-automatic generative testing and side effect detection
serge - A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.