kor
llm-api-starterkit
kor | llm-api-starterkit | |
---|---|---|
8 | 2 | |
1,520 | 86 | |
- | - | |
6.9 | 5.2 | |
2 days ago | 10 months ago | |
Python | Python | |
MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kor
-
Pydentic in prompt engineering
Check out kor
-
27-Jun-2023
Extract structured data from text using LLMs (https://github.com/eyurtsev/kor)
- Kor: Extract structured data using LLMs
-
Guidance on creating a very lightweight model that does one task very well
Check out https://github.com/eyurtsev/kor
-
A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
You're absolutely correct, and I agree that there's potentially a risk of quality loss. But likewise, since these are all intrinsically linked, it may be possible to leverage strength by combining these tasks. I'm unaware of a paper reviewing the reliability and/or performance of LLMs in this specific scenario. If you find any, do share :) With regards to generating JSON responses - there are simple ways to nudge the model and even validate it, using libraries such as https://github.com/promptslab/Promptify, https://github.com/eyurtsev/kor and https://github.com/ShreyaR/guardrails
-
Information extraction in large documents with LLMs
Currently, I'm experimenting with GPT-3.5-turbo in conjunction with the kor library (langchain for information extraction) to define a prompt template with various examples of what I'm looking for.
-
RasaGPT: First headless LLM chatbot built on top of Rasa, Langchain and FastAPI
yes. there are a few approaches which i intend to take and some helpful resources:
You could implement a Dual LLM Pattern Model https://simonwillison.net/2023/Apr/25/dual-llm-pattern/
You could also leverage a concept like Kor which is a kind of pydantic for LLMs: https://github.com/eyurtsev/kor
in short and as mentioned in the README.md this is absolutely vulnerable to prompt injection. I think this is not a fully solved issue but some interesting community research has been done to help address these things in production
llm-api-starterkit
What are some alternatives?
Promptify - Prompt Engineering | Prompt Versioning | Use GPT or other prompt based models to get structured output. Join our discord for Prompt-Engineering, LLMs and other latest research
motorhead - 🧠 Motorhead is a memory and information retrieval server for LLMs.
langcorn - ⛓️ Serving LangChain LLM apps and agents automagically with FastApi. LLMops
lambdaprompt - λprompt - A functional programming interface for building AI systems
tonic_validate - Metrics to evaluate the quality of responses of your Retrieval Augmented Generation (RAG) applications.
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
lanarky - The web framework for building LLM microservices
sketch - AI code-writing assistant that understands data content
deeplake - Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
rasa-haystack
aegis - Self-hardening firewall for large language models