DoctorGPT
💻📚💡 DoctorGPT provides advanced LLM prompting for PDFs and webpages. (by FeatureBaseDB)
It may be interesting to think about how a simple chat history, accumulated over time, provides some of the knowledge needed for improved interactions — a simpler relative of the more complex patterns outlined here.
Here's a similar paper I ran across this morning: https://arxiv.org/abs/2305.18323. The GitHub repo is here: https://github.com/billxbf/ReWOO
I indexed both documents with my own project, which uses semantic graphs to help with prompt assembly: https://github.com/FeatureBaseDB/DoctorGPT. DoctorGPT doesn't have dynamic prompt chaining yet, but I'm working on it. I hesitate to post any of the analysis of these papers produced by DoctorGPT here because it would be generated by the LLM, not by me, and some people seem to take issue with that given this is a human forum.
My sense is that semantic knowledge graphs (SKGs) are important for refining questions, offering alternative approaches, managing context, reflecting on LLM responses, and more.
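To make that concrete, here's a minimal sketch of what semantic-graph-aided prompt assembly can look like: pull only the facts connected to the question's topic into the context instead of stuffing the whole corpus into the window. The class, node names, and relations below are illustrative assumptions, not DoctorGPT's actual schema or API.

```python
from collections import defaultdict

class SemanticGraph:
    """Tiny concept graph: nodes are concepts, edges carry relation labels."""

    def __init__(self):
        self.edges = defaultdict(list)  # concept -> [(relation, concept)]

    def add(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighborhood(self, concept):
        """Facts directly connected to a concept, rendered as short sentences."""
        return [f"{concept} {rel} {dst}" for rel, dst in self.edges[concept]]

def assemble_prompt(graph, question, topic):
    # Select only the subgraph around the topic as context for the LLM.
    context = "\n".join(graph.neighborhood(topic))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

g = SemanticGraph()
g.add("ReWOO", "decouples", "reasoning from observations")
g.add("ReWOO", "reduces", "token consumption")
prompt = assemble_prompt(g, "How does ReWOO cut token usage?", "ReWOO")
```

The design point is that the graph acts as a retrieval index: context selection becomes a graph traversal rather than a dump of everything indexed.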
I recognize there's plenty of catnip here when it comes to calling this "engineering" or not; however, whatever you want to call it (prompt fiddling?), the techniques are crucial if you want reasonably consistent output from current-state LLMs. As models improve, concerns about context window limitations will diminish, and it will become easier to discern user intent.
These are good straight-to-the-point guides:
- Prompt Engineering by BrexHQ: https://github.com/brexhq/prompt-engineering
- OpenAI guidance: https://help.openai.com/en/articles/6654000-best-practices-f...
- https://devblogs.microsoft.com/dotnet/gpt-prompt-engineering...
- (great examples): https://www.deeplearning.ai/short-courses/chatgpt-prompt-eng...
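Two techniques those guides repeatedly emphasize are fencing untrusted input with delimiters and demanding a fixed output format. A hedged sketch (the template wording and tag names are my own, not taken from any of the linked guides):

```python
def build_prompt(document: str) -> str:
    """Wrap a document in delimiter tags and ask for structured output."""
    return (
        "Summarize the text between <doc> tags in one sentence, "
        "then list at most three key terms.\n"
        'Respond as JSON with keys "summary" and "terms".\n'
        f"<doc>{document}</doc>"
    )

prompt = build_prompt(
    "ReWOO decouples reasoning from tool observations to cut token use."
)
```

Delimiters make it harder for instructions embedded in the document to hijack the prompt, and a declared output schema makes responses parseable downstream.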