| | pal | prompt-lib |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 436 | 98 |
| Growth | 1.4% | - |
| Latest Commit | 10 months ago | 6 months ago |
| Activity | 3.1 | 7.2 |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pal
Prompt Engineering Guide: Guides, papers, and resources for prompt engineering
Using the terminology that I'm working with, this is an example of a second-order analytic augmentation!
Here's another example of second-order analytic augmentation, PAL: https://reasonwithpal.com
And a third-order one, Toolformer: https://arxiv.org/abs/2302.04761
The difference isn't in what is going on, but in framing the approach within the analytic-synthetic distinction developed by Kant and the analytic philosophers influenced by his work. There's a dash of functional programming thrown in for good measure!
I have scribbled on a print-out of the article on my desk:
Nth Order
- [R] Faithful Chain-of-Thought Reasoning
GPT-3: Techniques to improve reliability
GitHub: https://github.com/reasoning-machines/pal
tl;dr -- LLMs are bad at basic arithmetic and logic (as their opening examples with math word problems show), but they do much better if, instead of asking them for the answer, you ask them for code that computes the answer. You then evaluate or run that code to get the answer.
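The idea above can be sketched in a few lines of Python. This is a minimal illustration, not PAL's actual implementation: `fake_llm` is a hypothetical stand-in for a real completion-API call, hard-coded to return the kind of program PAL prompts a model to emit, and the `answer` variable convention is assumed for the sketch.

```python
# PAL-style sketch: ask the model for code, not an answer, then run the code.

QUESTION = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "# Write Python that stores the result in a variable named `answer`.\n"
)

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; returns a generated program.
    return (
        "tennis_balls = 5\n"
        "bought_balls = 2 * 3\n"
        "answer = tennis_balls + bought_balls\n"
    )

def solve(question: str) -> int:
    code = fake_llm(question)
    scope: dict = {}
    exec(code, scope)       # run the generated program locally
    return scope["answer"]  # the arithmetic is done by Python, not the LLM

print(solve(QUESTION))  # -> 11
```

The point is that the model only has to translate the word problem into code, a task it is comparatively good at, while the interpreter does the arithmetic it is comparatively bad at.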
prompt-lib
Using Da-Vinci-003 in a Jupyter Notebook
While it's a bit of overkill, prompt-lib provides a notebook to do this: https://github.com/reasoning-machines/prompt-lib/blob/main/notebooks/QueryOpenAI.ipynb
What are some alternatives?
openai-cookbook - Examples and guides for using the OpenAI API
HugNLP - CIKM2023 Best Demo Paper Award. HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformer. Please hugging for NLP now!😊
qagnn - [NAACL 2021] QAGNN: Question Answering using Language Models and Knowledge Graphs 🤖
self-refine - LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
memprompt - A method to fix GPT-3 after deployment with user feedback, without re-training.
Awesome-Prompt-Engineering - This repository contains hand-curated resources for Prompt Engineering with a focus on Generative Pre-trained Transformers (GPT), ChatGPT, PaLM, etc.
empirical-philosophy - A collection of empirical experiments using large language models and other neural network architectures to test the usefulness of metaphysical constructs.
temporal-graph-gen - Pre-trained models for our work on Temporal Graph Generation
graph-of-thoughts - Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models"
Prompt-Engineering-Guide - 🐙 Guides, papers, lecture, notebooks and resources for prompt engineering
knowledge-rumination - [EMNLP 2023] Knowledge Rumination for Pre-trained Language Models