|  | pal | qagnn |
| --- | --- | --- |
| Mentions | 4 | 6 |
| Stars | 436 | 588 |
| Growth | 1.4% | - |
| Activity | 3.1 | 0.0 |
| Last commit | 10 months ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pal
- Prompt Engineering Guide: Guides, papers, and resources for prompt engineering
Using the terminology that I'm working with, this is an example of a second-order analytic augmentation!
Here's another example of a second-order analytic augmentation, PAL: https://reasonwithpal.com
And third-order, Toolformer: https://arxiv.org/abs/2302.04761
The difference isn't in what is going on but rather with framing the approach within the analytic-synthetic distinction developed by Kant and the analytic philosophers who were influenced by his work. There's a dash of functional programming thrown in for good measure!
I have scribbled on a print-out of the article on my desk:
Nth Order
- [R] Faithful Chain-of-Thought Reasoning
- GPT-3: Techniques to improve reliability
GitHub: https://github.com/reasoning-machines/pal
tl;dr -- LLMs are bad at basic arithmetic and logic (as the paper's opening math-word-problem examples show), but they do much better if, instead of asking them for the answer, you ask them for code that computes the answer, and then run or evaluate that code to get the answer.
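The generate-code-then-run-it loop described above can be sketched in a few lines. This is an illustrative mock, not the PAL codebase: `fake_llm` is a hypothetical stand-in for a real model call, and its hard-coded reply imitates the kind of program PAL-style prompting elicits for a standard math word problem.

```python
# Minimal sketch of the PAL idea: ask the model for code, not the answer,
# then execute that code locally.

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real implementation would call an LLM API here.
    # The returned string mimics a model-written program for the question below.
    return (
        "toys_initial = 5\n"
        "toys_from_parents = 2 * 2  # two toys each from mom and dad\n"
        "answer = toys_initial + toys_from_parents\n"
    )

def solve_with_pal(question: str) -> int:
    prompt = f"# Q: {question}\n# Write Python that stores the result in `answer`.\n"
    program = fake_llm(prompt)
    namespace = {}
    # Run the generated code instead of trusting the model's own arithmetic.
    exec(program, namespace)
    return namespace["answer"]

question = ("Shawn has five toys. For Christmas, he got two toys each "
            "from his mom and dad. How many toys does he have now?")
print(solve_with_pal(question))  # → 9
```

Note that in practice the generated program comes from an untrusted model, so a sandboxed interpreter is safer than a bare `exec`.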
qagnn
- [D] What percent of top conference papers fudge results?
QA-GNN (https://github.com/michiyasunaga/qagnn, from a Stanford lab) had some issues with its evaluation, but more importantly the work 'GNN is counting?...' (https://openreview.net/forum?id=hzmQ4wOnSb) showed that better results can be achieved with an extremely simplistic 1-dimensional GNN model, so QA-GNN's performance was mainly down to the data. AFAIK there were discussions around this, but if you go to the QA-GNN repo now, the issues tab has been disabled.
- Stanford's AI Researchers Introduce QA-GNN Model That Jointly Reasons With Language Models And Knowledge Graphs
- [R] Stanford's AI Researchers Introduce QA-GNN Model That Jointly Reasons With Language Models And Knowledge Graphs
What are some alternatives?
openai-cookbook - Examples and guides for using the OpenAI API
kiri - Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.
memprompt - A method to fix GPT-3 after deployment with user feedback, without re-training.
RecBole - A unified, comprehensive and efficient recommendation library
prompt-lib - A set of utilities for running few-shot prompting experiments on large-language models
kiri - Kiri is a visual tool designed for reviewing schematics and layouts of KiCad projects that are version-controlled with Git.
empirical-philosophy - A collection of empirical experiments using large language models and other neural network architectures to test the usefulness of metaphysical constructs.
LinkBERT - [ACL 2022] LinkBERT: A Knowledgeable Language Model Pretrained with Document Links
temporal-graph-gen - Pre-trained models for our work on Temporal Graph Generation
Prompt-Engineering-Guide - Guides, papers, lecture, notebooks and resources for prompt engineering
question_extractor - Generate question/answer training pairs out of raw text.