| | PIXIU | LeanDojoChatGPT |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 406 | 99 |
| Growth | 8.9% | - |
| Activity | 8.9 | 5.3 |
| Last commit | 7 days ago | about 1 month ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
-
'A-Team' of Math Proves a Critical Link Between Addition and Sets
Check out this paper:
https://leandojo.org/
People have already trained models to suggest tactics. They then linked one up to ChatGPT to solve proofs interactively.
In this setup, ChatGPT asks the model for tactic suggestions, applies them to the proof, and uses the feedback from Lean to decide on the next step.
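The suggest-apply-check loop described above can be sketched in a few lines. This is a toy illustration, not the real LeanDojo API: `suggest_tactics` and `run_tactic` are hypothetical stand-ins for the suggestion model and for Lean's feedback, hard-coded here for one small goal.

```python
# Toy sketch of the interactive proving loop: a model proposes
# tactics, "Lean" applies them and reports the new proof state,
# and the loop repeats until the goal is closed.

def suggest_tactics(state):
    """Stand-in for the tactic-suggestion model (hard-coded)."""
    playbook = {
        "⊢ a + b = b + a": ["rw [Nat.add_comm]"],
        "⊢ b + a = b + a": ["rfl"],
    }
    return playbook.get(state, [])

def run_tactic(state, tactic):
    """Stand-in for Lean's feedback: the next proof state,
    "done" when the goal closes, or None if the tactic fails."""
    transitions = {
        ("⊢ a + b = b + a", "rw [Nat.add_comm]"): "⊢ b + a = b + a",
        ("⊢ b + a = b + a", "rfl"): "done",
    }
    return transitions.get((state, tactic))

def prove(initial_state, max_steps=10):
    state, proof = initial_state, []
    for _ in range(max_steps):
        if state == "done":
            return proof
        for tactic in suggest_tactics(state):
            next_state = run_tactic(state, tactic)
            if next_state is not None:   # Lean accepted the tactic
                proof.append(tactic)
                state = next_state
                break
        else:
            return None                  # no suggestion worked; give up
    return proof if state == "done" else None

print(prove("⊢ a + b = b + a"))
# → ['rw [Nat.add_comm]', 'rfl']
```

The key point is that Lean's reply after each tactic is ground truth, so the model's suggestions can be wrong without the final proof ever being wrong.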
FYI, the programmatic interface to Lean was written by an OpenAI employee who was on the Lean team a few years ago.
Also, check out Lean's roadmap. They aspire to position Lean as a target for LLMs, because it has been designed for verification from the ground up.
As math and compsci nerds contribute to mathlib, all of those proofs are also building up a huge corpus that will likely be leveraged for both verification and optimization.
If AI can make verification a lot easier, then we’re likely going to see verification change programming similarly to the way it changed electronics.
-
Formalizing 100 Theorems
Good questions!
Nowadays, there is indeed a movement towards interoperability between the various proof assistants. One of these bridge-building projects is called Dedukti: https://deducteam.github.io/ It's a challenging project because the proof assistants currently in use differ in their foundational perspectives and their idioms. The question of how best to formalize mathematics is still an open research problem, just as the question of how best to develop programs is, but we already have quite a good understanding of many important issues in this area.
Also, by now there are attempts to use AI for discovering proofs, see for instance https://leandojo.org/ or https://github.com/lean-dojo/LeanDojoChatGPT.
What are some alternatives?
spacy-llm - 🦙 Integrating LLMs into structured NLP pipelines
upgini - Data search & enrichment library for Machine Learning → Easily find and add relevant features to your ML & AI pipeline from hundreds of public and premium external data sources, including open & commercial LLMs
Baichuan-13B - A 13B large language model developed by Baichuan Intelligent Technology
marqo - Unified embedding generation and search engine. Also available on cloud - cloud.marqo.ai
Baichuan-7B - A large-scale 7B pretraining language model developed by BaiChuan-Inc.
chatgpt-extractive-shortener - Shortens a paragraph of text with ChatGPT, using successive rounds of word-level extractive summarization.
set.mm - Metamath source file for logic and set theory
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
linc - 🔗 LINC: Logical Inference via Neurosymbolic Computation [EMNLP2023]
happy-transformer - Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models.
FlexGen - Running large language models like OPT-175B/GPT-3 on a single GPU. Focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen]