pasv vs Coq-HoTT

| | pasv | Coq-HoTT |
|---|---|---|
| Mentions | 5 | 4 |
| Stars | 44 | 1,218 |
| Growth | - | 0.7% |
| Activity | 10.0 | 9.8 |
| Latest commit | almost 7 years ago | 5 days ago |
| Language | Common Lisp | Coq |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pasv
-
Verified Rust for low-level systems code
Then you go to the more elaborate prover. We used the Boyer-Moore prover for that. After proving that A implies B, that implication became a theorem/rule the fast prover could use whenever it matched. So if the same situation came up again in code, the rule would be reused automatically.
I notice that the examples for this verified Rust system don't seem to include a termination check for loops. You prove that a loop terminates by demonstrating that some nonnegative integer expression decreases on each iteration and never goes negative. If you can't prove that easily, the code has no place in a mission-critical system.
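The decreasing-measure argument can be sketched in plain Python. This is a toy: the runtime `assert` on the measure stands in for what a verifier proves statically (Verus, as I understand it, expresses this with a `decreases` clause), and the `gcd` example is mine, not from the system under discussion.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm, with a runtime check of the termination measure."""
    assert a >= 0 and b >= 0
    while b != 0:
        variant = b              # the nonnegative measure that must shrink
        a, b = b, a % b
        # the measure decreased and never went negative
        assert 0 <= b < variant
    return a
```

Because `a % b` always lies in `[0, b)`, the measure strictly decreases and stays nonnegative, which is exactly the termination argument.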
Microsoft's F* is probably the biggest success in this area.[3]
[1] https://archive.org/details/manualzilla-id-5928072/page/n3/m...
[2] https://github.com/John-Nagle/pasv
[3] https://www.microsoft.com/en-us/research/video/programming-w...
-
Why Is Common Lisp Not the Most Popular Programming Language?
This is a generic problem with macro systems, of course, which is why C deliberately had a weak macro system.
LISP is a blast from the past. It's fun for retro reasons, but things have moved on.
[1] https://github.com/John-Nagle/nqthm
[2] https://github.com/John-Nagle/pasv/tree/master/src/CPC4
-
Will Computers Redefine the Roots of Math?
> In the 70's, this wasn't considered a 'real' proof.
I ran into that decades ago. We used the original Boyer-Moore theorem prover [1] as part of a program verification system. The system had two provers: the Nelson-Oppen simplifier (an early SMT-style solver) to automatically handle the easy proofs, and the Boyer-Moore system for the hard ones. To make sure that both had consistent theories, I used the Boyer-Moore prover to prove the "axioms" of the Nelson-Oppen system, especially what are usually called McCarthy's axioms (the ones that use Select and Store) for arrays.
The Boyer-Moore system uses a strictly constructive approach to mathematics. It starts from something like Peano arithmetic (there is a number zero, and an operation add 1) and builds up number theory. So I added a concept of arrays, represented as (index, value) tuples in sorted order, and was able to prove the usual "axioms" for arrays as theorems.
The machine proofs were long and involved much case analysis.[2] I submitted a paper to JACM in the early 1980s and got back reviews saying that it was just too long and inelegant to be at the fundamentals of computer science. That might not be the case today.
A few years back, I put the Boyer-Moore prover on GitHub, after getting it to work with GNU Common Lisp. So you can still run all this 1980s stuff. It's much faster today. It took about 45 minutes to grind through these proofs on a VAX 11/780 in the early 1980s. Now it takes about a second.
The proof log [2] is amusing. It's building up number theory from a very low level, starting by proving that X + 0 = X. Each theorem proved can be used as a lemma by later theorems, so you guide the process by giving it problems to solve in the right order. By line 1900, it's proving that multiplication distributes over addition. Array theory, the new stuff, starts around line 2994.
The reason this is so complicated and ugly is that there's no use of axiomatic set theory. Arrays are easy if you have sets. But there are no sets here. Sets don't fit well into this strict constructive theory, because EQUAL means identical. You can't create weaker definitions of equality which say that two sets are equal if they contain the same elements regardless of order, because that introduces a risk of unsoundness. Effort must be put into keeping the tuples of the array representation in ascending order by subscript, which implies much case analysis. Mathematicians hate case analysis. Computers are good at it.
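The ordered-tuple representation and its case analysis can be sketched in Python. This is my toy model, not the Boyer-Moore formalization; the names `store` and `select` follow McCarthy's operations, and the three-way case split is the kind of analysis the text describes.

```python
def store(arr, i, v):
    """Store v at index i in an array modeled as (index, value) pairs
    kept in ascending index order. Three cases keep the order invariant."""
    for k, (j, _) in enumerate(arr):
        if j == i:                        # case 1: index present -> replace
            return arr[:k] + [(i, v)] + arr[k + 1:]
        if j > i:                         # case 2: passed the slot -> insert
            return arr[:k] + [(i, v)] + arr[k:]
    return arr + [(i, v)]                 # case 3: beyond the end -> append

def select(arr, i, default=0):
    """Read index i; unset indices read as a default value."""
    for j, v in arr:
        if j == i:
            return v
    return default
```

McCarthy's "axioms" then hold as plain equalities over this representation: `select(store(a, i, v), i) == v`, and for `j != i`, `select(store(a, i, v), j) == select(a, j)`.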
[1] https://github.com/John-Nagle/nqthm
[2] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...
-
What I've Learned About Formal Methods in Half a Year
behave as if it does. The other extreme would be a GUI program.
[1] http://www.animats.com/papers/verifier/verifiermanual.pdf
[2] https://github.com/John-Nagle/pasv
-
Grothendieck's Approach to Equality [pdf]
which proves that the storing operation always produces a validly ordered array. That's essentially a code proof of correctness for a recursive function. The Boyer-Moore prover was able to grind out a proof of that without help. That was a long proof, too.
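A runnable sketch of that invariant in Python (a toy recursive `store` of my own, not the actual Boyer-Moore definitions): where the prover ground out a proof, this just brute-force checks the ordering property on small cases.

```python
from itertools import combinations

def store(arr, i, v):
    """Recursive store on an (index, value) list kept in ascending order."""
    if not arr or arr[0][0] > i:
        return [(i, v)] + arr            # insert before a larger index
    if arr[0][0] == i:
        return [(i, v)] + arr[1:]        # replace an existing entry
    return [arr[0]] + store(arr[1:], i, v)   # keep scanning

def ordered(arr):
    """The invariant: indices strictly ascending."""
    return all(a[0] < b[0] for a, b in zip(arr, arr[1:]))

# exhaustive check on small arrays, standing in for the machine proof
for idxs in combinations(range(4), 2):
    arr = [(j, j * 10) for j in idxs]
    for i in range(4):
        assert ordered(store(arr, i, 99))
```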
I submitted this to JACM. It was rejected, mostly for ugliness. The concept that you needed all this heavy machine-driven case analysis to prove a nice simple "axiom" upset mathematicians. Today it would be less of an issue. People are now more used to proofs that take a lot of grinding through cases.
You could build up set theory this way, via ordered lists, if you wanted.
So that's a classic example of what happens if you take "equal" seriously.
[1] http://www-formal.stanford.edu/jmc/towards.pdf
[2] https://theory.stanford.edu/~arbrad/papers/arrays.pdf
[3] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...
[4] https://github.com/John-Nagle/nqthm
Coq-HoTT
-
What do we mean by "the foundations of mathematics"?
https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t... :
> Today, Zermelo–Fraenkel set theory [ZFC], with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics.
Foundation of mathematics: https://en.wikipedia.org/wiki/Foundations_of_mathematics
Implementation of mathematics in set theory:
> The implementation of a number of basic mathematical concepts is carried out in parallel in ZFC (the dominant set theory) and in NFU, the version of Quine's New Foundations shown to be consistent by R. B. Jensen in 1969 (here understood to include at least axioms of Infinity and Choice).
> What is said here applies also to two families of set theories: on the one hand, a range of theories including Zermelo set theory near the lower end of the scale and going up to ZFC extended with large cardinal hypotheses such as "there is a measurable cardinal"; and on the other hand a hierarchy of extensions of NFU which is surveyed in the New Foundations article. These correspond to different general views of what the set-theoretical universe is like.
IEEE-754 specifies that float64s have ±infinity; languages such as Python instead raise ZeroDivisionError on division by zero. A symbolic CAS with MPFR needn't be limited to float64s.
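The distinction is visible from stdlib Python: float64 arithmetic follows IEEE-754 and overflows to ±infinity, while Python's division operator chooses to raise ZeroDivisionError rather than return infinity (a language decision layered on top of IEEE-754, not required by it).

```python
import math

inf = float("inf")                    # IEEE-754 positive infinity
assert math.isinf(1e308 * 10)         # overflow rounds to +inf
assert 1e308 * 10 == inf
assert -1.0 * inf == float("-inf")    # the sign of infinity is tracked

# Python itself raises instead of returning inf for x / 0.0:
try:
    1.0 / 0.0
except ZeroDivisionError:
    pass
```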
HoTT in Coq: Coq-HoTT: https://github.com/HoTT/Coq-HoTT
leanprover-community/mathlib4
-
Will Computers Redefine the Roots of Math?
For those interested in formalisation of homotopy type theory, there are several (more or less) active and developed libraries. To mention a few:
UniMath (https://github.com/UniMath/UniMath, mentioned in the article)
Coq-HoTT (https://github.com/HoTT/Coq-HoTT)
agda-unimath (https://unimath.github.io/agda-unimath/)
cubical agda (https://github.com/agda/cubical)
All of these are open to contributions, and there are lots of useful basic things that haven't been done and which I think would make excellent semester projects for a cs/math undergrad (for example).
-
Homotopy Type Theory
HoTT is somewhat independent of the choice of proof assistant.
Coq: https://github.com/HoTT/HoTT
Lean: https://github.com/gebner/hott3
I don't know what you mean by "blue screened", or by results being on the way. As far as I can tell, most of the non-foundational work in these libraries (what I assume you mean by "results") is basic properties of basic mathematical concepts being rebuilt on HoTT.
-
What is the benefit of using a text editor like MiKTeX, Texmaker, etc. over Overleaf?
The major one for me is version control (git). Imagine having to write a book like HoTT without revision history, an easy way to work on your changes without interfering with anyone else's work, and then easily merging everything together.
What are some alternatives?
UniMath - This Coq library aims to formalize a substantial body of mathematics using the univalent point of view.
lean - Lean Theorem Prover
cubical - An experimental library for Cubical Agda
verus-analyzer - A Verus compiler front-end for IDEs (derived from rust-analyzer)
Agda - Agda formalisation of the Introduction to Homotopy Type Theory
cubicaltt - Experimental implementation of Cubical Type Theory
hott3 - HoTT in Lean 3
mathlib - Lean 3's obsolete mathematical components library: please use mathlib4
TypeTopology - Logical manifestations of topological concepts, and other things, via the univalent point of view.
nqthm - the original Boyer-Moore theorem prover, from 1992