louise
mediKanren
|  | louise | mediKanren |
| --- | --- | --- |
| Mentions | 8 | 6 |
| Stars | 91 | 316 |
| Growth | - | - |
| Activity | 8.0 | 8.1 |
| Latest commit | 3 months ago | 11 days ago |
| Language | Prolog | Racket |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
louise
-
Prolog for future AI
And this is a cool repo to track: https://github.com/stassa/louise
-
What do we think about Meta-Interpretive Learning?
From what I understand, this is a relatively new approach to ML. Has anyone heard of it? I was hoping to get a general feel for how people in the industry view the prospects of this approach. If you're curious, here's an implementation of MIL.
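For a concrete feel of what MIL does, here is a toy sketch in Prolog (my own illustration; Louise's actual algorithm, Top Program Construction, is more sophisticated). MIL searches for instantiations of second-order clause templates called metarules; below, the "chain" metarule P(X,Y) :- Q(X,Z), R(Z,Y) is instantiated so that a positive example is proved from background knowledge:

```prolog
% Toy MIL sketch (illustrative only; not Louise's actual code).
% Tested under SWI-Prolog.

parent(alice, bob).               % background knowledge
parent(bob, carol).

body_pred(parent).                % predicate symbols allowed in clause bodies

pos(grandparent(alice, carol)).   % a single positive example

% learn(-Metasub): Metasub records an instantiation P/Q/R of the chain
% metarule P(X,Y) :- Q(X,Z), R(Z,Y) whose body is provable against the
% background knowledge for the positive example.
learn(metasub(P, Q, R)) :-
    pos(E),
    E =.. [P, X, Y],
    body_pred(Q),
    body_pred(R),
    call(Q, X, Z),
    call(R, Z, Y).

% ?- learn(M).
% M = metasub(grandparent, parent, parent),
% i.e. grandparent(X,Y) :- parent(X,Z), parent(Z,Y).
```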
-
Potassco: The Answer Set Solving Collection
Thanks, that's a nice example.
>> For an example. The potential hypotheses here are pre-generated, but you can imagine an algorithm, or adapt an existing one, with a tight generalise/specialise loop.
Yes! I'm thinking of how to adapt Louise (https://github.com/stassa/louise) to do that. The fact that s(CASP) is basically a Prolog-y version of ASP (with constraints) could make it a very natural sort of modification. Or, of course, there's always Well-Founded Semantics (https://www.swi-prolog.org/pldoc/man?section=WFS).
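To make the shape of that loop concrete, here is a minimal sketch, assuming a plain Prolog setting (my own illustration, not code from Louise or s(CASP); it shows only the specialising half of the loop): start from the most general clause body and add literals until no negative example is covered.

```prolog
% Minimal sketch of the specialising half of a generalise/specialise
% loop (illustrative only). Runs under SWI-Prolog.

bird(tweety).  bird(opus).
penguin(opus).

pos(flies(tweety)).          % positive example
neg(flies(opus)).            % negative example

% Literals available for specialisation, sharing the head variable X.
literal(X, bird(X)).
literal(X, \+ penguin(X)).

% covers(+Hypothesis, +Example): the clause flies(X) :- Body proves
% the example against the background knowledge.
covers(X-Body, flies(V)) :-
    copy_term(X-Body, V-Body1),
    forall(member(G, Body1), call(G)).

% specialise(+Hypothesis0, -Hypothesis): add one new body literal.
specialise(X-Body, X-Body1) :-
    literal(X, L),
    \+ member(L, Body),
    append(Body, [L], Body1).

% learn(-Hypothesis): specialise the empty (most general) body until
% no negative example is covered, keeping all positives covered.
learn(H) :-
    learn_loop(_-[], H).

learn_loop(H, H) :-
    forall(pos(E), covers(H, E)),
    \+ (neg(E), covers(H, E)).
learn_loop(H0, H) :-
    neg(E), covers(H0, E),       % over-general: specialise further
    specialise(H0, H1),
    learn_loop(H1, H).

% ?- learn(X-Body).
% X-Body = X-[bird(X), \+penguin(X)],
% i.e. flies(X) :- bird(X), \+ penguin(X).
```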
-
AI Is Ushering in a New Scientific Revolution
Well, since we're going a little mad with speculation in this thread, I have to point out that true one-shot learning (as opposed to "one-billion-plus-one-shot") works just fine, but only in the symbolic machine learning paradigm. For example, see:
https://github.com/stassa/louise#capabilities
In particular, see the second example listed there: a trivial example, but one that current approaches cannot reproduce without big-data pre-training.
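To give the flavour of what such a one-shot result looks like (an illustrative reconstruction, not necessarily the README's exact example): from a single positive example plus background knowledge, an ILP learner with suitable metarules can induce a general, even recursive, program.

```prolog
% Illustrative only (not necessarily the README's exact example).
% Background knowledge:
parent(stathis, kostas).
parent(kostas, akis).

% Given the single positive example
%   ancestor(stathis, akis).
% an ILP learner can induce the recursive program below, which covers
% the example and generalises to chains of any length:
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

% ?- ancestor(stathis, akis).
% true.
```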
I bring up the 80/20% training/test split that is standard in machine learning because I remember an interaction with my supervisor at the start of my PhD. In one of our meetings my supervisor asked me about the details of some experiments I was running, with a system called metagol (linked from the Louise repository above). En passant, I mentioned that I was training with a 20/80% training/test split, and my supervisor stopped me to ask if I thought that was a standard setup for machine learning. Thinking he meant the splitting of my data into training and testing partitions, I replied, a bit bemused, that yes, of course, that's the standard thing. To which my supervisor laughed and replied "I don't think so". Later of course I realised that he meant that the done thing in machine learning is to use most of the data for training and leave as little as possible for testing.
In Inductive Logic Programming, it's typically the other way around, and the datasets are often just a few examples, like a dozen or so. Of course our systems don't do the spectacular, impressive things that deep learning systems do, but then again we don't have a dozen thousand graduates racing to out-do each other with new feats of engineering. Which is a bit of a shame, because I think that if we had no more than a thousand people working on ILP, we'd make huge progress in applications, as well as in our understanding of machine learning in general.
Oh well. It's probably all for the best. Who wants to build genuinely useful and intelligent systems anyway?
-
Annotated implementation of microKanren: an embeddable logic language
Note you can do machine learning of logic programs. My PhD research:
https://github.com/stassa/louise
In which case it _is_ machine learning and it still really works :D
-
A.I. Can Now Write Its Own Computer Code. That’s Good News for Humans
If you want to auto-write Haskell, use MagicHaskeller:
http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.ht...
And if you want to auto-write Prolog, use my own Louise:
https://github.com/stassa/louise
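For anyone curious what that looks like in practice, here is the rough shape of a Louise session as I recall it from the README (file and predicate names may differ from the current version, so treat them as approximate and check the repo for the authoritative steps):

```prolog
% Rough shape of a Louise session (from memory; names approximate).
% Louise runs under SWI-Prolog:
%
%   $ git clone https://github.com/stassa/louise
%   $ cd louise
%   $ swipl -s load_project.pl
%
% With the configuration pointing at an experiment file that defines
% the examples, background knowledge, and metarules for a target:
%
% ?- learn(grandfather/2).
% grandfather(A,B):-father(A,C),parent(C,B).
% true.
```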
-
How Good Is Codex?
My guess is the end result of all this "AI"-assisted code generation is that it will have the same impact on the software engineering industry as spreadsheets had on accounting. I also believe that this AI-powered stuff is a bit of a "two steps forward, one step back" situation, and the real innovation will begin when ideas from tools like Louise [1] are integrated into the approach taken in Codex.
When VisiCalc was released, departments of 30 accountants were reduced to 5 because of the improvement in individual worker efficiency, yet accounting itself remains largely unchanged and accountants are still a respected profession performing important functions. There are plenty of programming problems in the world that simply aren't being solved because we haven't figured out how to reduce the burden of producing the software; code generation will simply increase the output of the individual software developer.
The same forces behind "no-code" are at work here. In fact I see a future where these two solutions intermingle: where "no-code" becomes synonymous with prompt-driven development. As we all know, however, these solutions will only take you so far -- and essentially only allow you to express problems in domains that are already well-solved. We're just expressing a higher level of program abstraction; programs that generate programs. This is a good thing and it is not a threat to the existence of our industry. Even in Star Trek they still have engineers who fix their computers...
[1] - https://github.com/stassa/louise
- Louise: A machine learning system that learns Prolog programs
mediKanren
-
Annotated implementation of microKanren: an embeddable logic language
Not really production, but probably THE most impressive biomedicine research work I've seen (and I'm an academic MD):
https://github.com/webyrd/mediKanren
This is a FOL theorem prover that uses medical research articles as terms. They use it to do genetics and drug repurposing metaresearch. It's like the wet dream of all the biomed machine learning fanboys out there, except that:
1. it's not machine learning
and
2. it really works
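To make "reasoning over biomedical knowledge" concrete, here is a toy Prolog analogue of the kind of query involved (my own sketch; mediKanren itself is written in Racket on miniKanren, and these predicate names are invented): treat statements extracted from the literature as facts, and query for drug-repurposing candidates.

```prolog
% Toy Prolog analogue of a drug-repurposing query (illustrative only;
% predicate names invented, not mediKanren's actual API).

% Facts as they might be extracted from research articles.
inhibits(drug_a, gene_x).
upregulates(gene_x, protein_p).
causes(protein_p, disease_d).

% A drug is a repurposing candidate for a disease if it inhibits a
% gene whose product contributes to the disease.
candidate(Drug, Disease) :-
    inhibits(Drug, Gene),
    upregulates(Gene, Protein),
    causes(Protein, Disease).

% ?- candidate(Drug, disease_d).
% Drug = drug_a.
```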
-
Human Knowledge and PhDs
And wow, he uses logic programming to deduce a diagnosis from the facts https://github.com/webyrd/mediKanren ... and used it to find out what his son had https://www.statnews.com/2019/07/25/ai-expert-writing-code-save-son/
- With a nudge from AI, ketamine emerges as a potential rare disease treatment
-
William Byrd on Logic and Relational Programming, MiniKanren (2014)
Hi Kamaal!
I know Cisco is using core.logic, which is David Nolen's Clojure variant of miniKanren, in their ThreatGrid product. I think the Enterprisey uses of miniKanren are a bit different from the purely relational programming that I find most interesting, though.
Having said that, we are now on our second generation of mediKanren, which is software that performs reasoning over large biomedical knowledge graphs:
https://github.com/webyrd/mediKanren/tree/master/medikanren2
mediKanren is being developed by the Hugh Kaul Precision Medicine Institute at the University of Alabama at Birmingham (HKPMI). HKPMI is run by Matt Might, who you may know from his work on abstract interpretation and parsing with derivatives, or from his more recent work on precision medicine. mediKanren is part of the NIH NCATS Biomedical Data Translator Project, and is funded by NCATS:
https://ncats.nih.gov/translator
Greg Rosenblatt, who sped up Barliman's relational interpreter by many orders of magnitude, has been hacking on dbKanren, which augments miniKanren with automatic goal reordering, stratified queries/aggregation, a graph database engine, and many other goodies. dbKanren is the heart of mediKanren 2.
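To see why automatic goal reordering matters, compare conjunct orders in plain Prolog, where the programmer has to pick the order by hand (a generic illustration, not dbKanren code):

```prolog
% Generic illustration of why goal ordering matters (not dbKanren
% code). Both clauses compute the same relation, but enumerate very
% different intermediate sets when edge/2 is large.

edge(a, b).  edge(b, c).  edge(c, d).   % imagine millions of facts

% Enumerates every edge X-Y, then checks each Y against d:
reaches_d(X) :- edge(X, Y), edge(Y, d).

% Starts from the goal with the bound argument, so only edges
% actually ending in d are considered first:
reaches_d_fast(X) :- edge(Y, d), edge(X, Y).

% miniKanren with automatic goal reordering (as in dbKanren) aims to
% pick such an order itself, instead of leaving it to the programmer.
```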
I can imagine co-writing a book on mediKanren 2, and its uses for precision medicine...
Cheers,
--Will
-
Bertrand Might: Life, legacy and next steps
The Precision Medicine Institute that I now run produces mediKanren: https://github.com/webyrd/mediKanren
It's an open source logical reasoning engine (read: 1960's AI) for drug repurposing that we deploy routinely to help patients.
There is always a need for better relationalization of biological data sets that feed such tools too.
For example, SemMedDB is really showing its age for NLP of the scientific literature, and yet it is still astonishingly useful for helping patients even as-is.
What are some alternatives?
edcg - Extended DCG syntax for Prolog by Peter Van Roy
racketscript - Racket to JavaScript Compiler
nests-and-insects - A Roguelike Tabletop RPG
microKanren - The implementation of microKanren, a featherweight relational programming language
muKanren_reading - [Mirror] A close reading of the μKanren paper.
awesome-racket - A curated list of awesome Racket frameworks, libraries and software, maintained by Community
Gleemin - A Magic: the Gathering™ expert system
gui
thelma - An implementation of Meta-Interpretive Learning
frog - Frog is a static blog generator implemented in Racket, targeting Bootstrap and able to use Pygments.
microKanren-py - Simple python3 implementation of microKanren with lots of type annotations for clarity
Shin-Barliman - Research project: Program synthesis using updated interface, template and types.