louise VS Gleemin

Compare louise vs Gleemin and see what their differences are.

louise

Polynomial-time Meta-Interpretive Learning (by stassa)

Gleemin

A Magic: the Gathering™ expert system (by stassa)
                louise                                      Gleemin
Mentions        8                                           4
Stars           91                                          86
Growth          -                                           -
Activity        8.0                                         0.0
Last commit     3 months ago                                about 12 years ago
Language        Prolog                                      Prolog
License         GNU General Public License v3.0 or later    -
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

louise

Posts with mentions or reviews of louise. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-24.
  • Prolog for future AI
    2 projects | /r/prolog | 24 Jun 2023
    and this is a cool repo to track: https://github.com/stassa/louise
  • What do we think about Meta-Interpretive Learning?
    1 project | /r/MLQuestions | 11 Mar 2023
    From what I understand this is a relatively new approach to ML? Has anyone heard of this? I was hoping to get a general feel for what people in the industry think about the prospects of this approach. If you're curious, here's an implementation of MIL.
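
    To give a flavour of the idea, here is a toy, hypothetical sketch of MIL's core move in Prolog: instantiate the second-order variables of a metarule (here the "chain" metarule, P(X,Y) :- Q(X,Z), R(Z,Y)) with background predicates so that a positive example is proved. This is only an illustration, not Louise's actual code or input format; parent/2, background/1 and learn_one/2 are made up for the sketch.

        % Background knowledge: ordinary Prolog facts (made up for this sketch).
        parent(alice, bob).
        parent(bob, carol).

        background(parent/2).

        % learn_one(+Example, -Clause): find predicates P, Q, R such that the chain
        % metarule, with those substitutions, proves the single positive example.
        learn_one(Example, (Head :- Body1, Body2)) :-
            Example =.. [P, X, Y],
            background(Q/2),
            background(R/2),
            Head  =.. [P, A, B],
            Body1 =.. [Q, A, C],
            Body2 =.. [R, C, B],
            % Check that the instantiated clause actually covers the example.
            Goal1 =.. [Q, X, Z], call(Goal1),
            Goal2 =.. [R, Z, Y], call(Goal2).

        % Example query, with a single positive example and no other training data:
        % ?- learn_one(grandparent(alice, carol), Clause).
        % Clause unifies with (grandparent(A,B) :- parent(A,C), parent(C,B)),
        % i.e. the usual grandparent hypothesis.

    A real MIL system such as Louise searches over many metarules, background predicates and examples, and uses negative examples to prune over-general hypotheses; the sketch above only finds clauses that cover one example.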
  • Potassco: The Answer Set Solving Collection
    2 projects | news.ycombinator.com | 20 Dec 2022
    Thanks, that's a nice example.

    >> For an example. The potential hypotheses here are pre-generated, but you can imagine an algorithm, or adapt an existing one, with a tight generalise/specialise loop.

    Yes! I'm thinking of how to adapt Louise (https://github.com/stassa/louise) to do that. The fact that s(CASP) is basically a Prolog-y version of ASP (with constraints) could make it a very natural sort of modification. Or, of course, there's always Well-Founded Semantics (https://www.swi-prolog.org/pldoc/man?section=WFS).

  • AI Is Ushering in a New Scientific Revolution
    1 project | news.ycombinator.com | 8 Jun 2022
    Well, since we're going a little mad with speculation in this thread, I have to point out that true one-shot learning (as opposed to "one-billion-plus-one-shot") works just fine, but only in the symbolic machine learning paradigm. For example, see:

    https://github.com/stassa/louise#capabilities

    In particular the second example listed there. A trivial example, but one that cannot be reproduced by current approaches without big-data pre-training.

    I bring up the 80/20% training/test split that is standard in machine learning because I remember an interaction with my supervisor at the start of my PhD. In one of our meetings my supervisor asked me about the details of some experiments I was running with a system called metagol (linked from the Louise repository above). En passant, I mentioned that I was training with a 20/80% training/test split and my supervisor stopped me to ask if I thought that was a standard setup for machine learning. Thinking he meant the splitting of my data into training and testing partitions, I answered, a bit bemused, that yes, of course, that's the standard thing. To which my supervisor laughed and replied "I don't think so". Later of course I realised that he meant that the done thing in machine learning is to use most of the data for training and leave as little as possible for testing.

    In Inductive Logic Programming, it's typically the other way around, and the datasets often consist of just a few examples, a dozen or so. Of course our systems don't do the spectacular, impressive things that deep learning systems do, but then again we don't have a dozen thousand graduates racing to out-do each other with new feats of engineering. Which is a bit of a shame, because I think that if we had no more than a thousand people working on ILP, we'd make huge progress in applications, as well as in understanding of machine learning in general.

    Oh well. It's probably all for the best. Who wants to build genuinely useful and intelligent systems anyway?

  • Annotated implementation of microKanren: an embeddable logic language
    9 projects | news.ycombinator.com | 25 May 2022
    Note you can do machine learning of logic programs. My PhD research:

    https://github.com/stassa/louise

    In which case it _is_ machine learning and it still really works :D

  • A.I. Can Now Write Its Own Computer Code. That’s Good News for Humans
    1 project | news.ycombinator.com | 10 Sep 2021
    If you want to auto-write Haskell, use MagicHaskeller:

    http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.ht...

    And if you want to auto-write Prolog, use my own Louise:

    https://github.com/stassa/louise

  • How Good Is Codex?
    1 project | news.ycombinator.com | 19 Aug 2021
    My guess is that the end result of all this "AI"-assisted code generation is that it will have the same impact on the software engineering industry as spreadsheets had on accounting. I also believe that this AI-powered stuff is a bit of a "two steps forward, one step back" situation, and the real innovation will begin when ideas from tools like Louise [1] are integrated into the approach taken in Codex.

    When VisiCalc was released, departments of 30 accountants were reduced to 5 because of the improvement in individual worker efficiency; however, accounting itself remains largely unchanged and accountants are still a respected profession who perform important functions. There are plenty of programming problems in the world that simply aren't being solved because we haven't figured out how to reduce the burden of producing the software; code generation will simply increase the output of an individual software developer.

    The same forces behind "no-code" are at work here. In fact I see a future where these two solutions intermingle: where "no-code" becomes synonymous with prompt-driven development. As we all know, however, these solutions will only take you so far -- and essentially only allow you to express problems in domains that are already well-solved. We're just expressing a higher level of program abstraction; programs that generate programs. This is a good thing and it is not a threat to the existence of our industry. Even in Star Trek they still have engineers who fix their computers...

    [1] - https://github.com/stassa/louise

  • Louise: A machine learning system that learns Prolog programs
    1 project | news.ycombinator.com | 28 Dec 2020

Gleemin

Posts with mentions or reviews of Gleemin. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-05-25.
  • Annotated implementation of microKanren: an embeddable logic language
    9 projects | news.ycombinator.com | 25 May 2022
    Here's some stuff I've written in Prolog, some for my own enjoyment, one for my degree project.

    Most of the benefits I found come down to two things:

    a) Prolog, like the various kanrens, is a relational language so a program is effectively a database. There's no need to do anything special to glue together a data layer and a logic layer, because you have both written in Prolog.

    b) Prolog's declarative style makes translating rules and directives to code a breeze. The three projects below are all games and benefit heavily from this feature.

    1. Warhammer 40K simulation:

    https://github.com/stassa/wh40ksim

    Runs simulations of combat between WH40k units.

    2. Gleemin, a Magic: the Gathering expert system:

    https://github.com/stassa/Gleemin

    Doesn't work anymore! Because backwards compatibility. Includes a) a parser for the rules text on M:tG cards written in Prolog's Definite Clause Grammars notation, b) a rules engine and c) a (primitive) AI player. The parser translates rules text from cards into rules engine calls. The cards themselves are Prolog predicates. Your data and your program are one and now you can also do stuff with them.

    3. Nests & Insects, a roguelike TTRPG:

    https://github.com/stassa/nests-and-insects

    WIP! Here I use Prolog to keep the data about my tabletop RPG organised, and also to automatically fill in the character sheets typeset in the rulebook. The Prolog code runs a character creation process and generates completed character sheets. I plan to do the same for enemies' stat blocks, various procedural generation tables, etc. I also use Prolog to typeset the ASCII-styled rulebook, but that's probably not a good application of Prolog.

    You asked about "logic programming" in general and not miniKanren in particular. I haven't actually used miniKanren, so I commented about the logic programming language I've used the most, Prolog. I hope that's not a thread hijack!

    All three of the projects above are basically games. I have more "serious" stuff on my github but I feel a certain shortfall of gravitas, I suppose.
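
    To illustrate the "cards are Prolog predicates" and DCG-parser points above, here is a minimal, hypothetical sketch, not Gleemin's actual code: a card database stored as Prolog facts, plus a tiny DCG that parses one fixed pattern of rules text into a made-up rules-engine call. The names card/4, ability//1, card_ability/2 and deal_damage/3 are invented for the example, and the glue assumes SWI-Prolog built-ins such as split_string/4.

        % Data layer: each card is a Prolog fact (hypothetical card/4 schema).
        card('Grizzly Bears', ['1','G'], [creature, bear], "").
        card('Lava Spike',    ['R'],     [sorcery],        "lava spike deals 3 damage to target player").

        % Logic layer: a DCG for one tiny fragment of rules text.
        % "<two-word name> deals N damage to target player" becomes a rules-engine call.
        ability(deal_damage(Source, N, target_player)) -->
            card_name(Source), [deals], amount(N), [damage, to, target, player].

        card_name(Name) --> [W1, W2], { atomic_list_concat([W1, W2], ' ', Name) }.

        amount(N) --> [W], { atom_number(W, N) }.

        % Glue: look a card up in the database and parse its rules text into an engine call.
        card_ability(Card, Ability) :-
            card(Card, _Cost, _Types, Text),
            split_string(Text, " ", "", Strings),
            maplist(atom_string, Tokens, Strings),
            phrase(ability(Ability), Tokens).

        % ?- card_ability('Lava Spike', A).
        % A = deal_damage('lava spike', 3, target_player).

    The point about data and program being one shows up in card_ability/2: the same Prolog database holds the cards (data) and the grammar and engine calls (logic), so there is no separate data layer to glue on.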

  • 50 Years of Prolog and Beyond
    4 projects | news.ycombinator.com | 29 Jan 2022
    official name):

    https://github.com/stassa/Gleemin/blob/master/mgl_interprete...

    The first two-thirds of the source in the linked file is a grammar of a subset

  • An embeddable Prolog scripting language for Go
    8 projects | news.ycombinator.com | 26 Jan 2022
    I've been keeping an eye on this to use for the rules engine in a card game I'm writing[0]. Very excited to get back into using Prolog; I think it's fallen by the wayside a bit in the last decade or two, but there are some sectors that still have strong arguments for using it, if not as the main language then at least as an extension language.

    [0] Inspired by a HN comment a while back about Gleemin, the MTG expert engine in Prolog: https://github.com/stassa/Gleemin

  • The Computers Are Getting Better at Writing
    5 projects | news.ycombinator.com | 3 May 2021
    Representing costs in a meaningful manner is a constant problem in every M:tG generator I've seen.

    The problems I highlight above are not with grammaticality, which is certainly a big step forward with respect to the past. But many of the abilities still don't make a lot of sense, or don't make sense to be on the same card, or have weird costs etc.

    My intuition is that it would take a lot more than language modelling to generate M:tG cards that make enough sense that it's more fun to generate them than create them yourself. I think it would be necessary to have background knowledge of the game, at least its rules, if not some concept of a metagame.

    Also, I note that the new online version of the game is capable of parsing cards as scripts in a programming language, using a hand-crafted grammar rather than a machine-learned model [4] [5]. So it seems to me that the state of the art for M:tG language modelling is still a hand-crafted grammar.

    __________________

    [1] https://github.com/stassa/Gleemin - unfortunately, doesn't run anymore after multiple changes to the Prolog interpreters used to create and then port the project over.

    [2] https://github.com/stassa/THELEMA - should work with older versions of SWI-Prolog; unfortunately, this is not documented in the README.

    [3] https://link.springer.com/article/10.1007/s10994-020-05945-w - see Section 3.3 "Experiment 3: M:tG fragment".

    [4] https://www.reddit.com/r/magicTCG/comments/74hw1z/magic_aren...

    [5] https://www.reddit.com/r/magicTCG/comments/9kxid9/mtgadisper...

What are some alternatives?

When comparing louise and Gleemin you can also consider the following projects:

edcg - Extended DCG syntax for Prolog by Peter Van Roy

ciao - Ciao is a modern Prolog implementation that builds up from a logic-based simple kernel designed to be portable, extensible, and modular.

nests-and-insects - A Roguelike Tabletop RPG

muKanren_reading - [Mirror] A close reading of the μKanren paper.

gpt-3-experiments - Test prompts for OpenAI's GPT-3 API and the resulting AI-generated texts.

thelma - An implementation of Meta-Interpretive Learning

microKanren-py - Simple python3 implementation of microKanren with lots of type annotations for clarity

mediKanren - Proof-of-concept for reasoning over the SemMedDB knowledge base, using miniKanren + heuristics + indexing.

aleph - Port of Aleph to SWI-Prolog