louise vs wh40ksim

Compare louise vs wh40ksim and see what are their differences.

louise

Polynomial-time Meta-Interpretive Learning (by stassa)

wh40ksim

Warhammer 40k Combat simulator (by stassa)
               louise                                    wh40ksim
Mentions       8                                         1
Stars          91                                        5
Growth         -                                         -
Activity       8.0                                       10.0
Last commit    3 months ago                              over 5 years ago
Language       Prolog                                    Prolog
License        GNU General Public License v3.0 or later  -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

louise

Posts with mentions or reviews of louise. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-24.
  • Prolog for future AI
    2 projects | /r/prolog | 24 Jun 2023
    and this is a cool repo to track: https://github.com/stassa/louise
  • What do we think about Meta-Interpretive Learning?
    1 project | /r/MLQuestions | 11 Mar 2023
    From what I understand, this is a relatively new approach to ML? Has anyone heard of it? I was hoping to get a general feel for what people in the industry think about the prospects of this approach. If you're curious, here's an implementation of MIL.
  • Potassco: The Answer Set Solving Collection
    2 projects | news.ycombinator.com | 20 Dec 2022
    Thanks, that's a nice example.

    >> For an example: the potential hypotheses here are pre-generated, but you can imagine an algorithm, or adapt an existing one, with a tight generalise/specialise loop.

    Yes! I'm thinking of how to adapt Louise (https://github.com/stassa/louise) to do that. The fact that s(CASP) is basically a Prolog-y version of ASP (with constraints) could make it a very natural sort of modification. Or, of course, there's always Well-Founded Semantics (https://www.swi-prolog.org/pldoc/man?section=WFS).
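
    To make that idea a bit more concrete, here is a rough sketch of such a loop, written for this comment rather than taken from Louise: a hypothesis is a list of clauses drawn from a pre-generated pool, and covers/2 (assumed to be defined elsewhere, over the background knowledge) tests whether a hypothesis proves an example.

      % Sketch only: refine(H, Pos, Neg, Pool, H_) searches for a hypothesis
      % H_ that covers every positive example and no negative example, by
      % alternately specialising (dropping a clause) and generalising
      % (adding a clause from the pre-generated Pool).
      refine(H, Pos, Neg, _Pool, H):-
          forall(member(P, Pos), covers(H, P)),   % complete
          \+ (member(N, Neg), covers(H, N)),      % and consistent: done.
          !.
      refine(H, Pos, Neg, Pool, H_):-
          member(N, Neg),
          covers(H, N),                           % too general:
          !,
          select(_Clause, H, H1),                 % specialise by dropping a
          \+ covers(H1, N),                       % clause that lets N in.
          refine(H1, Pos, Neg, Pool, H_).
      refine(H, Pos, Neg, Pool, H_):-
          member(P, Pos),
          \+ covers(H, P),                        % too specific:
          member(Clause, Pool),                   % generalise by adding a
          \+ memberchk(Clause, H),                % clause from the pool.
          refine([Clause|H], Pos, Neg, Pool, H_).

    There is no loop-checking here, so the search can oscillate; it is only meant to show the shape of a tight generalise/specialise loop over pre-generated candidates.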

  • AI Is Ushering in a New Scientific Revolution
    1 project | news.ycombinator.com | 8 Jun 2022
    Well, since we're going a little mad with speculation in this thread, I have to point out that true one-shot learning (as opposed to "one-billion-plus-one-shot") works just fine, but only in the symbolic machine learning paradigm. For example, see:

    https://github.com/stassa/louise#capabilities

    In particular, see the second example listed there: a trivial example, but one that cannot be reproduced by current approaches without big-data pre-training.
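
    For readers unfamiliar with the setting, the general shape of a one-shot learning task in Meta-Interpretive Learning terms is roughly the following (illustrative only; this is not Louise's exact experiment-file format, nor necessarily the example referred to above):

      % Background knowledge: a one-step relation, given as Prolog facts.
      parent(alice, bob).
      parent(bob, carol).

      % A single positive example of the target predicate:
      %   ancestor(alice, carol).

      % Two metarules, i.e. second-order clause templates:
      %   Identity:  P(x,y) :- Q(x,y)
      %   Tailrec:   P(x,y) :- Q(x,z), P(z,y)
      %
      % A MIL learner instantiates P and Q with predicate symbols from the
      % problem (here ancestor/2 and parent/2), yielding the program:
      %
      %   ancestor(A, B) :- parent(A, B).
      %   ancestor(A, B) :- parent(A, C), ancestor(C, B).
      %
      % That is, a recursive definition learned from one example, with no
      % statistical training at all.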

    I bring up the 80/20% training/test split that is standard in machine learning because I remember an interaction with my supervisor at the start of my PhD. In one of our meetings my supervisor asked me about the details of some experiments I was running, with a system called metagol (linked from the Louise repository above). En passant, I mentioned that I was training with a 20/80% training/test split and my supervisor stopped me to ask if I thought that was a standard setup for machine learning. Thinking he meant the splitting of my data into training and testing partitions, I answered, a bit bemused, that yes, of course, that's the standard thing. To which my supervisor laughed and replied "I don't think so". Later, of course, I realised that he meant that the done thing in machine learning is to use most of the data for training and leave as little as possible for testing.

    In Inductive Logic Programming, it's typically the other way around, and the datasets are often just a few examples, like a dozen or so. Of course our systems don't do the spectacular, impressive things that deep learning systems do, but then again we don't have a dozen thousand graduates racing to outdo each other with new feats of engineering. Which is a bit of a shame, because I think that if we had no more than a thousand people working on ILP, we'd make huge progress in applications, as well as in our understanding of machine learning in general.

    Oh well. It's probably all for the best. Who wants to build genuinely useful and intelligent systems anyway?

  • Annotated implementation of microKanren: an embeddable logic language
    9 projects | news.ycombinator.com | 25 May 2022
    Note you can do machine learning of logic programs. My PhD research:

    https://github.com/stassa/louise

    In which case it _is_ machine learning and it still really works :D

  • A.I. Can Now Write Its Own Computer Code. That’s Good News for Humans
    1 project | news.ycombinator.com | 10 Sep 2021
    If you want to auto-write Haskell, use MagicHaskeller:

    http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.ht...

    And if you want to auto-write Prolog, use my own Louise:

    https://github.com/stassa/louise

  • How Good Is Codex?
    1 project | news.ycombinator.com | 19 Aug 2021
    My guess is the end result of all this "AI" assisted code-generation is that it will have the same impact on the software engineering industry as spreadsheets had on accounting. I also believe that this AI-powered stuff is a bit of a "two-steps forward, one step back" situation and the real innovation will begin when ideas from tools like Louise [1] are integrated into the approach taken in Codex.

    When VisiCalc was released, departments of 30 accountants were reduced to 5 because of the improvement in individual worker efficiency; however, accounting itself remains largely unchanged and accountants are still respected professionals who perform important functions. There are plenty of programming problems in the world that simply aren't being solved because we haven't figured out how to reduce the burden of producing the software; code generation will simply increase the output of an individual software developer.

    The same forces behind "no-code" are at work here. In fact I see a future where these two solutions intermingle: where "no-code" becomes synonymous with prompt-driven development. As we all know, however, these solutions will only take you so far -- and essentially only allow you to express problems in domains that are already well-solved. We're just expressing a higher level of program abstraction; programs that generate programs. This is a good thing and it is not a threat to the existence of our industry. Even in Star Trek they still have engineers who fix their computers...

    [1] - https://github.com/stassa/louise

  • Louise: A machine learning system that learns Prolog programs
    1 project | news.ycombinator.com | 28 Dec 2020

wh40ksim

Posts with mentions or reviews of wh40ksim. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-05-25.
  • Annotated implementation of microKanren: an embeddable logic language
    9 projects | news.ycombinator.com | 25 May 2022
    Here's some stuff I've written in Prolog, some for my own enjoyment, one for my degree project.

    Most of the benefits I found come down to two things:

    a) Prolog, like the various kanrens, is a relational language so a program is effectively a database. There's no need to do anything special to glue together a data layer and a logic layer, because you have both written in Prolog.

    b) Prolog's declarative style makes translating rules and directives to code a breeze. The three projects below are all games and benefit heavily from this feature.
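
    To make (a) and (b) concrete before getting to the list, here is a made-up miniature in the same spirit (the profiles and the to-hit rule are simplified for illustration, not code from any of the projects below):

      % Data layer: unit profiles are just Prolog facts.
      ballistic_skill(space_marine, 3).   % hits on a D6 roll of 3+
      ballistic_skill(ork_boy, 5).        % hits on a D6 roll of 5+

      % Logic layer: the to-hit rule, stated much as a rulebook states it:
      % "roll a D6; the attack hits if the roll equals or exceeds the
      % model's Ballistic Skill."
      hits(Model, Roll) :-
          ballistic_skill(Model, BS),
          Roll >= BS.

      % ?- hits(space_marine, 4).   % true
      % ?- hits(ork_boy, 4).        % false

    The facts are the data layer, the rule is the logic layer, and both are queried by the same engine, which is what makes gluing them together a non-issue.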

    1. Warhammer 40K simulation:

    https://github.com/stassa/wh40ksim

    Runs simulations of combat between WH40k units.

    2. Gleemin, a Magic: the Gathering expert system:

    https://github.com/stassa/Gleemin

    Doesn't work anymore! Because backwards compatibility. Includes a) a parser for the rules text on M:tG cards, written in Prolog's Definite Clause Grammars notation, b) a rules engine, and c) a (primitive) AI player. The parser translates rules text from cards into rules engine calls. The cards themselves are Prolog predicates. Your data and your program are one and now you can also do stuff with them. (A tiny sketch of the DCG idea follows after the project list.)

    3. Nests & Insects, a roguelike TTRPG:

    https://github.com/stassa/nests-and-insects

    WIP! Here I use Prolog to keep the data about my tabletop RPG organised, and also to automatically fill in the character sheets typeset in the rulebook. The Prolog code runs a character creation process and generates completed character sheets. I plan to do the same for enemies' stat blocks, various procedural generation tables, etc. I also use Prolog to typeset the ASCII-styled rulebook, but that's probably not a good application of Prolog.
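
    As promised under point 2, here is a tiny sketch of the DCG idea, written for this comment rather than lifted from Gleemin: a grammar rule that parses a fragment of rules text straight into a term a rules engine could act on.

      % A toy DCG: parses "draw two cards" into the term draw(2).
      % (Illustrative only; Gleemin's actual grammar is far richer.)
      ability(draw(N)) --> [draw], count(N), [cards].
      ability(draw(1)) --> [draw, a, card].

      count(1) --> [one].
      count(2) --> [two].
      count(3) --> [three].

      % ?- phrase(ability(A), [draw, two, cards]).
      % A = draw(2).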

    You asked about "logic programming" in general and not miniKanren in particular. I haven't actually used miniKanren, so I commented about the logic programming language I've used the most, Prolog. I hope that's not a thread hijack!

    All three of the projects above are basically games. I have more "serious" stuff on my github but I feel a certain shortfall of gravitas, I suppose.

What are some alternatives?

When comparing louise and wh40ksim you can also consider the following projects:

edcg - Extended DCG syntax for Prolog by Peter Van Roy

nests-and-insects - A Roguelike Tabletop RPG

muKanren_reading - [Mirror] A close reading of the μKanren paper.

Gleemin - A Magic: the Gathering™ expert system

microKanren-py - Simple python3 implementation of microKanren with lots of type annotations for clarity

thelma - An implementation of Meta-Interpretive Learning

scryer-prolog - A modern Prolog implementation written mostly in Rust.

mediKanren - Proof-of-concept for reasoning over the SemMedDB knowledge base, using miniKanren + heuristics + indexing.