llm-mlc VS ad-llama

Compare llm-mlc vs ad-llama and see how they differ.

llm-mlc

LLM plugin for running models using MLC (by simonw)

ad-llama

Structured inference with Llama 2 in your browser (by gsuuon)
              llm-mlc              ad-llama
Mentions      3                    6
Stars         172                  47
Growth        -                    -
Activity      5.1                  8.7
Last commit   2 months ago         13 days ago
Language      Python               TypeScript
License       Apache License 2.0   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

llm-mlc

Posts with mentions or reviews of llm-mlc. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-04.
  • LLM now provides tools for working with embeddings
    7 projects | news.ycombinator.com | 4 Sep 2023
    I'm still iterating on that. Plugins get complete control over the prompts, so they can handle each model's prompting quirks. Here's some relevant code:

    https://github.com/simonw/llm-gpt4all/blob/0046e2bf5d0a9c369...

    https://github.com/simonw/llm-mlc/blob/b05eec9ba008e700ecc42...

    https://github.com/simonw/llm-llama-cpp/blob/29ee8d239f5cfbf...

    I'm not completely happy with this yet. Part of the problem is that different models on the same architecture may have completely different prompting styles.

    I expect I'll eventually evolve the plugins to allow them to be configured in an easier and more flexible way. Ideally I'd like you to be able to run new models on existing architectures using an existing plugin.
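
    As a rough illustration of that direction (not the actual plugin API -- the real plugins are Python, and every name below is invented), per-model configuration could be as simple as a mapping from model name to prompt template:

```typescript
// Hypothetical sketch: config-driven, per-model prompt templates, so a
// new model on an existing architecture can reuse an existing plugin.
// NOT the actual llm plugin API (which is Python); names are invented.
type PromptTemplate = (system: string, user: string) => string;

const templates: Record<string, PromptTemplate> = {
  // Llama 2 chat format
  "llama-2-7b-chat": (system, user) =>
    `<s>[INST] <<SYS>>\n${system}\n<</SYS>>\n\n${user} [/INST]`,
  // Alpaca format: same architecture, completely different prompt style
  "alpaca-7b": (system, user) =>
    `${system}\n\n### Instruction:\n${user}\n\n### Response:\n`,
};

function buildPrompt(model: string, system: string, user: string): string {
  const template = templates[model];
  if (!template) throw new Error(`no prompt template for ${model}`);
  return template(system, user);
}
```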

  • Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2
    12 projects | news.ycombinator.com | 16 Aug 2023
    What is the advantage of this versus running something like https://github.com/simonw/llm , which also gives you options to e.g. use https://github.com/simonw/llm-mlc for accelerated inference?
  • Show HN: LLMs can generate valid JSON 100% of the time
    25 projects | news.ycombinator.com | 14 Aug 2023
    I'm quite impressed with Llama 2 13B - the more time I spend with it, the more I think it might be genuinely useful for more than just playing around with local LLMs.

    I'm using the MLC version (since that works with a GPU on my M2 Mac) via my https://github.com/simonw/llm-mlc plugin.

ad-llama

Posts with mentions or reviews of ad-llama. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-18.
  • Show HN: A murder mystery game built on an open-source gen-AI agent framework
    3 projects | news.ycombinator.com | 18 Sep 2023
  • Guidance: A guidance language for controlling large language models
    10 projects | news.ycombinator.com | 16 Sep 2023
    I took a stab at making something[1] like guidance - I'm not sure exactly how guidance does it (and I'm also really curious how it would work with chat APIs), but here's how my solution works.

    Each expression becomes a new inference request, so it's not a single inference pass. Because each subsequent pass includes the previously inferenced text, the LLM ends up doing a lot of prefill and less decode. You only decode as much as you actually inference; the repeated passes only cost more in prefill (which tends to be much faster in tok/s).
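
    A minimal sketch of that loop (hypothetical names, assuming a plain completion endpoint): each pass re-sends all accumulated text as the prompt, so prior text is prefill and only the current expression is decoded.

```typescript
// Minimal sketch of expression-by-expression inference (hypothetical API).
// Each pass re-sends the accumulated text as the prompt (cheap prefill)
// and only decodes the tokens for the current expression.
async function runTemplate(
  complete: (prompt: string) => Promise<string>, // one inference request
  staticParts: string[] // literal text surrounding each expression
): Promise<string> {
  let accumulated = staticParts[0];
  for (let i = 1; i < staticParts.length; i++) {
    accumulated += await complete(accumulated); // decode one expression
    accumulated += staticParts[i];              // append next literal chunk
  }
  return accumulated;
}
```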

    To work with chat-tuned instruction models, you can basically still treat them as completion models. I provide the previously completed inference text as a partially completed assistant response; e.g. with Llama 2 it goes after [/INST]. You can add a bit of instruction for each inference expression, which gets added to the [INST]. This approach lets you start off the inference with `{ "someField": "`, for example, to guarantee (at least the start of) a JSON response, and lets you add a little bit of instruction or context just for that field.
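
    Concretely, the assembly for Llama 2's chat format might look like the following (a sketch using Llama 2's [INST] tags; the function and variable names are invented):

```typescript
// Sketch: treating a chat-tuned Llama 2 model as a completion model.
// The partially completed inference goes after [/INST], so the model
// continues the assistant turn. Names here are invented for illustration.
function buildLlama2Prompt(
  instruction: string,    // base instruction plus per-expression hints
  partialResponse: string // previously inferenced text to continue from
): string {
  return `<s>[INST] ${instruction} [/INST] ${partialResponse}`;
}

// Forcing the start of a JSON response: the model can only continue
// from inside the object, guaranteeing at least the start of valid JSON.
const prompt = buildLlama2Prompt(
  "Describe the suspect. Respond with JSON.",
  '{ "someField": "'
);
```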

    I didn't even try with the OpenAI APIs since, afaict, you can't provide a partial assistant response for the model to continue from. Even if you were to request a single token at a time and use logit_bias for biased sampling, I don't see how you could get it to continue a partially completed inference.

    [1] https://github.com/gsuuon/ad-llama

  • Simulating History with ChatGPT
    1 project | news.ycombinator.com | 12 Sep 2023
    Can you point me to some text-adventure engines? I'm hacking on an in-browser local LLM structured-inference library[1] and am trying to put together a text game demo[2] for it. It didn't even occur to me that text-adventure game engines exist; I was apparently re-inventing the wheel.

    [1] https://github.com/gsuuon/ad-llama

    [2] https://ad-llama.vercel.app/murder/

  • Ask HN: Which programming language to learn in AI era?
    1 project | news.ycombinator.com | 30 Aug 2023
    Yup, I'm building a library that runs LLMs in the browser with tagged template literals: https://github.com/gsuuon/ad-llama

    I think it has fundamental DX benefits over Python for complex prompt chaining (or I wouldn't be building it!). Even so -- if their focus is purely on AI, Python is still the better choice when starting from scratch. The Python AI ecosystem has many more libraries, Stack Overflow answers, tutorials, etc. available.
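
    To give a sense of the DX idea (this is NOT ad-llama's actual API -- a toy sketch with invented names): a tag function receives the template's static strings and interpolated holes separately, so each hole can become its own inference request while the static structure is preserved verbatim.

```typescript
// Toy sketch of prompt chaining with a tagged template literal.
// NOT ad-llama's real API -- just the DX idea: the tag receives the
// static strings and the "holes" separately, so each hole can be
// filled by a separate inference pass.
type Hole = { stop: string };
const hole = (stop: string): Hole => ({ stop });

function prompt(strings: TemplateStringsArray, ...holes: Hole[]) {
  return async (complete: (text: string, stop: string) => Promise<string>) => {
    let text = strings[0];
    for (let i = 0; i < holes.length; i++) {
      text += await complete(text, holes[i].stop); // fill this hole
      text += strings[i + 1]; // static structure is guaranteed verbatim
    }
    return text;
  };
}

// Usage: the JSON skeleton is fixed; only the hole contents are generated.
const scene = prompt`{ "character": "${hole('"')}", "motive": "${hole('"')}" }`;
```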

  • Show HN: LLMs can generate valid JSON 100% of the time
    25 projects | news.ycombinator.com | 14 Aug 2023
    Generating an FSM over the vocabulary is a really interesting approach to guided sampling! I'm hacking on a structured inference library (https://github.com/gsuuon/ad-llama) - I also tried to add a vocab preprocessing step to generate a valid-token mask (just with regex or static strings initially), but discovered that doing so would cause unlikely / unnatural tokens to be matched by the mask rather than the token that represents the natural encoding given the existing sampled tokens.

    Given the stateful nature of tokenizers, I decided that trying to preprocess the individual token ids was a losing battle. Even in the simple case of whitespace, tokenizer merges can really screw up a static mask: e.g. we expect a space next, and a token decodes to 'foo' on its own, but it's actually '_foo' and would've decoded with a leading whitespace had it followed a valid pair. When I go to construct the static vocab mask, it ends up matching against 'foo' instead of ' foo'.
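
    A toy illustration of that failure mode, assuming a hypothetical SentencePiece-style vocab where '▁' marks a word boundary (not real tokenizer library code):

```typescript
// Toy illustration of why per-token decoding breaks a static mask.
// Hypothetical SentencePiece-style vocab: '▁' marks a word boundary
// and only decodes to a leading space in context.
const vocab: Record<number, string> = { 17: "▁foo", 42: "foo" };

// Naive preprocessing: decode each token id in isolation.
const decodeAlone = (id: number) => vocab[id].replace("▁", "");
// Both 17 and 42 decode to "foo" -- the boundary information is lost.

// In-context decoding: '▁' becomes a space when following other text.
const decodeInContext = (prior: string, id: number) =>
  prior + vocab[id].replace("▁", " ");

// A static mask built from decodeAlone would match token 42 ("foo")
// where the sampled sequence actually needs token 17 (" foo").
console.log(decodeAlone(17) === decodeAlone(42)); // true -- ambiguous
console.log(decodeInContext("bar", 17)); // "bar foo"
console.log(decodeInContext("bar", 42)); // "barfoo"
```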

    How did you work around this for the FSM approach? Does it somehow include information about merges / whitespace / tokenizer statefulness?

What are some alternatives?

When comparing llm-mlc and ad-llama you can also consider the following projects:

llm-gpt4all - Plugin for LLM adding support for the GPT4All collection of models

llm - Access large language models from the command-line

can-ai-code - Self-evaluating interview for AI coders

grontown - A murder mystery featuring generative agents

llama-gpt - A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!

eastworld - Framework for Generative Agents in Games

outlines - Structured Text Generation

hof - Framework that joins data models, schemas, code generation, and a task engine. Language and technology agnostic.

TypeChat - A library that makes it easy to build natural language interfaces using types.

llama.cpp - LLM inference in C/C++

api - Structured LLM APIs
