chatbot-ui VS llamafile

Compare chatbot-ui vs llamafile and see how they differ.

llamafile

Distribute and run LLMs with a single file. (by Mozilla-Ocho)
                chatbot-ui    llamafile
Mentions        63            35
Stars           26,308        14,839
Growth          -             27.7%
Activity        9.4           9.6
Last commit     3 days ago    6 days ago
Language        TypeScript    C++
License         MIT License   GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

chatbot-ui

Posts with mentions or reviews of chatbot-ui. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-03.
  • AI programming tools should be added to the Joel Test
    1 project | news.ycombinator.com | 22 Apr 2024
    One of the first things we did when GPT-4 became available was talk to our Azure rep and get access to the OpenAI models that they'd partnered with Microsoft to host in Azure. Now we have our own private, not-datamined (so they claim, contractually) API endpoint, and we use an OpenAI integration in VS Code[1] to connect to it, allowing anyone in the company to use it to help them code.

    I also spun up an internal chat UI[2] to replace ChatGPT so people can feel comfortable discussing proprietary data with the LLM endpoint.

    The only thing that would make it more secure would be running inference engines internally, but I wouldn't have access to as good of models, and I'd need a _lot_ of hardware to match the speeds.

    [1] - https://marketplace.visualstudio.com/items?itemName=AndrewBu...

    [2] - https://github.com/mckaywrigley/chatbot-ui (legacy branch)
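
    A hedged sketch of the kind of setup described above: pointing the standard openai npm client at a private Azure OpenAI deployment instead of api.openai.com. The resource name, deployment name, API version, and environment variable below are placeholders, not details from the post.

      // Placeholder sketch: route the openai client to a private Azure OpenAI
      // deployment rather than api.openai.com. Resource, deployment, API
      // version, and env var names are illustrative only.
      import OpenAI from "openai";

      const client = new OpenAI({
        apiKey: process.env.AZURE_OPENAI_KEY ?? "",
        baseURL: "https://my-resource.openai.azure.com/openai/deployments/gpt-4",
        defaultQuery: { "api-version": "2024-02-01" },                     // Azure requires an api-version
        defaultHeaders: { "api-key": process.env.AZURE_OPENAI_KEY ?? "" }, // Azure uses the api-key header
      });

      async function main() {
        const completion = await client.chat.completions.create({
          // Azure routes by the deployment in the URL; the model field is nominal here.
          model: "gpt-4",
          messages: [{ role: "user", content: "Explain this stack trace." }],
        });
        console.log(completion.choices[0].message.content);
      }

      main().catch(console.error);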

  • Ask HN: Has Anyone Trained a personal LLM using their personal notes?
    10 projects | news.ycombinator.com | 3 Apr 2024
    [3] https://github.com/mckaywrigley/chatbot-ui
  • Show HN: I made an app to use local AI as daily driver
    31 projects | news.ycombinator.com | 27 Feb 2024
    Thank you for the work.

    Please take this in a nice way: I can't see why I would use this over ChatbotUI+Ollama https://github.com/mckaywrigley/chatbot-ui

    Seems the only advantage is having it as a macOS native app, and the only real distinction is maybe fast import and search - I've yet to try that though.

    ChatbotUI (and other similar stuff) are cross-platform, customizable, private, debuggable. I'm easily able to see what it's trying to do.

  • ChatGPT for Teams
    2 projects | news.ycombinator.com | 11 Jan 2024
    You can make a privacy request for OpenAI to not train on your data here: https://privacy.openai.com/

    Alternatively, you could also use your own UI/API token (API calls aren't trained on). Chatbot UI just released a major update and has nice things like folders and chat search: https://github.com/mckaywrigley/chatbot-ui

  • Chatbot UI 2.0
    1 project | news.ycombinator.com | 9 Jan 2024
  • webui similar to chatgpt
    2 projects | /r/LocalLLaMA | 9 Dec 2023
  • They made ChatGPT worse at coding for some reason, and it’s caused me to look at alternative AI options
    1 project | /r/ChatGPT | 7 Dec 2023
    Also, Chatbot UI is great (https://github.com/mckaywrigley/chatbot-ui); it has a UI similar to ChatGPT.
  • Please Don't Ask If an Open Source Project Is Dead
    3 projects | news.ycombinator.com | 14 Nov 2023
    > The comment I screenshotted is passive-aggressive at best, and there's no really good way to ask "is this repo dead" without being passive-aggressive. My day-to-day job that actually pays me a salary wouldn't ever provide a bulleted list of the reasons I suck, let alone a project I develop in my spare time.

    There is nothing passive-aggressive about that comment. There is nothing problematic about it at all. Nobody's calling you slurs or making demands. I see one guy who might as well be a Mormon Boy Scout from Canada. "Is this repo dead" is not passive-aggressive, just ineloquent. Fuck my eyes until the jelly leaks out my ears if a courteous and professionally-written question constitutes "applying pressure and being rude" these days.

    I don't know what a "bulleted list of the reasons [you] suck" has to do with anything (I don't see where anybody sent you one) but you're coming across as someone who invites people to your garage sale and then brandishes a shotgun and starts screaming when they set foot on your property.

    > I’ve never seen any discussions or articles about whether it’s appropriate to ask if an open source repository is dead. Is there an implicit contract to actively maintain any open source software you publish? Are you obligated to provide free support if you hit a certain star amount on GitHub or ask for funding through GitHub Sponsorships/Patreon? After all, most permissive open source code licenses like the MIT License contain some variant of “the software is provided ‘as is’, without warranty of any kind.”

    Here's an example of why everyone should ask if an open source project is dead:

    https://github.com/mckaywrigley/chatbot-ui/issues

    A number of issues complain about it leaking OpenAI keys. Nobody's figured out how, but it'd be nice to know if anybody's working on it, if it's worth submitting a PR, if it should be forked, if it's worth bothering with at all. This code is a massive liability in its current state. Its creator is absent. It warrants questions being asked about its future. Yeah, it's as-is software, but it's not an affront to your mother's virtue when someone asks if your shit still works or if you have plans to fix it.

    > I’ve had an existential crisis about my work in open source AI on GitHub, particularly as there has been both increasingly toxic backlash against AI and because the AI industry has been evolving so rapidly that I flat-out don’t have enough bandwidth to keep up

    Herein lies the problem? You sound overwhelmed. I've been there myself. I don't know what your year's been like but you genuinely might want to get away from the screen and get some fresh air. This is a good time of year to do it, since things generally slow down at work.

  • I need help with getting an API
    2 projects | /r/github | 13 Aug 2023
  • I need help with getting an api
    1 project | /r/Frontend | 13 Aug 2023

llamafile

Posts with mentions or reviews of llamafile. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-06.
  • FLaNK-AIM Weekly 06 May 2024
    45 projects | dev.to | 6 May 2024
  • llamafile v0.8
    1 project | news.ycombinator.com | 24 Apr 2024
  • Mistral AI Launches New 8x22B Moe Model
    4 projects | news.ycombinator.com | 9 Apr 2024
    I think the llamafile[0] system works the best. Binary works on the command line or launches a mini webserver. Llamafile offers builds of Mixtral-8x7B-Instruct, so presumably they may package this one up as well (potentially a quantized format).

    You would have to confirm with someone deeper in the ecosystem, but I think you should be able to run this new model as is against a llamafile?

    [0] https://github.com/Mozilla-Ocho/llamafile
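
    A hedged sketch of talking to that mini webserver from code, assuming a llamafile is already running locally on its default port (8080) and exposing the OpenAI-compatible chat completions route; the model name is a placeholder.

      // Query a locally running llamafile over its built-in HTTP server.
      // Assumes the default port (8080) and the OpenAI-compatible endpoint.
      async function ask(prompt: string): Promise<string> {
        const res = await fetch("http://localhost:8080/v1/chat/completions", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({
            model: "local", // llamafile serves whichever weights it was started with
            messages: [{ role: "user", content: prompt }],
          }),
        });
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        const data = await res.json();
        return data.choices[0].message.content;
      }

      ask("Summarize mixture-of-experts in two sentences.").then(console.log);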

  • Apple Explores Home Robotics as Potential 'Next Big Thing'
    3 projects | news.ycombinator.com | 4 Apr 2024
    Thermostats: https://www.sinopetech.com/en/products/thermostat/

    I haven't tried running a local speech-to-text engine feeding an LLM to control Home Assistant. Maybe someone is working on this already?

    STT: https://github.com/SYSTRAN/faster-whisper

    LLM: https://github.com/Mozilla-Ocho/llamafile/releases

    LLM: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-D...

    It would take some tweaking to get the voice commands working correctly.

  • LLaMA Now Goes Faster on CPUs
    16 projects | news.ycombinator.com | 31 Mar 2024
    While I did not succeed in making the matmul code from https://github.com/Mozilla-Ocho/llamafile/blob/main/llamafil... work in isolation, I compared eigen, openblas, and mkl: https://gist.github.com/Dobiasd/e664c681c4a7933ef5d2df7caa87...

    In this (very primitive!) benchmark, MKL was a bit better than eigen (~10%) on my machine (i5-6600).

    Since the article https://justine.lol/matmul/ compared the new kernels with MKL, we can (by transitivity) compare the new kernels with Eigen this way, at least very roughly for this one use-case.

  • Llamafile 0.7 Brings AVX-512 Support: 10x Faster Prompt Eval Times for AMD Zen 4
    3 projects | news.ycombinator.com | 31 Mar 2024
    Yes, they're just ZIP files that also happen to be actually portable executables.

    https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file...

  • Show HN: I made an app to use local AI as daily driver
    31 projects | news.ycombinator.com | 27 Feb 2024
    have you seen llamafile[0]?

    [0] https://github.com/Mozilla-Ocho/llamafile

  • FLaNK Stack 26 February 2024
    50 projects | dev.to | 26 Feb 2024
  • Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models
    7 projects | news.ycombinator.com | 23 Feb 2024
    llama.cpp has integrated gemma support. So you can use llamafile for this. It is a standalone executable that is portable across most popular OSes.

    https://github.com/Mozilla-Ocho/llamafile/releases

    So, download the executable from the releases page under assets. You want either just main or just server. Don't get the huge ones with the model inlined in the file. The executable is about 30MB in size:

    https://github.com/Mozilla-Ocho/llamafile/releases/download/...

  • Ollama releases OpenAI API compatibility
    12 projects | news.ycombinator.com | 8 Feb 2024
    The improvements in ease of use for locally hosting LLMs over the last few months have been amazing. I was ranting about how easy https://github.com/Mozilla-Ocho/llamafile is just a few hours ago [1]. Now I'm torn as to which one to use :)

    1: Quite literally hours ago: https://euri.ca/blog/2024-llm-self-hosting-is-easy-now/
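
    A hedged sketch of what that compatibility makes possible, assuming Ollama is running on its default port (11434) with a model such as llama3 already pulled; both details are assumptions, not taken from the post.

      // Reuse the openai npm client against Ollama's OpenAI-compatible endpoint.
      import OpenAI from "openai";

      const client = new OpenAI({
        baseURL: "http://localhost:11434/v1", // Ollama's default address
        apiKey: "ollama",                     // required by the client, ignored by Ollama
      });

      async function main() {
        const reply = await client.chat.completions.create({
          model: "llama3", // any model tag already pulled locally
          messages: [{ role: "user", content: "Why self-host an LLM?" }],
        });
        console.log(reply.choices[0].message.content);
      }

      main().catch(console.error);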

What are some alternatives?

When comparing chatbot-ui and llamafile, you can also consider the following projects:

BetterChatGPT - An amazing UI for OpenAI's ChatGPT (Website + Windows + MacOS + Linux)

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.

gpt4all - gpt4all: run open-source LLMs anywhere

langchain - 🦜🔗 Build context-aware reasoning applications

Flowise - Drag & drop UI to build your customized LLM flow

ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]

chatgpt-clone - Enhanced ChatGPT Clone: Features OpenAI, Bing, PaLM 2, AI model switching, message search, langchain, Plugins, Multi-User System, Presets, completely open-source for self-hosting. More features in development [Moved to: https://github.com/danny-avila/LibreChat]

LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

safetensors - Simple, safe way to store and distribute tensors

turbogpt.ai

llama.cpp - LLM inference in C/C++