chat-ui vs llamafile

| | chat-ui | llamafile |
|---|---|---|
| Mentions | 40 | 36 |
| Stars | 6,369 | 15,410 |
| Stars growth (monthly) | 10.8% | 30.4% |
| Activity | 9.7 | 9.6 |
| Last commit | 4 days ago | 7 days ago |
| Language | TypeScript | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chat-ui
-
Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat
Zephyr 141B is a Mixtral 8x22B fine-tune. Here are some interesting details:
- Base model: Mixtral 8x22B, 8 experts, 141B total params, 35B activated params
- Fine-tuned with ORPO, a new alignment algorithm with no SFT step (hence much faster than DPO/PPO)
- Trained with 7K open data instances -> high-quality, synthetic, multi-turn
- Apache 2
Everything is open:
- Final Model: https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v...
- Base Model: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1
- Fine-tune data: https://huggingface.co/datasets/argilla/distilabel-capybara-...
- Recipe/code to train the model: https://huggingface.co/datasets/argilla/distilabel-capybara-...
- Open-source inference engine: https://github.com/huggingface/text-generation-inference
- Open-source UI code https://github.com/huggingface/chat-ui
Have fun!
-
AI enthusiasm - episode #2🚀
As long as you have a free Hugging Face account, you can sign up and use HuggingChat, a web-based chat interface where you will find 5 large language models to play with (Mixtral-7B-it v0.1 and v0.2, Command R plus, Gemma 1.1-7B-it, Dolphin). You can also try out several assistants made by the Hugging Face community, or even create your own!
-
OpenAI Startup Fund: GP Hallucination
I submitted something about this the other day (and it got flagged). I poked around a little, and the only interesting thing I could find is this: https://github.com/huggingface/chat-ui/issues/254. I don't really even understand what it is; it references the stuff the dude who wrote this is discussing. I had kinda written the whole thing off as someone with too much time on their hands who is just f'ing around with stuff for whatever reason.
I think they made this as well: https://chat.openai.com/g/g-KT4gusP3Y-a-l-i-s-t-a-i-r-e-earl... - it doesn't seem very useful.
¯\_(ツ)_/¯ After spending an hour or so poking around, it seemed to me like a bored, modern, tech-savvy young person playing around.
- ⚔️ Embeddings, RAG Chatbots Arena, and OPT-NC telecom plans
-
Show HN: I made an app to use local AI as daily driver
- https://github.com/huggingface/chat-ui
-
Deconstructing Hugging Face Chat: Explore open-source chat UI/UX for generative AI
Hugging Face Chat - open-source repo powering Hugging Chat!
-
What are you guys using local LLMs for?
If you don't want to do coding, I think Hugging Face's chat-ui can come in handy, with web-retrieval RAG and llama.cpp running as a server. Check their documentation on how to set it up (see the "Running your own models using a custom endpoint" section on their GitHub).
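For reference, the custom-endpoint setup described above boils down to a `MODELS` entry in chat-ui's `.env.local` pointing at a running llama.cpp server. This is only a sketch: the model name, port, and exact field names below are assumptions, so check the chat-ui README for the current schema.

```
# .env.local (sketch; field names and port are assumptions)
MODELS=`[
  {
    "name": "Local llama.cpp model",
    "endpoints": [
      { "type": "llamacpp", "baseURL": "http://localhost:8080" }
    ]
  }
]`
```

The llama.cpp server itself is started separately (for example with its `server` binary listening on the same port) before launching chat-ui.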
-
The founder of OpenAI/ChatGPT is a Zionist calling people that are against Israeli genocide “antisemitist”, how dare the American left speak against genocide!?
Yes! It's proprietary and invasive, and it harvests your data and uses it to improve the AI. Altman went to Israel weeks after ChatGPT was introduced; Israel, like any other tech-giant country, needs to make sure it has control over that data and/or can use it to achieve its goals. So it's better to find offline FOSS alternatives (if you have a decent enough PC) or use HuggingChat as an online FOSS alternative; I find it better than GPT 3.5 in many aspects.
-
Smartphone Brands Sorted Out, So You Don't Have To
I have categorized some smartphone brands by their parent company using HuggingChat (based on RLHF), Google's Bard, ChatGPT, and Perplexity. All of them are powered by LLMs, and both ChatGPT and Perplexity use GPT-3.5.
-
Accessing ChatGPT in non-official UI
I'm looking for something like https://huggingface.co/chat/ or OpenAssistant, but it should target OpenAI's api.
llamafile
- FLaNK-AIM Weekly 06 May 2024
- llamafile v0.8
-
Mistral AI Launches New 8x22B Moe Model
I think the llamafile[0] system works the best. The binary works on the command line or launches a mini webserver. Llamafile offers builds of Mixtral-8x7B-Instruct, so presumably they may package this one up as well (potentially in a quantized format).
You would have to confirm with someone deeper in the ecosystem, but I think you should be able to run this new model as-is against a llamafile?
[0] https://github.com/Mozilla-Ocho/llamafile
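A rough sketch of the workflow the comment describes, i.e. one downloaded file acting as both a command-line tool and a local webserver. The file name here is a placeholder and the flags follow llamafile's llama.cpp-style CLI, so they may differ by version:

```shell
# Download a self-contained llamafile (weights + inference engine in one
# executable). The exact file name is hypothetical; pick a real one from
# the llamafile releases or Hugging Face pages.
chmod +x mixtral-8x7b-instruct.Q4_0.llamafile

# One-shot prompt on the command line...
./mixtral-8x7b-instruct.Q4_0.llamafile -p 'Explain MoE routing briefly.'

# ...or launch the built-in mini webserver and chat in the browser.
./mixtral-8x7b-instruct.Q4_0.llamafile --server --port 8080
```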
-
Apple Explores Home Robotics as Potential 'Next Big Thing'
Thermostats: https://www.sinopetech.com/en/products/thermostat/
I haven't tried running a local speech-to-text engine feeding an LLM to control Home Assistant. Maybe someone is working on this already?
STT: https://github.com/SYSTRAN/faster-whisper
LLM: https://github.com/Mozilla-Ocho/llamafile/releases
LLM: https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-D...
It would take some tweaking to get the voice commands working correctly.
-
LLaMA Now Goes Faster on CPUs
While I did not succeed in making the matmul code from https://github.com/Mozilla-Ocho/llamafile/blob/main/llamafil... work in isolation, I compared eigen, openblas, and mkl: https://gist.github.com/Dobiasd/e664c681c4a7933ef5d2df7caa87...
In this (very primitive!) benchmark, MKL was a bit better than eigen (~10%) on my machine (i5-6600).
Since the article https://justine.lol/matmul/ compared the new kernels with MKL, we can (by transitivity) compare the new kernels with Eigen this way, at least very roughly, for this one use case.
-
Llamafile 0.7 Brings AVX-512 Support: 10x Faster Prompt Eval Times for AMD Zen 4
Yes, they're just ZIP files that also happen to be Actually Portable Executables.
https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file...
-
Show HN: I made an app to use local AI as daily driver
have you seen llamafile[0]?
[0] https://github.com/Mozilla-Ocho/llamafile
- FLaNK Stack 26 February 2024
-
Gemma.cpp: lightweight, standalone C++ inference engine for Gemma models
llama.cpp has integrated gemma support. So you can use llamafile for this. It is a standalone executable that is portable across most popular OSes.
https://github.com/Mozilla-Ocho/llamafile/releases
So, download the executable from the releases page under assets. You want either just main or just server; don't get the huge ones with the model inlined in the file. The executable is about 30 MB in size:
https://github.com/Mozilla-Ocho/llamafile/releases/download/...
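The steps above can be sketched roughly as follows. The asset and model file names are placeholders (the truncated release URL above points at the real asset), and the flags follow llamafile's llama.cpp-style CLI:

```shell
# Engine-only llamafile binary from the releases page (name is a placeholder
# for the actual ~30 MB asset; no model weights inlined).
chmod +x ./llamafile-server

# Point it at a separately downloaded Gemma GGUF (path hypothetical),
# which llama.cpp's integrated gemma support can load.
./llamafile-server -m gemma-7b-it.Q4_K_M.gguf --port 8080
```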
-
Ollama releases OpenAI API compatibility
The improvements in ease of use for locally hosting LLMs over the last few months have been amazing. I was ranting about how easy https://github.com/Mozilla-Ocho/llamafile is just a few hours ago [1]. Now I'm torn as to which one to use :)
1: Quite literally hours ago: https://euri.ca/blog/2024-llm-self-hosting-is-easy-now/
What are some alternatives?
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
DiscordChatExporter-frontend - Browse JSON files exported by Tyrrrz/DiscordChatExporter in a familiar Discord-like user interface
ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
langchain - 🦜🔗 Build context-aware reasoning applications
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
LLaVA - [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
llama.cpp - LLM inference in C/C++
AgileRL - Streamlining reinforcement learning with RLOps. State-of-the-art RL algorithms and tools.
safetensors - Simple, safe way to store and distribute tensors