| | mlc-llm | ollama |
|---|---|---|
| Mentions | 89 | 229 |
| Stars | 17,555 | 72,781 |
| Growth | 3.4% | 14.0% |
| Activity | 9.9 | 9.9 |
| Latest commit | 3 days ago | 5 days ago |
| Language | Python | Go |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mlc-llm
- FLaNK 04 March 2024
- AI on an Android phone?
This one uses the GPU; it doesn't support Mistral yet: https://github.com/mlc-ai/mlc-llm
- MLC vs llama.cpp
I have tried running Mistral 7B with MLC on my M1 (Metal), and it kept crashing (GitHub issue with description). Memory inefficiency problems.
- [Project] Scaling LLama2 70B with Multi NVIDIA and AMD GPUs under 3k budget
Project: https://github.com/mlc-ai/mlc-llm
- Scaling LLama2-70B with Multi Nvidia/AMD GPU
- AMD May Get Across the CUDA Moat
For LLM inference, a shoutout to MLC LLM, which runs LLM models on basically any API that's widely available: https://github.com/mlc-ai/mlc-llm
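To give a flavor of what that looks like in practice, here is a minimal sketch of MLC LLM's Python interface; the `MLCEngine` class, its OpenAI-style chat API, and the prebuilt model ID below are assumptions based on the project's quick-start docs, not code from the comment:

```python
# Minimal sketch, assuming the `mlc_llm` Python package is installed and a
# prebuilt 4-bit Llama 3 weight repo is available from the mlc-ai HF org.
from mlc_llm import MLCEngine

model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
engine = MLCEngine(model)  # picks up the available GPU backend (CUDA/ROCm/Vulkan/Metal)

# The engine exposes an OpenAI-style chat completion interface.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What GPU APIs does MLC LLM target?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content or "", end="", flush=True)
print()

engine.terminate()
```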
- ROCm Is AMD's #1 Priority, Executive Says
One of your problems might be that gfx1032 is not supported by AMD's ROCm packages, which has a laughably short list of supported hardware: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...
The normal workaround is to assign the closest architecture, e.g. gfx1030, so `HSA_OVERRIDE_GFX_VERSION=10.3.0` might help.
Also, it looks like some of your tested projects are OpenCL? For me, I do something like: `yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk` to cover all the bases.
My recent interest has been LLMs, and this is my general step-by-step guide (llama.cpp, exllama) for those interested: https://llm-tracker.info/books/howto-guides/page/amd-gpus
I didn't port the docs back in, but also here's a step-by-step w/ my adventures getting TVM/MLC working w/ an APU: https://github.com/mlc-ai/mlc-llm/issues/787
From my experience, ROCm is improving, but there's a good reason that Nvidia has 90% market share even at big price premiums.
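To make the override mentioned above concrete, here is a small hedged Python check; it assumes a ROCm build of PyTorch, and `10.3.0` is just the gfx1030 example from the comment, not a universal value:

```python
import os

# Set the override *before* any ROCm-backed library is imported, since the
# HSA runtime reads it at initialization. 10.3.0 maps gfx1032 -> gfx1030.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

import torch  # assumption: a ROCm wheel of PyTorch is installed

print("HIP runtime:", torch.version.hip)           # None on CUDA-only builds
print("GPU visible:", torch.cuda.is_available())   # ROCm devices appear via the CUDA API
if torch.cuda.is_available():
    print("Device name:", torch.cuda.get_device_name(0))
```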
- Show HN: Ollama for Linux – Run LLMs on Linux with GPU Acceleration
Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
- Show HN: Fine-tune your own Llama 2 to replace GPT-3.5/4
you already have TVM for the cross platform stuff
see https://tvm.apache.org/docs/how_to/deploy/android.html
or https://octoml.ai/blog/using-swift-and-apache-tvm-to-develop...
or https://github.com/mlc-ai/mlc-llm
- Ask HN: Are you training and running custom LLMs and how are you doing it?
ollama
- Ollama v0.1.45
I think the two main maintainers of Ollama have good intentions but suffer from a combination of being far too busy, juggling their forked llama.cpp server and not having enough automation/testing for PRs.
There is a new draft PR up to look at moving away from trying to juggle maintaining a llama.cpp fork to using llama.cpp with cgo bindings which I think will really help: https://github.com/ollama/ollama/pull/5034
- SpringAI, llama3 and pgvector: bRAGging rights!
To support the exploration, I've developed a simple Retrieval Augmented Generation (RAG) workflow that works completely locally on the laptop for free. If you're interested, you can find the code itself here. Basically, I've used Testcontainers to create a Postgres database container with the pgvector extension to store text embeddings, and an open-source LLM to which I send requests: Meta's llama3 through ollama.
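The post itself uses Spring AI in Java, but the same flow is easy to sketch in Python; everything below (the `nomic-embed-text` embedding model, the table layout, and the connection string) is an assumption for illustration, not the article's actual code:

```python
# Hedged Python sketch of the described RAG flow: embeddings stored in
# Postgres/pgvector, generation via a local Ollama server running llama3.
# Assumes the packages requests, psycopg2-binary, pgvector and numpy, plus
# `ollama pull llama3` and `ollama pull nomic-embed-text`.
import numpy as np
import psycopg2
import requests
from pgvector.psycopg2 import register_vector

OLLAMA = "http://localhost:11434"

def embed(text: str) -> np.ndarray:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    r.raise_for_status()
    return np.array(r.json()["embedding"])  # nomic-embed-text returns 768 dims

conn = psycopg2.connect("dbname=rag user=postgres password=postgres host=localhost")
conn.autocommit = True
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)
cur.execute("CREATE TABLE IF NOT EXISTS docs "
            "(id serial PRIMARY KEY, body text, embedding vector(768))")

# Index a couple of documents.
for doc in ["pgvector stores embeddings inside Postgres.",
            "Ollama serves local LLMs such as llama3 over a REST API."]:
    cur.execute("INSERT INTO docs (body, embedding) VALUES (%s, %s)", (doc, embed(doc)))

# Retrieve the nearest document and answer with it as context.
question = "Where are the embeddings stored?"
cur.execute("SELECT body FROM docs ORDER BY embedding <-> %s LIMIT 1", (embed(question),))
context = cur.fetchone()[0]

r = requests.post(f"{OLLAMA}/api/generate",
                  json={"model": "llama3", "stream": False,
                        "prompt": f"Context: {context}\n\nQuestion: {question}"})
print(r.json()["response"])
```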
- RAG with OLLAMA
Note: Before proceeding further you need to download and run Ollama.
- Ollama 0.1.42
`file://*` URLs are now allowed => ollama works with simple html files now
https://github.com/ollama/ollama/commit/1a29e9a879433fc55cf1...
- How to setup a free, self-hosted AI model for use with VS Code
This guide assumes you have a supported NVIDIA GPU and have installed Ubuntu 22.04 on the machine that will host the ollama docker image. AMD is now supported with ollama but this guide does not cover this type of setup.
- beginner guide to fully local RAG on entry-level machines
Nowadays, running powerful LLMs locally is ridiculously easy when using tools such as ollama. Just follow the installation instructions for your OS. From now on, we'll assume bash on Ubuntu.
- Codestral: Mistral's Code Model
- AIM Weekly 27 May 2024
- Devoxx Genie Plugin: an Update
I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models. Last week, I also added "👋🏼 Jan" support because HuggingFace has endorsed this provider out-of-the-box.
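For reference, the "REST interface to query the available models" mentioned here is Ollama's local HTTP API; a minimal sketch, assuming the default port and at least one pulled model:

```python
# List locally available Ollama models via the /api/tags endpoint.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for model in tags.get("models", []):
    # Each entry carries the model name plus metadata such as size and digest.
    print(model["name"], "-", model.get("size", "?"), "bytes")
```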
- Ask HN: Are companies self hosting LLMs?
What are some alternatives?
llama.cpp - LLM inference in C/C++
ggml - Tensor library for machine learning
gpt4all - gpt4all: run open-source LLMs anywhere
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
llama-cpp-python - Python bindings for llama.cpp
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many other model architectures. It can generate text, audio, video, and images, and also offers voice cloning capabilities.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
llama - Inference code for Llama models