open_llama
mlc-llm
| | open_llama | mlc-llm |
|---|---|---|
| Mentions | 52 | 89 |
| Stars | 7,193 | 16,955 |
| Growth | 1.3% | 7.1% |
| Activity | 5.3 | 9.9 |
| Latest commit | 10 months ago | 1 day ago |
| Language | Python | |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_llama
-
How Open is Generative AI? Part 2
The RedPajama dataset was adapted by the OpenLLaMA project at UC Berkeley, creating an open-source LLaMA equivalent without Meta's restrictions. The model's later version also included data from Falcon and StarCoder. This highlights the importance of open-source models and datasets, enabling free repurposing and innovation.
-
GPT-4 API general availability
OpenLLaMA is though. https://github.com/openlm-research/open_llama
All of these are surmountable problems.
We can beat OpenAI.
We can drain their moat.
-
Recommend me a computer for local AI for $500
#1: Open-source Reproduction of Meta AI's LLaMA: OpenLLaMA-13B released (trained for 1T tokens) | 0 comments
#2: #1 on HuggingFace.co's Leaderboard: Model Falcon 40B is now Free (Apache 2.0 License) | 0 comments
#3: Have you seen this repo? "running LLMs on consumer-grade hardware. compatible models: llama.cpp, alpaca.cpp, gpt4all.cpp, rwkv.cpp, whisper.cpp, vicuna, koala, gpt4all-j, cerebras and many others!" | 0 comments
-
Who is OpenLLaMA from?
Trained OpenLLaMA models are from the OpenLM Research team in collaboration with Stability AI: https://github.com/openlm-research/open_llama
-
Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
I can't use Llama or any model from the Llama family, due to license restrictions. Although now there's also the OpenLLaMA family of models, which have the same architecture but were trained on an open dataset (RedPajama, the same dataset the base model in my app was trained on). I'd love to pursue the direction of extended context lengths for on-device LLMs. Likely in a month or so, when I've implemented all the product features that I currently have on my backlog.
-
XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
https://github.com/openlm-research/open_llama#update-0615202...).
XGen-7B is probably the superior 7B model: it's trained on more tokens and with a longer default sequence length (although both can presumably adopt SuperHOT-style Position Interpolation to extend context), but larger models still probably perform better on an absolute basis.
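For context, Position Interpolation just rescales RoPE position indices so an extended context maps back into the trained range; a minimal sketch, with illustrative names and defaults:

```python
# Minimal sketch of RoPE Position Interpolation (the idea behind SuperHOT).
# Positions beyond the trained context are squeezed back into the trained
# range by a constant scale factor, so the model sees familiar angles.
import torch

def rope_angles(seq_len: int, dim: int, trained_len: int = 2048,
                base: float = 10000.0) -> torch.Tensor:
    """Return the (seq_len, dim/2) rotation angles for interpolated RoPE."""
    scale = min(1.0, trained_len / seq_len)  # < 1 only when extending context
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float() * scale  # interpolated indices
    return torch.outer(positions, inv_freq)  # feed into cos()/sin() tables
```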
-
MosaicML Agrees to Join Databricks to Power Generative AI for All
Compare it to OpenLLaMA. Its GitHub doesn't have a single script on how to do anything.
-
Databricks Strikes $1.3B Deal for Generative AI Startup MosaicML
OpenLLaMA models up to 13B parameters have now been trained on 1T tokens:
https://github.com/openlm-research/open_llama
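For anyone wanting to try them, a minimal sketch of loading a checkpoint with Hugging Face transformers (model ID per the repo; the dtype and device settings are illustrative):

```python
# Minimal sketch: OpenLLaMA 13B as a drop-in LLaMA checkpoint via transformers.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "openlm-research/open_llama_13b"
# The repo recommends the slow LlamaTokenizer over the fast auto tokenizer.
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Q: What is the largest animal?\nA:", return_tensors="pt")
output = model.generate(inputs.input_ids.to(model.device), max_new_tokens=32)
print(tokenizer.decode(output[0]))
```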
-
Containerized AI before Apocalypse
The deployed LLM binary, orca mini, has 3 billion parameters. Orca mini is based on the OpenLLaMA project.
-
AI β weekly megathread!
OpenLM Research released its 1T-token version of OpenLLaMA 13B, the permissively licensed open-source reproduction of Meta AI's LLaMA large language model.
mlc-llm
- FLaNK 04 March 2024
-
AI on an Android phone?
This one uses the GPU; it doesn't support Mistral yet: https://github.com/mlc-ai/mlc-llm
-
MLC vs llama.cpp
I have tried running Mistral 7B with MLC on my M1 (Metal), and it kept crashing (filed a GitHub issue with a description). Memory-inefficiency problems.
-
[Project] Scaling LLama2 70B with Multi NVIDIA and AMD GPUs under 3k budget
Project: https://github.com/mlc-ai/mlc-llm
- Scaling LLama2-70B with Multi Nvidia/AMD GPU
-
AMD May Get Across the CUDA Moat
For LLM inference, a shoutout to MLC LLM, which runs LLMs on basically any GPU API that's widely available: https://github.com/mlc-ai/mlc-llm
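For a sense of the developer surface, a minimal sketch using MLC LLM's Python `MLCEngine` (an OpenAI-style API; the model string is an illustrative prebuilt-weights repo, and the API may differ across versions):

```python
# Minimal sketch of MLC LLM's OpenAI-style Python engine.
# Assumes `pip install mlc-llm` plus a matching TVM build, and a
# prebuilt quantized model repo (the HF:// path below is illustrative).
from mlc_llm import MLCEngine

engine = MLCEngine("HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC")
for chunk in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What GPU APIs does MLC target?"}],
    stream=True,
):
    for choice in chunk.choices:
        print(choice.delta.content or "", end="", flush=True)
print()
engine.terminate()  # shut down the background engine loop
```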
-
ROCm Is AMD's #1 Priority, Executive Says
One of your problems might be that gfx1032 is not supported by AMD's ROCm packages, which have a laughably short list of supported hardware: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.h...
The usual workaround is to assign the closest supported architecture, e.g. gfx1030, so `HSA_OVERRIDE_GFX_VERSION=10.3.0` might help (a sketch follows at the end of this comment).
Also, it looks like some of your tested projects are OpenCL? I do something like `yay -S rocm-hip-sdk rocm-ml-sdk rocm-opencl-sdk` to cover all the bases.
My recent interest has been LLMs, and here's my general step-by-step for those (llama.cpp, exllama), for anyone interested: https://llm-tracker.info/books/howto-guides/page/amd-gpus
I didn't port the docs back in, but also here's a step-by-step w/ my adventures getting TVM/MLC working w/ an APU: https://github.com/mlc-ai/mlc-llm/issues/787
From my experience, ROCm is improving, but there's a good reason that Nvidia has 90% market share even at big price premiums.
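To make the override concrete, a minimal sketch assuming a ROCm build of PyTorch (the environment variable has to be set before the HIP runtime loads):

```python
# Minimal sketch: spoofing an unsupported RDNA2 part (gfx1032) as gfx1030
# so ROCm kernels load. Set the variable before any HIP-backed import.
import os
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"  # closest supported arch

import torch  # ROCm builds expose HIP devices through the torch.cuda API

print(torch.cuda.is_available())      # True if the override took effect
print(torch.cuda.get_device_name(0))  # e.g. an RX 6600-class GPU
```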
-
Show HN: Ollama for Linux β Run LLMs on Linux with GPU Acceleration
Maybe they're talking about https://github.com/mlc-ai/mlc-llm which is used for web-llm (https://github.com/mlc-ai/web-llm)? Seems to be using TVM.
-
Show HN: Fine-tune your own Llama 2 to replace GPT-3.5/4
You already have TVM for the cross-platform stuff;
see https://tvm.apache.org/docs/how_to/deploy/android.html
or https://octoml.ai/blog/using-swift-and-apache-tvm-to-develop...
or https://github.com/mlc-ai/mlc-llm
- Ask HN: Are you training and running custom LLMs and how are you doing it?
What are some alternatives?
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
llama.cpp - LLM inference in C/C++
ggml - Tensor library for machine learning
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
gpt4all - gpt4all: run open-source LLMs anywhere
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
gorilla - Gorilla: An API store for LLMs
llama-cpp-python - Python bindings for llama.cpp
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.