Top 23 llama Open-Source Projects
-
LocalAI
The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many other model architectures. It can generate text, audio, video, and images, and includes voice-cloning capabilities.
-
LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
-
petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
-
shell_gpt
A command-line productivity tool powered by large language models such as GPT-4 that helps you accomplish your tasks faster and more efficiently.
-
serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.
-
Huatuo-Llama-Med-Chinese
Repo for BenTsao (original name: HuaTuo, 华驼): instruction-tuning large language models with Chinese medical knowledge.
-
h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
-
InternGPT
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It currently supports DragGAN, ChatGPT, ImageBind, multimodal chat in the style of GPT-4, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM).
-
aichat
All-in-one AI-Powered CLI Chat & Copilot that integrates 10+ AI platforms, including OpenAI, Azure-OpenAI, Gemini, VertexAI, Claude, Mistral, Cohere, Ollama, Ernie, Qianwen...
-
xTuring
Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6
-
Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible. (We built a fine-tuning platform that makes it easy for researchers to get started with large models; we welcome open-source enthusiasts to submit any meaningful PR!)
Depends what model you want to train, and how well you want your computer to keep working while you're doing it.
If you're interested in large language models, there's a table of VRAM requirements for fine-tuning at [1], which says you could do the most basic type of fine-tuning on a 7B-parameter model with 8 GB of VRAM.
You'll find that training takes quite a long time, and since most of the GPU's power goes to training, your computer's responsiveness will suffer - even basic things like scrolling in your web browser or changing tabs use the GPU, after all.
Spend a bit more and you'll probably have a better time.
[1] https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...
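The claim above can be sanity-checked with a back-of-envelope estimate. The constants below (4-bit quantized base weights, a ~1.6x multiplier for adapters, optimizer state, and activations) are my own rough assumptions, not figures from the LLaMA-Factory table:

```python
def lora_vram_gb(params_billion: float, bits: int = 4, overhead: float = 1.6) -> float:
    """Rough VRAM (GB) needed to LoRA-tune a quantized model of the given size."""
    weights_gb = params_billion * bits / 8  # 1B params at 8 bits ~= 1 GB
    return round(weights_gb * overhead, 1)

for size_b in (7, 13, 70):
    print(f"{size_b}B at 4-bit: ~{lora_vram_gb(size_b)} GB")
```

For a 7B model this lands around 5-6 GB, consistent with the "8 GB VRAM" figure once you leave headroom for the desktop environment sharing the GPU.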
Project mention: LocalAI: Self-hosted OpenAI alternative reaches 2.14.0 | news.ycombinator.com | 2024-05-03
I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, a 4-bit GPTQ quantisation of Yiming Cui's Chinese-LLaMA-Alpaca 13B for GPU inference.
Project mention: Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real | news.ycombinator.com | 2023-12-10
Update: For anyone else facing the commercial-use question on LLaVA - it is licensed under Apache 2.0 and can be used commercially with attribution: https://github.com/haotian-liu/LLaVA/blob/main/LICENSE
Project mention: Ask HN: What are the capabilities of consumer grade hardware to work with LLMs? | news.ycombinator.com | 2023-08-03
I agree, I've definitely seen way more information about running image synthesis models like Stable Diffusion locally than I have LLMs. It's counterintuitive to me that Stable Diffusion takes less RAM than an LLM, especially considering it still needs the word vectors. Goes to show I know nothing.
I guess it comes down to the requirement of a very high end (or multiple) GPU that makes it impractical for most vs just running it in Colab or something.
Tho there are some efforts:
https://github.com/cocktailpeanut/dalai
So how long until we can do an open source Mistral Large?
We could possibly make a start on Petals [0] or some other open-source distributed training network?
[0] https://petals.dev/
https://github.com/TheR1D/shell_gpt?tab=readme-ov-file#shell...
Project mention: Ask HN: What are the drawbacks of caching LLM responses? | news.ycombinator.com | 2024-03-15
Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
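The core idea is small enough to sketch by hand. This is a hypothetical exact-match cache, not GPTCache's API - GPTCache layers semantic-similarity matching (embedding lookup) on top of this basic pattern:

```python
import hashlib

class ResponseCache:
    """Minimal exact-match cache for LLM responses (illustrative sketch only)."""

    def __init__(self):
        self._store = {}

    def _key(self, model: str, prompt: str) -> str:
        # Normalise whitespace and case so trivially different prompts still hit.
        normalised = " ".join(prompt.split()).lower()
        return hashlib.sha256(f"{model}:{normalised}".encode()).hexdigest()

    def get_or_call(self, model, prompt, llm_fn):
        key = self._key(model, prompt)
        if key not in self._store:
            self._store[key] = llm_fn(prompt)  # only pay for the call on a miss
        return self._store[key]
```

It also makes the drawbacks concrete: cached answers go stale when the model or system prompt changes, and exact matching misses near-duplicate prompts, which is exactly what semantic caching tries to fix.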
Project mention: Baichuan 7B reaches top of LLM leaderboard for its size (New foundation model, 4K tokens) | /r/LocalLLaMA | 2023-06-17
GitHub: baichuan-inc/baichuan-7B: A large-scale 7B pretraining language model developed by BaiChuan-Inc. (github.com)
Project mention: Show HN: I made an app to use local AI as daily driver | news.ycombinator.com | 2024-02-27
Project mention: K8sgpt-AI/k8sgpt: Giving Kubernetes Superpowers to everyone | news.ycombinator.com | 2024-03-31
Huatuo-Llama-Med-Chinese https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese
I'm using the Instruction Tuning with GPT-4 dataset, which is hosted on Huggingface.
Project mention: Paid dev gig: develop a basic LLM PEFT finetuning utility | /r/LocalLLaMA | 2023-06-02
Project mention: Language Models Are Super Mario: Absorbing Abilities from Homologous Models | news.ycombinator.com | 2024-04-06
For others like me who'd not heard of merging before, this seems to be one tool [0] (there may be others).
[0] https://github.com/arcee-ai/mergekit
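In its simplest form, merging homologous checkpoints is just element-wise averaging of matching parameters (a "model soup"); mergekit implements this linear method alongside fancier ones like TIES and DARE. A minimal sketch, using plain lists in place of tensors:

```python
def average_state_dicts(state_dicts):
    """Uniform element-wise mean of matching parameter vectors.

    All state dicts must come from the same architecture (homologous
    models), so every name maps to vectors of identical shape.
    """
    merged = {}
    for name in state_dicts[0]:
        rows = [sd[name] for sd in state_dicts]
        merged[name] = [sum(vals) / len(vals) for vals in zip(*rows)]
    return merged

# Two fine-tunes of the same base merge into their midpoint:
merged = average_state_dicts([{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}])
print(merged)
```

Real merge tools work on full model checkpoints and add per-layer weighting, sign resolution, and sparsification on top of this.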
You can also create an issue and ask the developers for help.
Project mention: Show HN: A shell CLI tool to predict your next command enhanced by LLM and RAG | news.ycombinator.com | 2024-04-16
Thanks for sharing this. I have been using aichat (https://github.com/sigoden/aichat) and shell_gpt for a while. Let's see how it works.
Project mention: I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts! | /r/machinelearningnews | 2023-09-07
Explore the project on GitHub here.
llama related posts
-
Xmake: A modern C/C++ build tool
-
LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
-
Better and Faster Large Language Models via Multi-Token Prediction
-
Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
-
GGML Flash Attention support merged into llama.cpp
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Index
What are some of the best open-source llama projects? This list will help you:
# | Project | Stars |
---|---|---|
1 | llama.cpp | 57,463 |
2 | LLaMA-Factory | 20,248 |
3 | LocalAI | 19,862 |
4 | Chinese-LLaMA-Alpaca | 17,348 |
5 | LLaVA | 16,333 |
6 | dalai | 13,051 |
7 | petals | 8,684 |
8 | shell_gpt | 8,303 |
9 | BELLE | 7,549 |
10 | PowerInfer | 6,969 |
11 | GPTCache | 6,430 |
12 | Baichuan-7B | 5,633 |
13 | serge | 5,543 |
14 | k8sgpt | 4,926 |
15 | Huatuo-Llama-Med-Chinese | 4,272 |
16 | GPT-4-LLM | 3,978 |
17 | h2o-llmstudio | 3,602 |
18 | mergekit | 3,427 |
19 | Anima | 3,139 |
20 | InternGPT | 3,128 |
21 | aichat | 2,871 |
22 | xTuring | 2,524 |
23 | Alpaca-CoT | 2,474 |