llama

Open-source projects categorized as llama

Top 23 llama Open-Source Projects

  • llama.cpp

    LLM inference in C/C++

  • Project mention: Xmake: A modern C/C++ build tool | news.ycombinator.com | 2024-05-04
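
    llama.cpp itself is a C/C++ library with a CLI, but a quick way to try it programmatically is through the community llama-cpp-python bindings (a separate project that wraps llama.cpp). The sketch below assumes you already have a GGUF model file; the path and sampling settings are placeholders, not project defaults.

    ```python
    # Minimal sketch using the llama-cpp-python bindings (a wrapper around llama.cpp).
    # The model path is a hypothetical placeholder for a GGUF file you have downloaded.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])
    ```
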
  • LLaMA-Factory

    Unify Efficient Fine-Tuning of 100+ LLMs

  • Project mention: Show HN: GPU Prices on eBay | news.ycombinator.com | 2024-02-23

    Depends what model you want to train, and how well you want your computer to keep working while you're doing it.

    If you're interested in large language models there's a table of vram requirements for fine-tuning at [1] which says you could do the most basic type of fine-tuning on a 7B parameter model with 8GB VRAM.

    You'll find that training takes quite a long time, and as a lot of the GPU power is going on training, your computer's responsiveness will suffer - even basic things like scrolling in your web browser or changing tabs uses the GPU, after all.

    Spend a bit more and you'll probably have a better time.

    [1] https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#...
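
    To make the quoted VRAM numbers concrete, here is a rough back-of-the-envelope sketch; the bytes-per-parameter figures are my own approximations for illustration, not values taken from the LLaMA-Factory table linked above.

    ```python
    # Rough VRAM estimate for fine-tuning a 7B model; the constants are assumptions.
    def estimate_vram_gb(params_billions: float, bytes_per_param: float, overhead_gb: float = 2.0) -> float:
        """Weights at the given precision plus a flat allowance for LoRA adapters,
        optimizer state, activations, and framework overhead."""
        weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB
        return weights_gb + overhead_gb

    print(estimate_vram_gb(7, 0.5))  # ~4-bit quantized weights: roughly 5.5 GB -> fits in 8 GB
    print(estimate_vram_gb(7, 2.0))  # fp16 weights: roughly 16 GB -> needs a much larger card
    ```
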

  • LocalAI

    The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. It runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.

  • Project mention: LocalAI: Self-hosted OpenAI alternative reaches 2.14.0 | news.ycombinator.com | 2024-05-03
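
    Because LocalAI exposes an OpenAI-compatible API, you can point the regular openai Python client at it. This is a minimal sketch; the base URL, port, and model name are assumptions that depend on how you configured and started the server.

    ```python
    # Sketch: talking to a local LocalAI server through the standard OpenAI client.
    # base_url, api_key, and the model name are assumptions about your local setup.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")
    resp = client.chat.completions.create(
        model="gpt-4",  # LocalAI maps this name to whatever local model you configured
        messages=[{"role": "user", "content": "Say hello from LocalAI"}],
    )
    print(resp.choices[0].message.content)
    ```
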
  • Chinese-LLaMA-Alpaca

    Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.

  • Project mention: Chinese-Alpaca-Plus-13B-GPTQ | /r/LocalLLaMA | 2023-05-30

    I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, a GPTQ-format 4-bit quantised version of Yiming Cui's Chinese-LLaMA-Alpaca 13B for GPU inference.
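
    For context on what "GPTQ-format 4-bit quantised" means in practice: with the optimum/auto-gptq integration, transformers can load such a checkpoint directly and keep the weights quantised on the GPU. The repo id below is a hypothetical placeholder, not the actual model referenced in the post.

    ```python
    # Sketch: loading a 4-bit GPTQ-quantised checkpoint with transformers
    # (requires the optimum and auto-gptq packages). The repo id is a placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "someuser/chinese-alpaca-13b-gptq"  # hypothetical repo id
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")  # weights stay 4-bit

    inputs = tokenizer("Please introduce yourself.", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
    ```
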

  • LLaVA

    [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

  • Project mention: Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real | news.ycombinator.com | 2023-12-10

    Update: For anyone else facing the commercial use question on LLaVA - it is licensed under Apache 2.0. Can be used commercially with attribution: https://github.com/haotian-liu/LLaVA/blob/main/LICENSE

  • dalai

    The simplest way to run LLaMA on your local machine

  • Project mention: Ask HN: What are the capabilities of consumer grade hardware to work with LLMs? | news.ycombinator.com | 2023-08-03

    I agree, I've definitely seen way more information about running image synthesis models like Stable Diffusion locally than I have LLMs. It's counterintuitive to me that Stable Diffusion takes less RAM than an LLM, especially considering it still needs the word vectors. Goes to show I know nothing.

    I guess it comes down to the requirement of a very high end (or multiple) GPU that makes it impractical for most vs just running it in Colab or something.

    Though there are some efforts:

    https://github.com/cocktailpeanut/dalai

  • petals

    🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

  • Project mention: Mistral Large | news.ycombinator.com | 2024-02-26

    So how long until we can do an open source Mistral Large?

    We could make a start on Petals or some other open source distributed training network cluster possibly?

    [0] https://petals.dev/
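
    The client-side pattern in the Petals README looks roughly like the sketch below: a distributed model class fans the transformer layers out over volunteer peers, and generation then works as with an ordinary transformers model. The model id is an example; check petals.dev for the checkpoints the public swarm currently serves.

    ```python
    # Sketch of the Petals client-side API; the model id is an assumption and may
    # differ from what the public swarm currently hosts.
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "petals-team/StableBeluga2"  # example id; see petals.dev for current options
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)  # layers run on remote peers

    inputs = tokenizer("A quick fact about distributed inference:", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0]))
    ```
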

  • shell_gpt

    A command-line productivity tool powered by large language models such as GPT-4 that helps you accomplish your tasks faster and more efficiently.

  • Project mention: Oh My Zsh | news.ycombinator.com | 2024-01-22

    https://github.com/TheR1D/shell_gpt?tab=readme-ov-file#shell...

  • BELLE

    BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese conversational LLM) (by LianjiaTech)

  • PowerInfer

    High-speed Large Language Model Serving on PCs with Consumer-grade GPUs

  • Project mention: FLaNK 25 December 2023 | dev.to | 2023-12-26
  • GPTCache

    Semantic cache for LLMs. Fully integrated with LangChain and llama_index.

  • Project mention: Ask HN: What are the drawbacks of caching LLM responses? | news.ycombinator.com | 2024-03-15

    Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
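
    Addressing the question in that thread: GPTCache wraps the OpenAI client with a semantic cache, so repeated or similar prompts are answered from the local cache instead of hitting the API. This is a minimal sketch along the lines of the project's quickstart; exact module paths and defaults may have shifted between releases.

    ```python
    # Sketch based on GPTCache's quickstart: the adapter mirrors the (legacy) openai
    # module but checks a local semantic cache before calling the API.
    from gptcache import cache
    from gptcache.adapter import openai

    cache.init()            # default exact-match cache; embedding-based similarity can be plugged in
    cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

    for _ in range(2):      # the second, identical call should be served from the cache
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "What are the drawbacks of caching LLM responses?"}],
        )
        print(resp["choices"][0]["message"]["content"][:80])
    ```
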

  • Baichuan-7B

    A large-scale 7B pretraining language model developed by BaiChuan-Inc.

  • Project mention: Baichuan 7B reaches top of LLM leaderboard for its size (New foundation model, 4K tokens) | /r/LocalLLaMA | 2023-06-17

    GitHub: baichuan-inc/baichuan-7B: A large-scale 7B pretraining language model developed by BaiChuan-Inc. (github.com)
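
    Baichuan-7B is distributed as a Hugging Face checkpoint with custom modeling code, so loading it follows the usual transformers pattern with trust_remote_code. A minimal sketch; the dtype and device settings are assumptions about your hardware.

    ```python
    # Sketch: loading Baichuan-7B with transformers. trust_remote_code is needed because
    # the repo ships its own modeling code; dtype/device settings are assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "baichuan-inc/Baichuan-7B"
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
    )

    inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
    ```
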

  • serge

    A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.

  • Project mention: Show HN: I made an app to use local AI as daily driver | news.ycombinator.com | 2024-02-27
  • k8sgpt

    Giving Kubernetes Superpowers to everyone

  • Project mention: K8sgpt-AI/k8sgpt: Giving Kubernetes Superpowers to everyone | news.ycombinator.com | 2024-03-31
  • Huatuo-Llama-Med-Chinese

    Repo for BenTsao [original name: HuaTuo (华驼)]: instruction-tuning large language models with Chinese medical knowledge.

  • Project mention: Local medical LLM | /r/LocalLLaMA | 2023-06-09

    Huatuo-Llama-Med-Chinese https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese

  • GPT-4-LLM

    Instruction Tuning with GPT-4

  • Project mention: Fine-tuning LLMs with LoRA: A Gentle Introduction | dev.to | 2023-08-22

    I'm using the Instruction Tuning with GPT-4 dataset, which is hosted on Huggingface.

  • h2o-llmstudio

    H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/

  • Project mention: Paid dev gig: develop a basic LLM PEFT finetuning utility | /r/LocalLLaMA | 2023-06-02
  • mergekit

    Tools for merging pretrained large language models.

  • Project mention: Language Models Are Super Mario: Absorbing Abilities from Homologous Models | news.ycombinator.com | 2024-04-06

    For others like me who’d not heard of merging before, this seems to be one tool[0] (there may be others)

    [0] https://github.com/arcee-ai/mergekit

  • Anima

    33B Chinese LLM, DPO, QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU

  • Project mention: AirLLM | news.ycombinator.com | 2023-12-28
  • InternGPT

    InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, GPT-4-style multimodal chat, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM).

  • Project mention: How do I use the programs on Github? | /r/github | 2023-06-16

    You can also create an issue and ask the developers for help.

  • aichat

    All-in-one AI-Powered CLI Chat & Copilot that integrates 10+ AI platforms, including OpenAI, Azure-OpenAI, Gemini, VertexAI, Claude, Mistral, Cohere, Ollama, Ernie, Qianwen...

  • Project mention: Show HN: A shell CLI tool to predict your next command enhanced by LLM and RAG | news.ycombinator.com | 2024-04-16

    thanks for sharing this, I have been using aichat (https://github.com/sigoden/aichat) and shell_gpt for a while. Let's see how it works.

  • xTuring

    Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

  • Project mention: I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts! | /r/machinelearningnews | 2023-09-07

    Explore the project on GitHub here.
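
    The "5 lines of code" claim in the post maps onto roughly the workflow below, adapted from the xTuring README. The dataset path and model key are placeholders, and the exact import paths may vary between versions.

    ```python
    # Sketch of the xTuring fine-tuning flow; the dataset path and model key are
    # placeholders, and import paths follow the README at the time of writing.
    from xturing.datasets import InstructionDataset
    from xturing.models import BaseModel

    dataset = InstructionDataset("./alpaca_data")  # hypothetical local instruction dataset
    model = BaseModel.create("llama_lora")         # LLaMA with a LoRA adapter
    model.finetune(dataset=dataset)
    print(model.generate(texts=["Why are LLMs becoming so important?"]))
    ```
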

  • Alpaca-CoT

    We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and to integrate as many LLM-related technologies as possible.

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

Index

What are some of the best open-source llama projects? This list will help you:

Rank Project Stars
1 llama.cpp 57,463
2 LLaMA-Factory 20,248
3 LocalAI 19,862
4 Chinese-LLaMA-Alpaca 17,348
5 LLaVA 16,333
6 dalai 13,051
7 petals 8,684
8 shell_gpt 8,303
9 BELLE 7,549
10 PowerInfer 6,969
11 GPTCache 6,430
12 Baichuan-7B 5,633
13 serge 5,543
14 k8sgpt 4,926
15 Huatuo-Llama-Med-Chinese 4,272
16 GPT-4-LLM 3,978
17 h2o-llmstudio 3,602
18 mergekit 3,427
19 Anima 3,139
20 InternGPT 3,128
21 aichat 2,871
22 xTuring 2,524
23 Alpaca-CoT 2,474
