llama2

Open-source projects categorized as llama2
Topics: llm llama chatgpt gpt ai

Top 23 llama2 Open-Source Projects

  • open-interpreter

    A natural language interface for computers

  • Project mention: OpenInterpreter – Natural language interface to your computer | news.ycombinator.com | 2024-04-23
  • jan

    Jan is an open-source alternative to ChatGPT that runs 100% offline on your computer, with support for multiple engines (llama.cpp, TensorRT-LLM).

  • Project mention: Introducing Jan | dev.to | 2024-05-05

    As we continue this blog series, let's explore a fully open-source alternative to LM Studio - Jan, a project from Southeast Asia.

  • LLaVA

    [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

  • Project mention: Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real | news.ycombinator.com | 2023-12-10

    Update: For anyone else facing the commercial use question on LLaVA - it is licensed under Apache 2.0. Can be used commercially with attribution: https://github.com/haotian-liu/LLaVA/blob/main/LICENSE

  • h2ogpt

    Private chat with a local GPT over documents, images, video, etc. 100% private, Apache 2.0. Supports Ollama, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/

  • Project mention: Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023? | news.ycombinator.com | 2023-12-24

    As others have said you want RAG.

    The most feature-complete implementation I've seen is h2ogpt[0] (not affiliated).

    The code is kind of a mess (most of the logic is in an ~8,000-line Python file), but it supports ingestion of everything from YouTube videos to docx, pdf, etc., either offline or from the web interface. It uses LangChain and a ton of additional open-source libraries under the hood. It can run directly on Linux, via Docker, or with one-click installers for Mac and Windows.

    It has various model-hosting implementations built in (transformers, exllama, llama.cpp), as well as support for model-serving frameworks like vLLM and HF TGI, or just OpenAI.

    You can also define your preferred embedding model along with various other parameters, but I've found the out-of-the-box defaults to be pretty sane and usable.

    [0] - https://github.com/h2oai/h2ogpt
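
    For readers newer to the pattern: the "you want RAG" advice boils down to retrieve-then-generate: embed your documents, find the chunks most similar to the question, and put them into the prompt. The sketch below only illustrates that idea in plain Python (the sentence-transformers model name and the sample documents are arbitrary); it is not h2ogpt's actual API.

    ```python
    # Minimal retrieve-then-generate (RAG) sketch; illustrative only, not h2ogpt's API.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "h2ogpt ingests PDFs, docx files, and YouTube transcripts.",
        "Llama 2 is a family of open-weight language models.",
        "InfluxDB is a time series database.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model would do
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Return the k documents most similar to the question (cosine similarity)."""
        q_vec = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vecs @ q_vec
        return [docs[i] for i in np.argsort(-scores)[:k]]

    question = "What file types can h2ogpt ingest?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # `prompt` would then go to whichever backend you run (llama.cpp, vLLM, OpenAI, ...).
    print(prompt)
    ```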

  • petals

    🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

  • Project mention: Mistral Large | news.ycombinator.com | 2024-02-26

    So how long until we can do an open-source Mistral Large?

    Could we make a start on Petals[0] or some other open-source distributed training cluster?

    [0] https://petals.dev/
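
    To make "BitTorrent-style" concrete: Petals exposes a Transformers-like Python interface in which only a small client-side part of the model runs locally, while the transformer blocks are served by volunteer peers in a public swarm. A rough sketch of that usage pattern follows; the model name is an assumption and depends on what the swarm is currently serving.

    ```python
    # Sketch of Petals' distributed inference interface (see petals.dev for current docs).
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "petals-team/StableBeluga2"  # assumed; pick a model the public swarm serves

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Only the embeddings/head load locally; the transformer blocks run on remote peers.
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("A llama walked into a bar and", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0]))
    ```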

  • Baichuan2

    A series of large language models developed by Baichuan Intelligent Technology

  • Project mention: Baichuan 2 | news.ycombinator.com | 2023-10-12
  • h2o-llmstudio

    H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/

  • Project mention: Paid dev gig: develop a basic LLM PEFT finetuning utility | /r/LocalLLaMA | 2023-06-02
  • opencompass

    OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama 3, Mistral, InternLM2, GPT-4, Llama 2, Qwen, GLM, Claude, etc.) over 100+ datasets.

  • Project mention: Show HN: Times faster LLM evaluation with Bayesian optimization | news.ycombinator.com | 2024-02-13

    Fair question.

    Evaluation refers to the phase after training that checks whether the training went well.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small, domain-specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation; however, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them take advantage of the fact that many evaluation queries may be similar: they all evaluate every given query. And that's where this project might come in handy.
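
    The speed-up idea mentioned above (avoid re-querying the model for near-identical evaluation prompts) can be illustrated with a toy cache keyed on a normalized prompt. This is only a sketch of the general principle; the linked project uses far smarter query selection than exact-match caching.

    ```python
    # Toy illustration: skip redundant evaluation queries via a normalized-prompt cache.
    from functools import lru_cache

    def normalize(prompt: str) -> str:
        """Collapse whitespace and case so trivially similar prompts share a cache entry."""
        return " ".join(prompt.lower().split())

    @lru_cache(maxsize=None)
    def cached_generate(normalized_prompt: str) -> str:
        # Placeholder for the slow model call being evaluated.
        return f"model output for: {normalized_prompt}"

    eval_set = [
        "What is 2 + 2?",
        "  what is 2 + 2?  ",            # near-duplicate: answered from the cache
        "Translate 'hello' to French.",
    ]

    answers = [cached_generate(normalize(p)) for p in eval_set]
    print(f"{len(eval_set)} queries, {cached_generate.cache_info().misses} actual model calls")
    ```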

  • casibase

    ⚡️Open-source AI LangChain-like RAG (Retrieval-Augmented Generation) knowledge database with web UI and Enterprise SSO⚡️, supports OpenAI, Azure, LLaMA, Google Gemini, HuggingFace, Claude, Grok, etc., chat bot demo: https://demo.casibase.com, admin UI demo: https://demo-admin.casibase.com

  • Project mention: Open-source AI knowledge database with web UI and Enterprise SSO | news.ycombinator.com | 2023-12-21
  • api-for-open-llm

    OpenAI-style API for open large language models, letting you use open LLMs just like ChatGPT. Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. (A unified backend interface for open-source large models.)

  • Project mention: FLaNK Stack Weekly for 14 Aug 2023 | dev.to | 2023-08-14
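
    Because api-for-open-llm mimics the OpenAI API, existing OpenAI client code can usually be repointed at it by changing the base URL. Below is a sketch using the official openai Python client (v1+); the host, port, and model name are assumptions that depend on how the server is deployed.

    ```python
    # Calling an OpenAI-compatible self-hosted server with the standard openai client.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # assumed address of your api-for-open-llm server
        api_key="not-needed-locally",         # most self-hosted servers ignore the key
    )

    response = client.chat.completions.create(
        model="llama-2-7b-chat",  # assumed: whatever model name the server exposes
        messages=[{"role": "user", "content": "Summarize what Llama 2 is in one sentence."}],
    )
    print(response.choices[0].message.content)
    ```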
  • SolidGPT

    Developer AI Persona Search Agent

  • Project mention: Best coding AI to use with entire codebase | /r/ChatGPTCoding | 2023-12-10

    Another thing to try is one of the repositories like SolidGPT: https://github.com/AI-Citizen/SolidGPT

  • cortex

    Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan (by janhq)

  • Project mention: Introducing Jan | dev.to | 2024-05-05

    Jan incorporates a lightweight, built-in inference server called Nitro. Nitro supports both llama.cpp and NVIDIA's TensorRT-LLM engines, which means many open LLMs in the GGUF format are supported. Jan's Model Hub is designed for easy installation of pre-configured models, but it also allows you to install virtually any model from Hugging Face, or even your own.

  • DemoGPT

    Create 🦜️🔗 LangChain apps by just using prompts. 🌟 Star to support our work!

  • Project mention: Llama 2 Code Interpreter | news.ycombinator.com | 2023-07-23
  • llm-applications

    A comprehensive guide to building RAG-based LLM applications for production.

  • Project mention: A comprehensive guide to building RAG-based LLM applications for production | news.ycombinator.com | 2023-10-25
  • enchanted

    Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama 2, Mistral, or Vicuna via Ollama.

  • Project mention: FLaNK Stack Weekly 19 Feb 2024 | dev.to | 2024-02-19
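
    Enchanted itself is just a front end; the inference happens on an Ollama server you host. As a rough sketch of what such a client does under the hood, here is a direct, non-streaming call to Ollama's local REST API (11434 is Ollama's default port, and the example assumes `ollama pull llama2` has already been run).

    ```python
    # Minimal non-streaming request to a locally running Ollama server.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={
            "model": "llama2",                  # assumes the llama2 model has been pulled
            "prompt": "Why do llamas hum?",
            "stream": False,                    # return a single JSON object, not a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```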
  • refact

    WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding

  • Project mention: RefactAI: Use best-in-class LLMs for coding in your IDE | news.ycombinator.com | 2024-04-30
  • llama2.c

    Llama 2 Everywhere (L2E) (by trholding)

  • Project mention: What would an LLM OS look like? | news.ycombinator.com | 2024-03-14

    Nice article. We did a demo for booting to an LLM and also running it as a kernel module: https://github.com/trholding/llama2.c The whole thing was funny and buggy, but since then we have been developing in stealth, even trying to raise VC capital. Our goal is to make computers like a buddy you can talk to, explain things to, and get work done with, kind of like a Jarvis. The way we interact with computers hasn't changed for decades; it's time to disrupt that to get more productivity.

    I also believe that with this approach one can avoid installing different applications, since the computer (models) can emulate the activities done through applications. For example, cutting and pasting a dog from a photo onto a banner for a dog-racing competition would not require you to be a graphics artist or use tools like Photoshop or GIMP. You could tell the computer, and it would use Segment Anything to cut out the dog, use text and Stable Diffusion for the banner text and background, paste the dog in, seek your approval, search for the fastest, best, and cheapest banner-printing service, and submit it. Ten years ago this would have been sci-fi, but now it is a possibility. We just need to connect the dots, then package and polish it to make it a good product.

  • LLMCompiler

    [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling

  • Project mention: FLaNK Weekly 18 Dec 2023 | dev.to | 2023-12-18
  • tlm

    Local CLI Copilot, powered by CodeLLaMa. 💻🦙 (by yusufcanb)

  • Project mention: What AI assistants are already bundled for Linux? | news.ycombinator.com | 2024-03-01

    Perhaps this: https://github.com/yusufcanb/tlm?

    It is not distro-bundled (yet), but I have it running on Fedora Linux 39 on a NUC with 16 GB of RAM. Performance is good enough for me.

  • Get-Things-Done-with-Prompt-Engineering-and-LangChain

    LangChain & Prompt Engineering tutorials on Large Language Models (LLMs) such as ChatGPT with custom data. Jupyter notebooks on loading and indexing data, creating prompt templates, CSV agents, and using retrieval QA chains to query the custom data. Projects for using a private LLM (Llama 2) for chat with PDF files, tweets sentiment analysis.

  • Project mention: Get-Things-Done-with-Prompt-Engineering-and-LangChain: NEW Data - star count:617.0 | /r/algoprojects | 2023-12-10
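
    As a taste of what the notebooks cover: a LangChain prompt template is a parameterized string that gets filled in before being sent to the model. A minimal sketch follows (the template text and values are illustrative).

    ```python
    # A minimal LangChain prompt template, the building block the tutorials start from.
    from langchain.prompts import PromptTemplate

    template = PromptTemplate.from_template(
        "You are a helpful assistant.\n"
        "Answer the following question about {topic} in two sentences:\n{question}"
    )

    prompt = template.format(topic="Llama 2", question="What model sizes are available?")
    print(prompt)  # the rendered string is what gets passed to an LLM or a chain
    ```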
  • autollm

    Ship RAG-based LLM web apps in seconds.

  • Project mention: FLaNK Stack Weekly 06 Nov 2023 | dev.to | 2023-11-06
  • chatd

    Chat with your documents using local AI

  • Project mention: feed pdf files into an LLM for question answering tasks | /r/LocalLLaMA | 2023-11-08

    IYH use chatd

  • distributed-llama

    Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.

  • Project mention: Distributed Grok-1 (314B) | news.ycombinator.com | 2024-04-15
NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

llama2 related posts

  • Introducing Jan

    4 projects | dev.to | 5 May 2024
  • Limitless: Personalized AI powered by what you've seen, said, and heard

    1 project | news.ycombinator.com | 15 Apr 2024
  • Distributed Grok-1 (314B)

    1 project | news.ycombinator.com | 15 Apr 2024
  • AI enthusiasm - episode #2🚀

    2 projects | dev.to | 11 Apr 2024
  • Do you Know! Llama ?

    1 project | dev.to | 11 Apr 2024
  • Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?

    11 projects | news.ycombinator.com | 1 Apr 2024
  • Half-Quadratic Quantization of Large Machine Learning Models

    1 project | news.ycombinator.com | 14 Mar 2024

Index

What are some of the best open-source llama2 projects? This list will help you:

# Project Stars
1 open-interpreter 48,604
2 jan 17,877
3 LLaVA 16,333
4 h2ogpt 10,458
5 petals 8,684
6 Baichuan2 3,936
7 h2o-llmstudio 3,602
8 opencompass 2,559
9 casibase 2,151
10 api-for-open-llm 1,999
11 SolidGPT 1,945
12 cortex 1,600
13 DemoGPT 1,573
14 llm-applications 1,504
15 enchanted 1,579
16 refact 1,422
17 llama2.c 1,385
18 LLMCompiler 1,083
19 tlm 1,041
20 Get-Things-Done-with-Prompt-Engineering-and-LangChain 958
21 autollm 914
22 chatd 797
23 distributed-llama 756
