Python llama

Open-source Python projects categorized as llama

Top 23 Python llama Projects

  • LLaMA-Factory

    Unify Efficient Fine-Tuning of 100+ LLMs

  • Project mention: FLaNK-AIM Weekly 06 May 2024 | dev.to | 2024-05-06
  • Chinese-LLaMA-Alpaca

    Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment.

  • LLaVA

    [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

  • Project mention: PaliGemma: Open-Source Multimodal Model by Google | news.ycombinator.com | 2024-05-15

    Here's a tutorial https://wandb.ai/byyoung3/ml-news/reports/How-to-Fine-Tune-L...

    There's not really a super easy to use software solution yet, but a few different ones have cropped up. Right now you'll have to read papers to get the training recipes.

    - https://github.com/haotian-liu/LLaVA/blob/main/scripts/finet...

  • petals

    🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

  • Project mention: Chameleon: Meta's New Multi-Modal LLM | news.ycombinator.com | 2024-05-21

    Things like [petals](https://github.com/bigscience-workshop/petals) exist, distributed computing over willing participants. Right now corporate cash is being rammed into the space so why not snap it up while you can, but the moment it dries up projects like petals will see more of the love they deserve.

    I envision a future where crypto-style booms happen over tokens useful for purchasing priority computational time, which is earned by providing said computational time. This way researchers can daisy-chain their independent smaller rigs together into something with gargantuan capabilities.
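
    Petals exposes a Hugging Face-style Python API for running models over the public swarm. A minimal sketch, following the pattern in the project's README (the model name below is an assumption and must currently be hosted on the swarm for this to work):

        from transformers import AutoTokenizer
        from petals import AutoDistributedModelForCausalLM

        # Model name is an assumption; any model hosted on the public Petals swarm works.
        model_name = "petals-team/StableBeluga2"

        tokenizer = AutoTokenizer.from_pretrained(model_name)
        # Transformer blocks are served by volunteer peers over the network
        # instead of being loaded into local RAM/VRAM.
        model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

        inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
        outputs = model.generate(inputs, max_new_tokens=32)
        print(tokenizer.decode(outputs[0]))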

  • shell_gpt

    A command-line productivity tool powered by AI large language models such as GPT-4 that helps you accomplish your tasks faster and more efficiently.

  • Project mention: Oh My Zsh | news.ycombinator.com | 2024-01-22

    https://github.com/TheR1D/shell_gpt?tab=readme-ov-file#shell...

  • GPTCache

    Semantic cache for LLMs. Fully integrated with LangChain and llama_index.

  • Project mention: Ask HN: What are the drawbacks of caching LLM responses? | news.ycombinator.com | 2024-03-15

    Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
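
    For a sense of how GPTCache drops in front of an OpenAI call, here is a minimal sketch following the project's quick-start pattern (treat the exact imports as assumptions if your installed version differs; the default init is exact-match, with embeddings enabling semantic matching):

        from gptcache import cache
        from gptcache.adapter import openai  # drop-in wrapper around the openai module

        cache.init()            # default data manager; configure embeddings for semantic matching
        cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

        # The first call goes to the API; a repeated (or, once embeddings are
        # configured, semantically similar) question is answered from the cache.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "What is a semantic cache?"}],
        )
        print(response["choices"][0]["message"]["content"])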

  • Baichuan-7B

    A large-scale 7B pretrained language model developed by BaiChuan-Inc.

  • Project mention: Baichuan 7B reaches top of LLM leaderboard for its size (New foundation model 4K tokens) | /r/LocalLLaMA | 2023-06-17

    GitHub: baichuan-inc/baichuan-7B: A large-scale 7B pretraining language model developed by BaiChuan-Inc. (github.com)
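
    Baichuan-7B loads through the standard Hugging Face transformers API. A minimal sketch, assuming the hub ID baichuan-inc/Baichuan-7B and that trust_remote_code is acceptable in your environment:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        repo = "baichuan-inc/Baichuan-7B"

        # trust_remote_code is needed because the repo ships custom modeling code.
        tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            repo,
            torch_dtype=torch.float16,
            device_map="auto",
            trust_remote_code=True,
        )

        # Base (non-chat) model, so prompt it as a plain text continuation.
        inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=32)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))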

  • Huatuo-Llama-Med-Chinese

    Repo for BenTsao [original name: HuaTuo (华驼)]: instruction-tuning large language models with Chinese medical knowledge.

  • mergekit

    Tools for merging pretrained large language models.

  • Project mention: Language Models Are Super Mario: Absorbing Abilities from Homologous Models | news.ycombinator.com | 2024-04-06

    For others like me who’d not heard of merging before, this seems to be one tool[0] (there may be others)

    [0] https://github.com/arcee-ai/mergekit
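
    mergekit itself is driven by YAML configs and a small CLI, but the core idea is easy to show in plain PyTorch. The sketch below is a conceptual illustration of the simplest merge method (linear interpolation), not mergekit's actual API:

        import torch

        def linear_merge(state_dict_a, state_dict_b, alpha=0.5):
            """Naive linear merge of two fine-tunes that share one architecture.

            mergekit implements this plus more involved methods (SLERP, TIES,
            DARE) with per-tensor weighting; this only shows the basic idea.
            """
            merged = {}
            for name, tensor_a in state_dict_a.items():
                tensor_b = state_dict_b[name]
                merged[name] = alpha * tensor_a + (1.0 - alpha) * tensor_b
            return merged

        # Usage sketch: load two compatible checkpoints, merge, and save.
        # state_a = torch.load("finetune_a.pt"); state_b = torch.load("finetune_b.pt")
        # torch.save(linear_merge(state_a, state_b, alpha=0.7), "merged.pt")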

  • h2o-llmstudio

    H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/

  • InternGPT

    InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It currently supports DragGAN, ChatGPT, ImageBind, GPT-4-style multimodal chat, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM).

  • xTuring

    Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6

  • Project mention: I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts! | /r/machinelearningnews | 2023-09-07

    Explore the project on GitHub here.
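
    The "5 lines of code" refers to xTuring's high-level API. A sketch following the pattern in the project's README (the dataset path and the "llama_lora" model key are illustrative and may differ across versions):

        from xturing.datasets import InstructionDataset
        from xturing.models import BaseModel

        dataset = InstructionDataset("./alpaca_data")   # folder of instruction/response pairs
        model = BaseModel.create("llama_lora")          # LLaMA with LoRA adapters
        model.finetune(dataset=dataset)

        print(model.generate(texts=["Explain LoRA fine-tuning in one sentence."]))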

  • Video-LLaMA

    [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

  • Project mention: Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | /r/aipromptprogramming | 2023-06-19
  • AGiXT

    AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.

  • Project mention: Conversational "memory loss"? | /r/LocalLLaMA | 2023-07-07

    If you are more interested in AI assistants, check out AGiXT. It has some really cool features, but it is under heavy development. Not everything works yet, and updates sometimes break already-working functions. But it is still far better than babyAGI and other proofs of concept.

  • EasyLM

    Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.

  • Project mention: Maxtext: A simple, performant and scalable Jax LLM | news.ycombinator.com | 2024-04-23
  • fish-speech

    A brand-new text-to-speech (TTS) solution.

  • Project mention: Fish Speech TTS: clone OpenAI TTS in 30 minutes | news.ycombinator.com | 2024-05-22

    While we are still figuring out ways to improve the agent's emotional response to the OpenAI GPT-4 level, we have already made significant progress in matching OpenAI's TTS performance. To begin this experiment, we collected 10 hours of OpenAI TTS data to perform supervised fine-tuning (SFT) on both the LLM and VITS models, which took approximately 30 minutes. After that, we used 15 seconds of audio as a prompt during inference.

    Demos Available: https://firefly-ai.notion.site/OpenAI-Examples-34975ae263a9496c84e89fb7b1ea25a4?pvs=4

    As you can see, the model's emotion, rhythm, accent, and timbre match the OpenAI speakers, though there is some degradation in audio quality, which we are working on. To avoid any legal issues, we are unable to release the fine-tuned model, but I believe everyone can tune Fish Speech to this level within hours and for around $20.

    Our experiment shows that with only 25 seconds of prompts (few-shot learning), without any fine-tuning, the model can mimic most behaviors except for how it reads numbers. To the best of our knowledge, you can clone how someone speaks in English, Chinese, and Japanese with 30 minutes of data using this framework.

    Repo: https://github.com/fishaudio/fish-speech

  • api-for-open-llm

    OpenAI-style API for open large language models, so you can use LLMs just like ChatGPT. Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. (a unified backend API for open-source large language models).

  • Project mention: FLaNK Stack Weekly for 14 Aug 2023 | dev.to | 2023-08-14
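
    Because api-for-open-llm speaks the OpenAI wire format, any OpenAI client can point at it. A sketch using the official openai Python package (the base URL, port, and model name are assumptions, not project defaults):

        from openai import OpenAI

        # Point the standard client at the locally hosted server instead of api.openai.com.
        client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

        response = client.chat.completions.create(
            model="chatglm3",  # whichever open model the server was launched with
            messages=[{"role": "user", "content": "Hello!"}],
        )
        print(response.choices[0].message.content)
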
  • mPLUG-Owl

    mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model

  • lightllm

    LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.

  • Project mention: FLaNK Weekly 31 December 2023 | dev.to | 2023-12-31
  • Multimodal-GPT

    Multimodal-GPT

  • safe-rlhf

    Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

  • LLMCompiler

    [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling

  • Project mention: FLaNK Weekly 18 Dec 2023 | dev.to | 2023-12-18
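
    LLMCompiler's core idea is to plan independent tool calls and execute them concurrently rather than one ReAct-style step at a time. The snippet below is a conceptual illustration of that idea with asyncio, not LLMCompiler's actual API:

        import asyncio

        # Stand-in "tools"; in practice these would be real API or search calls.
        async def get_weather(city: str) -> str:
            await asyncio.sleep(1)
            return f"Weather report for {city}"

        async def get_flights(route: str) -> str:
            await asyncio.sleep(1)
            return f"Flight options for {route}"

        async def main():
            # The two calls have no data dependency, so a planner can dispatch
            # them in parallel and join the results for the final answer.
            weather, flights = await asyncio.gather(
                get_weather("Tokyo"),
                get_flights("SFO-NRT"),
            )
            print(weather, flights)

        asyncio.run(main())
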
  • lag-llama

    Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting

  • Project mention: Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting | news.ycombinator.com | 2024-02-26
NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

Index

What are some of the best open-source llama projects in Python? This list will help you:

Rank  Project                     Stars
   1  LLaMA-Factory              24,388
   2  Chinese-LLaMA-Alpaca       17,745
   3  LLaVA                      17,404
   4  petals                      8,819
   5  shell_gpt                   8,644
   6  GPTCache                    6,595
   7  Baichuan-7B                 5,649
   8  Huatuo-Llama-Med-Chinese    4,369
   9  mergekit                    3,832
  10  h2o-llmstudio               3,722
  11  InternGPT                   3,154
  12  xTuring                     2,545
  13  Video-LLaMA                 2,540
  14  AGiXT                       2,514
  15  EasyLM                      2,274
  16  fish-speech                 2,240
  17  api-for-open-llm            2,111
  18  mPLUG-Owl                   1,993
  19  lightllm                    1,960
  20  Multimodal-GPT              1,425
  21  safe-rlhf                   1,199
  22  LLMCompiler                 1,158
  23  lag-llama                   1,051
