Top 23 Python llama Projects
-
LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
-
petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
-
shell_gpt
A command-line productivity tool powered by AI large language models such as GPT-4 that helps you accomplish your tasks faster and more efficiently.
-
-
-
Huatuo-Llama-Med-Chinese
Repo for BenTsao [original name: HuaTuo (华驼)]: instruction-tuning large language models with Chinese medical knowledge.
-
h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
-
InternGPT
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, GPT-4-style multimodal chat, SAM, interactive image editing, and more. Try it at igpt.opengvlab.com.
-
xTuring
Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6
-
Video-LLaMA
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
-
AGiXT
AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
-
EasyLM
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
-
api-for-open-llm
An OpenAI-style API for open large language models: use open LLMs just as you would ChatGPT. Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. (a unified backend interface for open-source large models).
-
lightllm
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
-
safe-rlhf
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
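Several projects above, such as api-for-open-llm, expose an OpenAI-compatible HTTP endpoint, so any OpenAI client can talk to a locally hosted model. A minimal sketch of the request body such a server expects; the endpoint URL and model name here are placeholders for whatever your local deployment actually serves:

```python
import json

# Sketch: the JSON body an OpenAI-compatible chat endpoint expects.
# The model name is a placeholder; use whichever model your server hosts.
payload = {
    "model": "chatglm3",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}
body = json.dumps(payload)

# POST `body` to e.g. http://localhost:8000/v1/chat/completions with
# the header Content-Type: application/json (via requests, curl, or
# the official openai client with its base_url pointed at the server).
```

Because the wire format matches OpenAI's, existing tooling built against ChatGPT works against the open model unchanged.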
Project mention: PaliGemma: Open-Source Multimodal Model by Google | news.ycombinator.com | 2024-05-15
Here's a tutorial: https://wandb.ai/byyoung3/ml-news/reports/How-to-Fine-Tune-L...
There isn't a truly easy-to-use software solution yet, but a few different ones have cropped up. For now you'll have to read the papers to get the training recipes.
- https://github.com/haotian-liu/LLaVA/blob/main/scripts/finet...
Things like [petals](https://github.com/bigscience-workshop/petals) exist: distributed computing across willing participants. Right now corporate cash is being poured into the space, so why not take advantage of it while you can; the moment it dries up, projects like petals will get more of the love they deserve.
I envision a future where crypto-style booms happen over tokens useful for purchasing priority computational time, which is earned by providing said computational time. This way researchers can daisy-chain their independent smaller rigs together into something with gargantuan capabilities.
https://github.com/TheR1D/shell_gpt?tab=readme-ov-file#shell...
Project mention: Ask HN: What are the drawbacks of caching LLM responses? | news.ycombinator.com | 2024-03-15
Just found this: https://github.com/zilliztech/GPTCache which seems to address this idea/issue.
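The caching idea behind GPTCache can be sketched in a few lines: key the cache on a hash of the prompt and return the stored completion on a hit. This toy uses exact-match keys only, whereas GPTCache itself goes further with embedding-based similarity search; `fake_llm` is a stand-in for a real model call:

```python
import hashlib

cache = {}
calls = 0  # counts how often the "model" is actually invoked

def fake_llm(prompt):
    # Stand-in for an expensive LLM API call.
    global calls
    calls += 1
    return f"answer to: {prompt}"

def cached_llm(prompt):
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in cache:          # miss: pay for a real call
        cache[key] = fake_llm(prompt)
    return cache[key]             # hit: free

first = cached_llm("What is RLHF?")
second = cached_llm("What is RLHF?")  # served from cache, no second call
```

Exact matching also illustrates the drawback raised in the thread: two semantically identical but differently worded prompts miss the cache, and cached answers can go stale.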
Project mention: Baichuan 7B reaches top of LLM leaderboard for its size (new foundation model, 4K tokens) | /r/LocalLLaMA | 2023-06-17
GitHub: baichuan-inc/baichuan-7B: A large-scale 7B pretraining language model developed by BaiChuan-Inc. (github.com)
Project mention: Language Models Are Super Mario: Absorbing Abilities from Homologous Models | news.ycombinator.com | 2024-04-06For others like me who’d not heard of merging before, this seems to be one tool[0] (there may be others)
[0] https://github.com/arcee-ai/mergekit
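For readers who, like the commenter, hadn't heard of merging before: the simplest recipe is a weighted average of the parameters of homologous checkpoints. A toy sketch with plain floats standing in for tensors; mergekit implements this "linear" method along with more sophisticated ones such as SLERP and TIES:

```python
def merge_linear(state_dicts, weights):
    # Weighted average of parameters from homologous models
    # (same architecture, same parameter names).
    assert abs(sum(weights) - 1.0) < 1e-9
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# Plain dicts stand in for real model state dicts here.
model_a = {"layer.weight": 1.0, "layer.bias": 0.0}
model_b = {"layer.weight": 3.0, "layer.bias": 2.0}
merged = merge_linear([model_a, model_b], [0.5, 0.5])
```

Averaging only makes sense for homologous models, which is exactly the setting the linked paper studies.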
Project mention: I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts! | /r/machinelearningnews | 2023-09-07
Explore the project on GitHub here.
Project mention: Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | /r/aipromptprogramming | 2023-06-19
If you are more interested in AI assistants, check out AGiXT. It has some really cool features, but it is under heavy development: not everything works yet, and updates sometimes break functions that were already working. Still, it is far better than babyAGI and other proofs of concept.
Project mention: Maxtext: A simple, performant and scalable Jax LLM | news.ycombinator.com | 2024-04-23
Project mention: Fish Speech TTS: clone OpenAI TTS in 30 minutes | news.ycombinator.com | 2024-05-22
While we are still figuring out ways to improve the agent's emotional response to OpenAI GPT-4 level, we have already made significant progress in matching OpenAI's TTS performance. To begin this experiment, we collected 10 hours of OpenAI TTS data to perform supervised fine-tuning (SFT) on both the LLM and VITS models, which took approximately 30 minutes. After that, we used 15 seconds of audio as a prompt during inference.
Demos Available: https://firefly-ai.notion.site/OpenAI-Examples-34975ae263a9496c84e89fb7b1ea25a4?pvs=4
As you can see, the model's emotion, rhythm, accent, and timbre match the OpenAI speakers, though there is some degradation in audio quality, which we are working on. To avoid any legal issues, we are unable to release the fine-tuned model, but I believe everyone can tune Fish Speech to this level within hours and for around $20.
Our experiment shows that with only 25 seconds of prompts (few-shot learning), without any fine-tuning, the model can mimic most behaviors except for how it reads numbers. To the best of our knowledge, you can clone how someone speaks in English, Chinese, and Japanese with 30 minutes of data using this framework.
Repo: https://github.com/fishaudio/fish-speech
Project mention: Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting | news.ycombinator.com | 2024-02-26
Python llama related posts
-
Why YC Went to DC
-
Ask HN: I have many PDFs – what is the best local way to leverage AI for search?
-
Fish Speech TTS: clone OpenAI TTS in 30 minutes
-
Llama3.np: pure NumPy implementation of Llama3
-
Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
-
Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
-
Mixture-of-Depths: Dynamically allocating compute in transformers
-
Index
What are some of the best open-source llama projects in Python? This list will help you:
# | Project | Stars |
---|---|---|
1 | LLaMA-Factory | 24,388 |
2 | Chinese-LLaMA-Alpaca | 17,745 |
3 | LLaVA | 17,404 |
4 | petals | 8,819 |
5 | shell_gpt | 8,644 |
6 | GPTCache | 6,595 |
7 | Baichuan-7B | 5,649 |
8 | Huatuo-Llama-Med-Chinese | 4,369 |
9 | mergekit | 3,832 |
10 | h2o-llmstudio | 3,722 |
11 | InternGPT | 3,154 |
12 | xTuring | 2,545 |
13 | Video-LLaMA | 2,540 |
14 | AGiXT | 2,514 |
15 | EasyLM | 2,274 |
16 | fish-speech | 2,240 |
17 | api-for-open-llm | 2,111 |
18 | mPLUG-Owl | 1,993 |
19 | lightllm | 1,960 |
20 | Multimodal-GPT | 1,425 |
21 | safe-rlhf | 1,199 |
22 | LLMCompiler | 1,158 |
23 | lag-llama | 1,051 |