Top 23 Python llama2 Projects
-
LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Project mention: Show HN: LLM Aided OCR (Correcting Tesseract OCR Errors with LLMs) | news.ycombinator.com | 2024-08-09
This package seems to use llama_cpp for local inference [1], so you can probably use anything supported by that [2]. However, I think it's just passing OCR output for correction - the language model doesn't actually see the original image.
That said, there are some large language models you can run locally which accept image input. Phi-3-Vision [3], LLaVA [4], MiniCPM-V [5], etc.
[1] - https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main...
[2] - https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#de...
[3] - https://huggingface.co/microsoft/Phi-3-vision-128k-instruct
[4] - https://github.com/haotian-liu/LLaVA
[5] - https://github.com/OpenBMB/MiniCPM-V
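
To make that text-only correction flow concrete, here is a minimal sketch using the llama-cpp-python bindings. The model path, prompt wording, and generation settings are placeholders, not the package's actual pipeline, which chunks the document and uses its own prompts:

```python
# Sketch: correcting raw OCR text with a local model via llama-cpp-python.
# Model path and prompt are placeholders; llm_aided_ocr's real pipeline differs.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=4096)

ocr_text = "Th1s 1s raw Tesseract output with rn1stakes."
prompt = (
    "Correct the OCR errors in the following text. "
    "Return only the corrected text.\n\n" + ocr_text
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=512,
    temperature=0.0,
)
print(result["choices"][0]["message"]["content"])
```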
-
-
h2ogpt
Private chat with local GPT with documents, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
Project mention: Major Technologies Worth Learning in 2025 for Data Professionals | dev.to | 2024-12-07
Artificial Intelligence (AI) is becoming a ubiquitous, and dare I say, indispensable part of data workflows. Tools like ChatGPT have made it easier to review data and write reports. But diving even deeper, tools like DataRobot, H2O.ai, and Google’s AutoML are also simplifying machine learning pipelines and automating repetitive tasks, enabling professionals to focus on high-value activities like model optimization and data storytelling. Mastering these tools will not only boost productivity but also ensure you remain competitive in an AI-first world.
-
Project mention: Show HN: Toolkit for LLM Fine-Tuning, Ablating and Testing | news.ycombinator.com | 2024-04-07
This is a great project, a little bit similar to https://github.com/ludwig-ai/ludwig, but it includes testing capabilities and ablation.
Questions regarding the LLM testing aspect: How extensive is the test coverage for LLM use cases, and what is the current state of this project area? Do you offer any guarantees, or is it considered an open-ended problem?
Would love to see more progress in this area!
-
OpenLLM
Run any open-source LLM, such as Llama or Mistral, as an OpenAI-compatible API endpoint in the cloud.
OpenLLM is a platform that lets developers leverage the potential of open-source large language models (LLMs). It is like a Swiss Army knife for LLMs: a set of tools that helps developers overcome common deployment hurdles.
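
Because the endpoint speaks the OpenAI API, the standard openai Python client can talk to it. A minimal sketch, assuming a server is already running locally; the base URL, port, and model id are placeholders you would replace with whatever `openllm serve` reports on startup:

```python
# Sketch: querying a locally served OpenAI-compatible endpoint.
# The base_url, port, and model id below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="na")

response = client.chat.completions.create(
    model="llama2",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize what OpenLLM does in one sentence."}],
)
print(response.choices[0].message.content)
```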
-
opencompass
OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama 3, Mistral, InternLM2, GPT-4, Llama 2, Qwen, GLM, Claude, etc.) across 100+ datasets.
Project mention: Show HN: Times faster LLM evaluation with Bayesian optimization | news.ycombinator.com | 2024-02-13
Fair question.
Evaluation refers to the phase after training where you check whether the training went well.
Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small domain-specific subset)!
So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation. However, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them takes advantage of the fact that many evaluation queries might be similar; they all try to evaluate every given query. That's where this project might come in handy.
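
To make the "many queries are similar" observation concrete, here is a toy sketch (not OpenCompass code, and not the Bayesian-optimization approach itself) that clusters prompts and only evaluates one representative per cluster; the prompts and cluster count are made up for illustration:

```python
# Toy illustration: cluster similar evaluation prompts and run the model
# on one representative per cluster instead of every query.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

prompts = [
    "What is 2 + 2?",
    "What is 2 plus 2?",
    "Translate 'hello' to French.",
    "Translate 'hello' into French.",
    "Name the capital of Japan.",
]

# Embed prompts cheaply; a real setup might use sentence embeddings instead.
vectors = TfidfVectorizer().fit_transform(prompts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Keep the first prompt seen in each cluster as its representative.
representatives = {}
for prompt, label in zip(prompts, labels):
    representatives.setdefault(label, prompt)

for label, prompt in sorted(representatives.items()):
    print(f"cluster {label}: evaluate -> {prompt!r}")
```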
-
xtuner
An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Project mention: PaliGemma: Open-Source Multimodal Model by Google | news.ycombinator.com | 2024-05-15
-
h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/
-
api-for-open-llm
OpenAI-style API for open large language models, so you can use LLMs just like ChatGPT. Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. (A unified backend interface for open-source large models.)
-
llm_aided_ocr
Enhance Tesseract OCR output for scanned PDFs by applying Large Language Model (LLM) corrections.
Project mention: Show HN: LLM Aided OCR (Correcting Tesseract OCR Errors with LLMs) | news.ycombinator.com | 2024-08-09
https://github.com/Dicklesworthstone/llm_aided_ocr/blob/main...
In our approach, we're just zero-shot asking for markdown from the image, versus this approach of passing in the Tesseract result plus image context and asking for correction. I'm curious if there is a meaningful accuracy difference.
My first thought is that the Tesseract result may decrease accuracy, especially with tables or multi-column PDFs. The Tesseract model has a tendency to take everything from a table and throw it into one text blob.
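
For reference, the Tesseract-only baseline being compared against is roughly the following sketch; the file path is a placeholder, and this is the step whose flat text-blob output the comment is pointing at:

```python
# Sketch of the plain Tesseract baseline: image_to_string returns one
# undifferentiated text blob, which is where tables and multi-column
# layouts tend to lose their structure.
from PIL import Image
import pytesseract

page = Image.open("scanned_page.png")  # placeholder path
raw_text = pytesseract.image_to_string(page)
print(raw_text)
```

pytesseract also exposes `image_to_data`, which returns word-level bounding boxes if you need layout information.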
-
DemoGPT
🤖 Everything you need to create an LLM Agent—tools, prompts, frameworks, and models—all in one place.
-
swiss_army_llama
A FastAPI service for semantic text search using precomputed embeddings and advanced similarity measures, with built-in support for various file types through textract.
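
The core idea behind that kind of service, stripped of the FastAPI and storage layers, is similarity search over precomputed embedding vectors. A toy sketch (not swiss_army_llama's API), with made-up three-dimensional embeddings standing in for real model outputs:

```python
# Toy sketch: rank documents by cosine similarity between a query embedding
# and precomputed document embeddings (embeddings here are made up).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these came from an embedding model and were stored ahead of time.
doc_embeddings = {
    "invoice.txt": np.array([0.9, 0.1, 0.0]),
    "meeting_notes.txt": np.array([0.2, 0.8, 0.1]),
}
query_embedding = np.array([0.85, 0.15, 0.05])

ranked = sorted(
    doc_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
for name, _ in ranked:
    print(name)
```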
-
Project mention: Show HN: Toolkit for LLM Fine-Tuning, Ablating and Testing | news.ycombinator.com | 2024-04-07
-
code-llama-for-vscode
Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
-
Project mention: Limitless: Personalized AI powered by what you've seen, said, and heard | news.ycombinator.com | 2024-04-15
-
AnglE
Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard (by SeanLee97)
-
Project mention: Zetascale, Build high-performance AI models with modular building blocks | news.ycombinator.com | 2024-02-09
-
Python llama2 related posts
- Big Money vs. Small Money - FAV0 Weekly #020
- Meta's Open Source NotebookLM
- Revolutionizing CLI Tools with `Ophrase` and `Oproof`
- Show HN: LLM Aided OCR (Correcting Tesseract OCR Errors with LLMs)
- Game of Firsts
- How to Run Llama 3 405B on Home Devices? Build AI Cluster
- Llama3.np: pure NumPy implementation of Llama3
Index
What are some of the best open-source llama2 projects in Python? This list will help you:
# | Project | Stars |
---|---|---|
1 | LLaVA | 21,180 |
2 | h2ogpt | 11,601 |
3 | ludwig | 11,279 |
4 | OpenLLM | 10,403 |
5 | lmdeploy | 5,279 |
6 | opencompass | 4,529 |
7 | xtuner | 4,172 |
8 | Baichuan2 | 4,119 |
9 | h2o-llmstudio | 4,125 |
10 | api-for-open-llm | 2,392 |
11 | llm_aided_ocr | 2,317 |
12 | DemoGPT | 1,759 |
13 | LLMCompiler | 1,580 |
14 | swiss_army_llama | 962 |
15 | LLM-Finetuning-Toolkit | 805 |
16 | code-llama-for-vscode | 557 |
17 | Owl | 544 |
18 | AnglE | 505 |
19 | zeta | 457 |
20 | Finetune_LLMs | 454 |
21 | slowllama | 448 |
22 | IncognitoPilot | 434 |
23 | llama2.py | 414 |