SaaSHub helps you find the best software and product alternatives Learn more →
Top 23 llm Open-Source Projects
- MetaGPT: 🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
- dify: Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
- chatgpt-on-wechat: A chatbot built on large language models. It supports WeCom (Enterprise WeChat), WeChat Official Accounts, Feishu, DingTalk, and other integrations, with a choice of GPT-3.5/GPT-4.0/Claude/ERNIE Bot (Wenxin Yiyan)/iFlytek Spark/Tongyi Qianwen/Gemini/GLM-4/Kimi/LinkAI. It can process text, voice, and images, access the operating system and the internet, and supports building customized enterprise AI customer service on your own knowledge base.
- LocalAI: :robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first; a drop-in replacement for OpenAI that runs on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.
- FastGPT: FastGPT is a knowledge-based platform built on LLMs that offers a comprehensive suite of out-of-the-box capabilities such as data processing, RAG retrieval, and visual AI workflow orchestration, letting you easily develop and deploy complex question-answering systems without extensive setup or configuration.
Project mention: Ask HN: Affordable hardware for running local large language models? | news.ycombinator.com | 2024-05-05
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There's a kernel hack to fix it, but that comes with the usual system-integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
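Those limits translate into concrete memory budgets for a given machine. A quick back-of-the-envelope helper (the 1/2 and 3/4 ratios come from the comment above; actual Metal behavior may vary by macOS version, and the helper function itself is just an illustration):

```python
# Back-of-the-envelope helper for the Metal limits described above:
# roughly 1/2 of system RAM usable by one process, 3/4 of RAM for
# the GPU overall.

def metal_memory_budget(ram_gb: float) -> dict:
    return {
        "per_process_gb": ram_gb / 2,
        "gpu_total_gb": ram_gb * 3 / 4,
    }

# e.g. a 64 GB Mac: ~32 GB usable by one process, ~48 GB GPU total
budget = metal_memory_budget(64)
```

So on a 64 GB machine, a model whose weights plus KV cache exceed roughly 32 GB would run into the per-process ceiling before exhausting the GPU's overall allocation.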
https://github.com/geekan/MetaGPT :
> MetaGPT takes a one line requirement as input and outputs user stories / competitive analysis / requirements / data structures / APIs / documents, etc.
https://news.ycombinator.com/item?id=29141796 ; "Co-Founder Equity Calculator"
"Ask HN: What are your go to SaaS products for startups/MVPs?" (2020) https://news.ycombinator.com/item?id=23535828 ; FounderKit, StackShare
> USA Small Business Administration: "10 steps to start your business." https://www.sba.gov/starting-business/how-start-business/10-...
>> "Startup Incorporation Checklist: How to bootstrap a Delaware C-corp (or S-corp) with employee(s) in California" https://github.com/leonar15/startup-checklist
Project mention: LlamaIndex: A data framework for your LLM applications | news.ycombinator.com | 2024-04-07
Project mention: Ask HN: People who switched from GPT to their own models. How was it? | news.ycombinator.com | 2024-02-26
This is a very nice resource: https://github.com/mlabonne/llm-course
Project mention: Ask HN: LLM workflows to avoid copying and pasting from the web interfaces? | news.ycombinator.com | 2024-05-03
This visual IDE for LLM pipelines was posted recently: https://github.com/langgenius/dify
See if it helps.
Project mention: Computer Vision Meetup: Develop a Legal Search Application from Scratch using Milvus and DSPy! | dev.to | 2024-05-02
Legal practitioners often need to find specific cases and clauses across thousands of dense documents. While traditional keyword-based search techniques are useful, they fail to fully capture the semantic content of queries and case files. Vector search engines and large language models provide an intriguing alternative. In this talk, I will show you how to build a legal search application using the DSPy framework and the Milvus vector search engine.
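The core retrieval idea behind such a system can be sketched in a few lines of NumPy. Note this is not the Milvus or DSPy API, and the embed() function is a toy stand-in (a real system would use a trained sentence-embedding model and a proper vector index):

```python
import zlib

import numpy as np

# Sketch of the idea behind vector search: embed documents and the
# query as vectors, then rank documents by cosine similarity.
# embed() below is a deterministic toy, for illustration only.

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy 'embedding': seeded random unit vector per unique text."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

docs = [
    "force majeure clause excusing performance",
    "indemnification obligations of the supplier",
    "governing law and jurisdiction",
]
index = np.stack([embed(d) for d in docs])   # shape: (n_docs, dim)

query = embed("force majeure clause excusing performance")
scores = index @ query                       # cosine similarity (unit vectors)
best = docs[int(np.argmax(scores))]
```

Because the toy embedding only matches identical texts, this sketch demonstrates the ranking machinery rather than true semantic matching; the real win of the approach in the talk is that learned embeddings place paraphrases near each other in the same vector space.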
Project mention: The world's most-powerful AI model suddenly got 'lazier' and 'dumber.' A radical redesign of OpenAI's GPT-4 could be behind the decline in performance. | /r/ChatGPT | 2023-07-13
Project mention: What’s the Difference Between Fine-tuning, Retraining, and RAG? | dev.to | 2024-04-08
Check us out on GitHub.
Project mention: LocalAI: Self-hosted OpenAI alternative reaches 2.14.0 | news.ycombinator.com | 2024-05-03
Project mention: AI leaderboards are no longer useful. It's time to switch to Pareto curves | news.ycombinator.com | 2024-04-30
I guess the root cause of my claim is that OpenAI won't tell us whether or not GPT-3.5 is an MoE model, and I assumed it wasn't. Since GPT-3.5 is clearly nondeterministic at temp=0, I believed the nondeterminism was due to FPU stuff, and this effect was amplified with GPT-4's MoE. But if GPT-3.5 is also MoE then that's just wrong.
What makes this especially tricky is that small models are truly 100% deterministic at temp=0 because the relative likelihoods are too coarse for FPU issues to be a factor. I had thought 3.5 was big enough that some of its token probabilities were too fine-grained for the FPU. But that's probably wrong.
On the other hand, it's not just GPT, there are currently floating-point difficulties in vllm which significantly affect the determinism of any model run on it: https://github.com/vllm-project/vllm/issues/966 Note that a suggested fix is upcasting to float32. So it's possible that GPT-3.5 is using an especially low-precision float and introducing nondeterminism by saving money on compute costs.
Sadly I do not have the money[1] to actually run a test to falsify any of this. It seems like this would be a good little research project.
[1] Or the time, or the motivation :) But this stuff is expensive.
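The floating-point mechanism behind this kind of nondeterminism is cheap to demonstrate in isolation: addition is not associative, so evaluating the same terms in a different order (as batched or parallel kernels do) can round to a different result, and lower-precision types drift apart faster. A minimal sketch:

```python
import numpy as np

# Floating-point addition is not associative: regrouping the same
# three terms changes the rounded result.
left = (0.1 + 0.2) + 0.3    # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)   # 0.6

# The effect is amplified at lower precision: sum the same values
# forward and reversed, in float16 vs float32.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)

def seq_sum(values, dtype):
    """Strict left-to-right accumulation in the given dtype."""
    acc = dtype(0.0)
    for v in values.astype(dtype):
        acc = dtype(acc + v)
    return float(acc)

fwd16 = seq_sum(x, np.float16)
rev16 = seq_sum(x[::-1], np.float16)
fwd32 = seq_sum(x, np.float32)
rev32 = seq_sum(x[::-1], np.float32)
# The float16 forward/reverse sums typically disagree by far more
# than the float32 ones, which is the intuition behind the upcast-
# to-float32 fix suggested in the vllm issue above.
```

This is only the mechanism, of course; whether it fully explains GPT-3.5's temp=0 nondeterminism is exactly the open question in the comment above.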
Project mention: The Era of 1-Bit LLMs: Training Tips, Code and FAQ [pdf] | news.ycombinator.com | 2024-03-21
Luckily, there are some open-source projects like Open WebUI that provide a ChatGPT-like web experience you can run locally and point at any model. To start the Open WebUI Docker container locally, run the command below in your Terminal (make sure that ollama serve is still running).
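The command itself appears to have been lost in this page's formatting. For reference, the Open WebUI README documents an invocation along these lines (image tag, port mapping, and volume name are taken from those docs at the time of writing; verify against the current README before use):

```shell
# Start Open WebUI in Docker, connected to an Ollama instance
# running on the host; the UI is then served at http://localhost:3000.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```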
Project mention: #SemanticKernel – 📎Chat Service demo running Phi-2 LLM locally with #LMStudio | dev.to | 2024-02-08
There is an amazing sample on how to create your own LLM Service class to be used in Semantic Kernel. You can view the sample here: https://github.com/microsoft/semantic-kernel/blob/3451a4ebbc9db0d049f48804c12791c681a326cb/dotnet/samples/KernelSyntaxExamples/Example16_CustomLLM.cs
I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, which is the GPTQ-format, 4-bit quantised version of Yiming Cui's Chinese-LLaMA-Alpaca 13B, for GPU inference.
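GPTQ itself chooses quantized weights using approximate second-order information to minimize layer output error, but the basic storage idea of 4-bit weight quantization can be sketched with simple round-to-nearest over a per-row scale. This is an illustration of that idea, not the GPTQ algorithm:

```python
import numpy as np

# Round-to-nearest 4-bit weight quantization sketch (NOT GPTQ):
# each weight becomes a 4-bit integer code plus a per-row scale,
# cutting weight storage to roughly a quarter of float16.

def quantize_rtn_4bit(w: np.ndarray):
    """Per-row symmetric 4-bit quantization: codes in [-8, 7]."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16)).astype(np.float32)
q, scale = quantize_rtn_4bit(w)
w_hat = dequantize(q, scale)
# Per-weight reconstruction error is bounded by half a quantization
# step, i.e. scale/2 for that row.
max_err = np.abs(w - w_hat).max()
```

Plain round-to-nearest like this degrades quality noticeably at 4 bits on large models, which is why GPTQ's error-compensating approach (and formats like the one shared above) exist.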
Project mention: Finetune a GPT Model for Spam Detection on Your Laptop in Just 5 Minutes | news.ycombinator.com | 2024-05-03
Project mention: Ask HN: What are the capabilities of consumer grade hardware to work with LLMs? | news.ycombinator.com | 2023-08-03
I agree, I've definitely seen way more information about running image synthesis models like Stable Diffusion locally than I have LLMs. It's counterintuitive to me that Stable Diffusion takes less RAM than an LLM, especially considering it still needs the word vectors. Goes to show I know nothing.
I guess it comes down to the requirement of a very high end (or multiple) GPU that makes it impractical for most vs just running it in Colab or something.
Tho there are some efforts:
https://github.com/cocktailpeanut/dalai
llm-related posts
- TimesFM (Time Series Foundation Model) for time-series forecasting
- Show HN: Text-to-SQL Focus on Semantics and UI/UX
- WrenAI: Make Your Database RAG-Ready. Text-to-SQL Focus on Semantics and UI/UX
- DeepSeek-V2 integrated, RAGFlow v0.5.0 is released
- Mini-assistant: OpenAI Assistant compatible API at your service locally
- Finally LangChain for C++ World?
- FinRAG Datasets and Study
-
A note from our sponsor - SaaSHub
www.saashub.com | 8 May 2024
Index
What are some of the best open-source llm projects? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | llama.cpp | 57,463 |
| 2 | MetaGPT | 39,468 |
| 3 | llama_index | 31,389 |
| 4 | llm-course | 29,169 |
| 5 | dify | 27,030 |
| 6 | Milvus | 26,979 |
| 7 | Mr.-Ranedeer-AI-Tutor | 26,708 |
| 8 | chatgpt-on-wechat | 25,142 |
| 9 | Flowise | 24,426 |
| 10 | MindsDB | 21,354 |
| 11 | LLaMA-Factory | 20,971 |
| 12 | LocalAI | 19,862 |
| 13 | vllm | 18,931 |
| 14 | unilm | 18,407 |
| 15 | open-webui | 18,333 |
| 16 | semantic-kernel | 18,332 |
| 17 | Chinese-LLaMA-Alpaca | 17,348 |
| 18 | mlc-llm | 17,053 |
| 19 | ChatGLM2-6B | 15,514 |
| 20 | LLMs-from-scratch | 14,440 |
| 21 | peft | 13,962 |
| 22 | FastGPT | 13,166 |
| 23 | dalai | 13,060 |