| | langchaingo | LocalAI |
|---|---|---|
| Mentions | 9 | 83 |
| Stars | 3,195 | 20,076 |
| Growth | - | 9.3% |
| Activity | 9.8 | 9.9 |
| Latest commit | 2 days ago | 4 days ago |
| Language | Go | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langchaingo
-
How to use Retrieval Augmented Generation (RAG) for Go applications
Generative AI development has been democratised, thanks to powerful Machine Learning models (specifically Large Language Models such as Claude, Meta's Llama 2, etc.) being exposed by managed platforms/services as API calls. This frees developers from infrastructure concerns and lets them focus on the core business problems. It also means that developers are free to use the programming language best suited for their solution. Python has typically been the go-to language for AI/ML solutions, but there is more flexibility in this area. In this post you will see how to leverage the Go programming language to use Vector Databases and techniques such as Retrieval Augmented Generation (RAG) with langchaingo. If you are a Go developer who wants to learn how to build generative AI applications, you are in the right place!
-
Build a Serverless GenAI solution with Lambda, DynamoDB, LangChain and Amazon Bedrock
The use case here is a similar one - a chat application. I will switch back to implementing things in Go using langchaingo (I used Python for the previous one) and continue to use Amazon Bedrock. But there are a few unique things you can explore in this blog post:
- LangChain for Go, the easiest way to write LLM-based programs in Go
- Langchaingo – LangChain in Idiomatic Go
- Agency: Pure Go LangChain Alternative
-
Building LangChain applications with Amazon Bedrock and Go - An introduction
langchaingo is the LangChain implementation for the Go programming language. This blog post covers how to extend langchaingo to use foundation models from Amazon Bedrock.
-
Zep: A long-term memory store for LLM apps, written in Go
Langchain Go is being actively developed https://github.com/tmc/langchaingo
LocalAI
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
-
What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
-
OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
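Because LocalAI exposes an OpenAI-compatible API, "swapping" typically means nothing more than changing the base URL a client points at. A sketch in Go that builds the same `/v1/chat/completions` request an OpenAI client would send, aimed at a local server (the port and model name are assumptions for illustration; the request is built but not sent):

```go
// Sketch: an OpenAI-style chat completion request aimed at a LocalAI
// server. Only the base URL differs from talking to api.openai.com.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

// newChatRequest builds a POST to baseURL/v1/chat/completions.
func newChatRequest(baseURL, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []chatMessage{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	// Point at a local LocalAI instance instead of api.openai.com.
	req, err := newChatRequest("http://localhost:8080", "mistral-7b", "Hello!")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL)
}
```

An existing OpenAI SDK client can usually achieve the same by overriding its base-URL setting, with no other code changes.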
-
"Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
-
Local LLM's to run on old iMac / Hardware
Your hardware should be fine for inferencing, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlas acceleration for your CPU. If you're running other things, you could limit the inferencing process to 2 or 3 threads. That should get it working; I've been able to inference even 13b models on cheap Rockchip SOCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
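The thread-limiting advice above can be sketched as a LocalAI launch command. The image tag, `THREADS` variable, and flags follow LocalAI's documentation but should be treated as assumptions that may differ between versions:

```shell
# Sketch: cap LocalAI's inference threads on modest/older hardware.
# THREADS, the image tag, and --models-path are assumptions from the docs.
docker run -p 8080:8080 \
  -e THREADS=2 \
  -v $PWD/models:/models \
  quay.io/go-skynet/local-ai:latest \
  --models-path /models
```

With only 2-3 threads claimed, other workloads on the same machine stay responsive while inference runs in the background.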
What are some alternatives?
yao - :rocket: A performance app engine to create web services and applications in minutes. Suitable for AI, IoT, Industrial Internet, Connected Vehicles, DevOps, Energy, Finance and many other use cases.
gpt4all - gpt4all: run open-source LLMs anywhere
langchain - 🦜🔗 Build context-aware reasoning applications
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
llama-cpp-python - Python bindings for llama.cpp
zep - Zep: Long-Term Memory for AI Assistants.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
TaskEaseGPT - (WIP) A user-friendly, AI-powered task manager emphasizing efficient work over planning. Streamlines workflow with intelligent task generation & execution. Boost your productivity today!
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
langchaingo-amazon-bedrock-llm - Amazon Bedrock extension for langchaingo
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.