| | langflow | private-gpt |
|---|---|---|
| Mentions | 28 | 131 |
| Stars | 17,467 | 51,882 |
| Growth | 12.6% | 2.3% |
| Activity | 10.0 | 9.2 |
| Latest commit | 1 day ago | about 23 hours ago |
| Language | JavaScript | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langflow
-
News: DataStax just bought our startup Langflow
Hey folks, I'm the Head of DevRel @ DataStax and just wanted to share with the HN community that, alongside this big acquisition news, the Langflow team has shipped 1.0-alpha of Langflow.
It's a simple `pip install` and the team would love any and all feedback!
https://github.com/logspace-ai/langflow/
-
Node-based AutoGen with local LLMs inside ComfyUI
You can also check out langflow, a node-based UI for LangChain: https://github.com/logspace-ai/langflow
- Show HN: Rivet – open-source AI Agent dev env with real-world applications
-
Using Retrieval Augmented Generation to Clear Our GitHub Backlog
There are a few tools out there, like AgentGPT (https://github.com/reworkd/AgentGPT, although it's a more conversational interface), Langflow (https://github.com/logspace-ai/langflow), and others. I think most developers definitely prefer a code-first interface, like a library, but I haven't found one that's great yet. We've used these tools in the past but didn't have the best experience, so I'd love to hear if anyone has worked with a library they found really flexible.
- Show HN: ChainForge, a visual tool for prompt engineering and LLM evaluation
-
Anyone know how to get LangFlow working with oobabooga?
I found this thread talking about it here: https://github.com/logspace-ai/langflow/issues/263
-
Found a fun little open source project called Flowise. It's a drag & drop UI to build your customized LLM flow using LangchainJS
also check https://github.com/logspace-ai/langflow
-
What exactly is AutoGPT?
AutoGPT is basically a demo of what you can do with LangChain. If you want to play with LangChain in a drag-and-drop blueprint environment, I suggest Langflow.
-
Launch HN: Fastgen (YC W23) – Visual Low-Code Back End Builder
Hi, I like this! I'm curious what drove the decision to use the vertical block builder style you chose. I'm partial to node-based editors and have been building things with React Flow recently. LangFlow [1] is a good example, but there are lots of UIs that use a similar interface (e.g. Blender [2] and Unity [3]).
[1] https://github.com/logspace-ai/langflow
[2] https://docs.blender.org/manual/en/3.5/interface/controls/no...
[3] https://unity.com/features/unity-visual-scripting
-
Having fun testing CanvasGPT - a new project launching soon
Here's an open-source version that's very similar: LangFlow
private-gpt
-
Ask HN: Has Anyone Trained a personal LLM using their personal notes?
PrivateGPT is a nice tool for this. It's not exactly what you're asking for, but it gets part of the way there.
https://github.com/zylon-ai/private-gpt
-
PrivateGPT: Exploring the Documentation
Further details available at: https://docs.privategpt.dev/api-reference/api-reference/ingestion
- Show HN: I made an app to use local AI as daily driver
-
privateGPT VS quivr - a user suggested alternative
2 projects | 12 Jan 2024
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
Run https://github.com/imartinez/privateGPT
Then:
`make ingest /path/to/folder/with/files`
Then chat with the LLM.
Done.
Docs: https://docs.privategpt.dev/overview/welcome/quickstart
-
Mozilla "MemoryCache" Local AI
PrivateGPT repository in case anyone's interested: https://github.com/imartinez/privateGPT . It doesn't seem to be linked from their official website.
-
What Is Retrieval-Augmented Generation a.k.a. RAG
I’m preparing a small internal tool for my work to search documents and provide answers (with references), I’m thinking of using GPT4All [0], Danswer [1] and/or privateGPT [2].
The RAG technique is very close to what I have in mind, but I don't want the LLM to "hallucinate" and generate answers on its own rather than synthesizing them from the source documents. As stated by many others, we're living in interesting times.
[0] https://gpt4all.io/index.html
[1] https://www.danswer.ai/
[2] https://github.com/imartinez/privateGPT
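The RAG pattern the commenter describes — retrieve the relevant passages, then have the model answer only from them — can be sketched in a few lines. This is a toy illustration, not any of the listed tools' actual code: keyword-overlap retrieval stands in for the embedding search that privateGPT and Danswer use, all names are made up for the example, and the final prompt would be fed to whatever local LLM you run.

```python
# Toy RAG skeleton: naive keyword-overlap retrieval + a grounded prompt.
# Real systems replace `retrieve` with a vector-embedding search.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Instruct the model to answer ONLY from the retrieved passages."""
    passages = "\n".join(f"[{i}] {c}" for i, c in enumerate(context))
    return (
        "Answer using only the passages below, citing them by number. "
        "If the answer is not in the passages, say so.\n\n"
        f"{passages}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "The VPN gateway is vpn.example.internal on port 443.",
    "Lunch is served in the cafeteria from noon to 1pm.",
    "Password resets are handled by the IT helpdesk.",
]
query = "What port does the VPN gateway use?"
context = retrieve(query, docs)
prompt = build_prompt(query, context)
```

The "cite by number, or say you don't know" instruction in the prompt is the usual mitigation for the hallucination concern above: it pushes the model to synthesize from the retrieved passages instead of answering from its own weights.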
- LM Studio – Discover, download, and run local LLMs
-
Ask HN: Local LLM Recommendation?
https://www.reddit.com/r/LocalLLaMA/comments/14niv66/using_a...
https://github.com/imartinez/privateGPT
-
Run ChatGPT-like LLMs on your laptop in 3 lines of code
I've been playing around with https://github.com/imartinez/privateGPT and https://github.com/simonw/llm and wanted to create a simple Python package that made it easier to run ChatGPT-like LLMs on your own machine, use them with non-public data, and integrate them into practical applications.
This resulted in a Python package I call OnPrem.LLM.
In the documentation, there are examples for how to use it for information extraction, text generation, retrieval-augmented generation (i.e., chatting with documents on your computer), and text-to-code generation: https://amaiya.github.io/onprem/
Enjoy!
What are some alternatives?
Flowise - Drag & drop UI to build your customized LLM flow
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
langchain-visualizer - Visualization and debugging tool for LangChain workflows
gpt4all - gpt4all: run open-source LLMs anywhere
Local-LLM-Comparison-Colab-UI - Compare the performance of different LLM that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
GPTQ-for-LLaMa - 4 bits quantization of LLaMa using GPTQ
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
serge - A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
llama.cpp - LLM inference in C/C++