| | privateGPT | ollama-webui |
|---|---|---|
| Mentions | 1 | 14 |
| Stars | 50,198 | 5,789 |
| Growth | - | - |
| Activity | - | 9.8 |
| Latest commit | about 1 month ago | 2 months ago |
| Language | Python | Svelte |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
privateGPT
- PrivateGPT exploring the Documentation
# install developer tools
xcode-select --install
# create python sandbox
mkdir privateGPT
cd privateGPT/
python3 -m venv .
# activate local context
source bin/activate
# privateGPT uses poetry for python module management
privateGPT> pip install poetry
# sync privateGPT project
privateGPT> git clone https://github.com/imartinez/privateGPT
# enable MPS for model loading and processing
privateGPT> CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
privateGPT> cd privateGPT
# install and configure python dependencies
privateGPT> poetry run python3 scripts/setup
# launch the web interface to confirm it works with the default model
privateGPT> python3 -m private_gpt
# navigate safari browser to http://localhost:8001/
# stop the web interface before bulk-importing documents, as the vector database is not in multi-user mode
privateGPT> [control] + "C"
# import some PDFs
privateGPT> curl "https://docs.intersystems.com/irislatest/csp/docbook/pdfs.zip" -o /tmp/pdfs.zip
privateGPT> unzip /tmp/pdfs.zip -d /tmp
# took a few hours to process
privateGPT> make ingest /tmp/pdfs/pdfs/
# relaunch the web interface to query the documentation
privateGPT> python3 -m private_gpt
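Once the documents are ingested, queries do not have to go through the browser. A minimal sketch of an API call, assuming the server is still on http://localhost:8001 and exposes privateGPT's OpenAI-style chat endpoint with the use_context flag (endpoint and field names are taken from the project docs as I understand them, not from the walkthrough above):

# sketch only: ask a question against the ingested PDFs via the HTTP API
curl -s http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"How do I create a namespace in InterSystems IRIS?"}],"use_context":true}'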
ollama-webui
- Run copilot locally
- Show HN: I made an app to use local AI as daily driver
- Exploring Podman: A More Secure Docker Alternative
I'm a Podman beginner trying to install ollama-webui (1) using Podman on an M2 MacBook Air.
I started up Podman Desktop and ran the terminal command "docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main" based on the GitHub instructions, but it gave an error message, something about "host".
Do you know what the problem is and how I can overcome it?
If I run the above command using Docker Desktop, it runs and installs Ollama-WebUI just fine.
(1) https://github.com/ollama-webui/ollama-webui ("Installing with Docker")
Thank you.
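(Not an answer from the thread, just a hedged guess at a workaround: the "host" complaint is most likely the --add-host=...:host-gateway flag, which older Podman releases do not accept. Assuming a recent Podman and that ollama-webui still honours the OLLAMA_API_BASE_URL variable described in its README, one option is to drop that flag and point the UI at Podman's built-in host alias instead.)

# sketch of a possible Podman workaround; flag and variable names are assumptions, not from the thread
podman run -d -p 3000:8080 \
  -e OLLAMA_API_BASE_URL=http://host.containers.internal:11434/api \
  -v ollama-webui:/app/backend/data \
  --name ollama-webui --restart always \
  ghcr.io/ollama-webui/ollama-webui:main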
- Mixtral: Mixture of Experts
- Run Mistral 7B on M1 Mac
- OpenAI's New Strategy
I set up Ollama in a Docker container (really easy) and I use the Ollama web UI, which is very much like ChatGPT; I have that in a container as well.
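A rough sketch of that kind of two-container setup (not the commenter's exact commands; image names, ports, and volumes are the defaults from the Ollama and ollama-webui READMEs as I understand them):

# run Ollama itself in one container
docker run -d -p 11434:11434 -v ollama:/root/.ollama --name ollama ollama/ollama
# pull a model to chat with, for example Llama 2
docker exec -it ollama ollama pull llama2
# run the web UI in a second container, pointed at the host's Ollama port
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v ollama-webui:/app/backend/data --name ollama-webui --restart always \
  ghcr.io/ollama-webui/ollama-webui:main
# then browse to http://localhost:3000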
- chatgpt alternative
- How can I configure the same settings I get in the llama.cpp webUI with Ollama on macOS?
- LibreChat
The primary use case here seems to be that it might be possible to use this tool to spend <$20/mo for the same feature set as ChatGPT+. It does not currently make any effort to support locally-hosted open source models, which is what I would have assumed from its name.
If you're interested in a fully Libre LLM stack, I've had fun lately with ollama [0] and ollama-webui [1]. It was pretty trivial to take ollama-webui's docker-compose file and set up a locally-running chat server with Mistral 7B. Trying out different models and prompts was likewise very easy to get started with.
Mistral isn't anything like as good as GPT-4, but it's Apache licensed and fully local, which meets my definition of Libre. I'll continue to use both while the FOSS stacks catch up, but it's fun to keep up with the progress on the open source stuff as tooling develops.
[0] https://github.com/jmorganca/ollama
[1] https://github.com/ollama-webui/ollama-webui
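For reference, a minimal sketch of the docker-compose route the comment above describes, assuming the repository's bundled compose file starts both an ollama service and the web UI on port 3000 (service name and port are assumptions, not from the comment):

git clone https://github.com/ollama-webui/ollama-webui
cd ollama-webui
docker compose up -d
# pull Mistral 7B through the compose-managed ollama service
docker compose exec ollama ollama pull mistral
# open http://localhost:3000 and pick "mistral" from the model list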
- LM Studio – Discover, download, and run local LLMs
What are some alternatives?
localGPT - Chat with your documents on your local device using GPT models. No data leaves your device and 100% private.
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
anything-llm - The all-in-one Desktop & Docker AI application with full RAG and AI Agent capabilities.
LibreChat - Enhanced ChatGPT Clone: Features OpenAI, Assistants API, Azure, Groq, GPT-4 Vision, Mistral, Bing, Anthropic, OpenRouter, Vertex AI, Gemini, AI model switching, message search, langchain, DALL-E-3, ChatGPT Plugins, OpenAI Functions, Secure Multi-User System, Presets, completely open-source for self-hosting. More features in development
gpt4all - gpt4all: run open-source LLMs anywhere
llamafile - Distribute and run LLMs with a single file.
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
open-webui - User-friendly WebUI for LLMs (Formerly Ollama WebUI)
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
stable-diffusion-webui - Stable Diffusion web UI
langchain - 🦜🔗 Build context-aware reasoning applications
MindMac - Issue tracker for MindMac, an elegant client for macOS