ollama-webui vs MindMac

| | ollama-webui | MindMac |
| --- | --- | --- |
| Mentions | 14 | 5 |
| Stars | 5,789 | 13 |
| Growth | - | - |
| Activity | 9.8 | 0.0 |
| Latest Commit | 3 months ago | 19 days ago |
| Language | Svelte | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ollama-webui
- Run copilot locally
- Show HN: I made an app to use local AI as daily driver
- Exploring Podman: A More Secure Docker Alternative
I'm a podman beginner, trying to install ollama-webui(1) using Podman on M2 MBA.
I started up Podman Desktop and ran the terminal command `docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main` based on the GitHub instructions, but it gave an error message, something about "host".
Do you know what is the problem and how do I overcome this?
If I run the above command using Docker Desktop, it runs and installs Ollama-WebUI just fine.
(1) https://github.com/ollama-webui/ollama-webui ("Installing with Docker")
Thank you.
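A likely culprit is Docker's special `host-gateway` value for `--add-host`, which older Podman releases do not recognize. Podman already injects its own host alias, `host.containers.internal`, so a sketch of a workaround (the `OLLAMA_API_BASE_URL` variable name is taken from the ollama-webui README at the time; treat it as an assumption) would be:

```shell
# Sketch, assuming the "host" error comes from Podman rejecting
# Docker's special "host-gateway" value. Podman provides the
# host.containers.internal alias automatically, so drop --add-host:
podman run -d -p 3000:8080 \
  -v ollama-webui:/app/backend/data \
  -e OLLAMA_API_BASE_URL=http://host.containers.internal:11434/api \
  --name ollama-webui --restart always \
  ghcr.io/ollama-webui/ollama-webui:main
```

Newer Podman versions (4.1+) do accept `host-gateway`, in which case the original Docker command should work unchanged.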
- Mixtral: Mixture of Experts
- Run Mistral 7B on M1 Mac
- OpenAI's New Strategy
I set up Ollama in a docker container (really easy) and I use the Ollama web-ui here, that's very much like ChatGPT, and I have that in a container as well.
- chatgpt alternative
- How can I configure the same settings I get in the llama.cpp webUI with Ollama on macOS?
- LibreChat
The primary use case here seems to be that it might be possible to use this tool to spend <$20/mo for the same feature set as ChatGPT+. It does not currently make any effort to support locally-hosted open source models, which is what I would have assumed from its name.
If you're interested in a fully Libre LLM stack, I've had fun lately with ollama [0] and ollama-webui [1]. It was pretty trivial to take ollama-webui's docker-compose file and set up a locally-running chat server with Mistral 7B. Trying out different models and prompts was likewise very easy to get started with.
Mistral isn't anything like as good as GPT-4, but it's Apache licensed and fully local, which meets my definition of Libre. I'll continue to use both while the FOSS stacks catch up, but it's fun to keep up with the progress on the open source stuff as tooling develops.
[0] https://github.com/jmorganca/ollama
[1] https://github.com/ollama-webui/ollama-webui
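The two-service setup the commenter describes can be sketched as a minimal compose file. This is an illustrative assumption, not the project's official `docker-compose.yml`; image tags, port mappings, and the environment variable name are guesses based on the projects' READMEs at the time:

```yaml
# Minimal sketch of an ollama + ollama-webui stack (not the official file).
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama:/root/.ollama      # persist downloaded models (e.g. Mistral 7B)
  ollama-webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    ports:
      - "3000:8080"               # chat UI on http://localhost:3000
    environment:
      - OLLAMA_API_BASE_URL=http://ollama:11434/api
    depends_on:
      - ollama
volumes:
  ollama:
```

With this in place, `docker compose up -d` followed by pulling a model (e.g. `docker compose exec ollama ollama pull mistral`) gives a locally running chat server.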
- LM Studio – Discover, download, and run local LLMs
MindMac
- Ask HN: Is anyone considering cancelling their ChatGPT subscription?
To avoid both instability and strict limitations, you can utilize the ChatGPT API. By adding the API key into clients like MindMac[0], you will gain access to a pleasant UI with numerous additional features.
[0] https://mindmac.app
- Show HN: NotesOllama – I added local LLM support to Apple Notes (through Ollama)
I highly recommend MindMac (https://mindmac.app), which adds OS-wide support for Ollama (and "Open"AI et al.) along with optional clipboard access and text entry.
- MindMac ● Privacy-first & feature-rich ChatGPT client to use OpenAI API ● Perpetual license ● 55% OFF on all plans with code CYBERMONDAY2023 ● Only from $13.
- LM Studio – Discover, download, and run local LLMs
LM Studio is great for running local LLMs and also supports an OpenAI-compatible API. If you need a more advanced UI/UX, you can use LM Studio with MindMac (https://mindmac.app); check this video for details: https://www.youtube.com/watch?v=3KcVp5QQ1Ak.
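The OpenAI-compatible API is what lets a client like MindMac talk to a local server at all. A hedged sketch of what such a request looks like (the port and model name are assumptions; LM Studio defaults to port 1234, Ollama to 11434):

```shell
# Sketch: chat completion against a local OpenAI-compatible endpoint.
# Assumes LM Studio is serving on its default port with a model loaded;
# the model name below is illustrative.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral-7b-instruct",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

Any client that accepts a custom OpenAI base URL can be pointed at the same endpoint.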
- Is the web version of ChatGPT still a way to go, or is there any similar App to OpenAI's iOS?
Hello, I am the creator of MindMac. I would like to apologize for any inconvenience you may have experienced. If you could kindly provide information about the version in which you encountered the CPU issue, it would be greatly appreciated. Please note that earlier versions of MindMac were known to use CPU intensively. However, I want to assure you that this problem has been resolved in the latest release, version 1.6.2.
What are some alternatives?
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
LLM-Prompt-Library - Advanced Code and Text Manipulation Prompts for Various LLMs. Suitable for GPT-4, Claude, Llama3, Gemini, and other high-performing open-source LLMs.
LibreChat - Enhanced ChatGPT Clone: Features OpenAI, Assistants API, Azure, Groq, GPT-4 Vision, Mistral, Bing, Anthropic, OpenRouter, Vertex AI, Gemini, AI model switching, message search, langchain, DALL-E-3, ChatGPT Plugins, OpenAI Functions, Secure Multi-User System, Presets, completely open-source for self-hosting. More features in development
hoof - "Just hoof it!" - A spotlight like interface to Ollama
llamafile - Distribute and run LLMs with a single file.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
open-webui - User-friendly WebUI for LLMs (Formerly Ollama WebUI)
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
stable-diffusion-webui - Stable Diffusion web UI
B9ChatAI - ChatGPT client built specifically for macOS, utilizing macOS features
caasa - Container as a Service admin
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.