YetAnotherChatUI vs ollama

| | YetAnotherChatUI | ollama |
|---|---|---|
| Mentions | 1 | 225 |
| Stars | 1 | 72,781 |
| Growth | - | 14.0% |
| Activity | 8.2 | 9.9 |
| Last Commit | about 1 month ago | 3 days ago |
| Language | HTML | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
YetAnotherChatUI
- Ollama releases OpenAI API compatibility
I had trouble installing Ollama last time I tried; I'm going to try again tomorrow.
I've already got a web UI that "should" work with anything that matches OpenAI's chat API, though I'm sure everyone here knows how reliable air-quotes like that are when a developer says them.
https://github.com/BenWheatley/YetAnotherChatUI
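As a minimal sketch of what "matches OpenAI's chat API" means in practice: Ollama serves an OpenAI-compatible endpoint at http://localhost:11434/v1, so a generic chat client only needs a base-URL swap. The model name "llama3" below is an assumption; use whatever model you have pulled.

```typescript
// Minimal sketch: point an OpenAI-style chat call at Ollama's
// OpenAI-compatible endpoint (http://localhost:11434/v1).
// "llama3" is an assumed model name.
async function chat(userMessage: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Ollama ignores the API key, but OpenAI-style clients usually send one.
      "Authorization": "Bearer ollama",
    },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: userMessage }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // The response follows the OpenAI chat-completion shape.
  return data.choices[0].message.content;
}

chat("Why is the sky blue?").then(console.log);
```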
ollama
- RAG with OLLAMA
Note: Before proceeding further you need to download and run Ollama.
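As a rough illustration of the RAG loop such guides describe, here is a minimal sketch against a local Ollama server: embed a small corpus, pick the nearest document by cosine similarity, and stuff it into the chat prompt. The model names ("nomic-embed-text", "llama3") are assumptions, not from the post.

```typescript
// Minimal RAG sketch against a local Ollama server. Assumes an
// embedding model ("nomic-embed-text") and a chat model ("llama3")
// have been pulled; substitute your own.
const OLLAMA = "http://localhost:11434";

async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  return (await res.json()).embedding;
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function answer(question: string, docs: string[]): Promise<string> {
  // Embed the corpus and the question, then pick the closest document.
  const docVecs = await Promise.all(docs.map(embed));
  const qVec = await embed(question);
  const best = docs[docVecs
    .map((v, i) => [cosine(qVec, v), i] as const)
    .sort((x, y) => y[0] - x[0])[0][1]];

  // Stuff the retrieved document into the prompt as context.
  const res = await fetch(`${OLLAMA}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      stream: false,
      messages: [
        { role: "system", content: `Answer using this context:\n${best}` },
        { role: "user", content: question },
      ],
    }),
  });
  return (await res.json()).message.content;
}
```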
- Ollama 0.1.42
`file://*` URLs are now allowed, so ollama works with simple HTML files now.
https://github.com/ollama/ollama/commit/1a29e9a879433fc55cf1...
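To make the change concrete: Ollama's default CORS allow-list now includes file:// origins, so a page opened straight from disk can call the local API from a script tag. A hedged sketch of such a call, with the model name as an assumption:

```typescript
// Sketch of what the file:// change enables: a page opened directly
// from disk (origin "file://") can call the local Ollama API without
// being blocked by CORS. "llama3" is an assumed model name.
async function askFromLocalPage(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // non-streaming /api/generate returns { response }
}

askFromLocalPage("Say hello").then((text) => {
  document.body.textContent = text;
});
```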
- How to setup a free, self-hosted AI model for use with VS Code
This guide assumes you have a supported NVIDIA GPU and have installed Ubuntu 22.04 on the machine that will host the ollama Docker image. AMD is now supported with ollama, but this guide does not cover that type of setup.
- beginner guide to fully local RAG on entry-level machines
Nowadays, running powerful LLMs locally is ridiculously easy with tools such as ollama. Just follow the installation instructions for your OS. From here on, we'll assume bash on Ubuntu.
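To show how little is involved once the server is running, here is a sketch of pulling a model and running a prompt over Ollama's REST API. The model name "llama3" is an assumption, and the two-call flow is an illustration rather than anything from the guide.

```typescript
// Sketch: with the Ollama server installed and running, pulling and
// querying a model is two HTTP calls. "llama3" is an assumed model name.
const BASE = "http://localhost:11434";

async function pullModel(name: string): Promise<void> {
  // stream:false makes the server reply once the pull completes.
  const res = await fetch(`${BASE}/api/pull`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, stream: false }),
  });
  console.log(await res.json()); // expect { status: "success" }
}

async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${BASE}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  return (await res.json()).response;
}

(async () => {
  await pullModel("llama3");
  console.log(await generate("llama3", "Summarize RAG in one sentence."));
})();
```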
- Codestral: Mistral's Code Model
- AIM Weekly 27 May 2024
- Devoxx Genie Plugin: an Update
I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models. Last week, I also added "👋🏼 Jan" support because HuggingFace has endorsed this provider out-of-the-box.
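The "REST interface to query the available models" is concrete in Ollama's case: GET /api/tags lists the locally pulled models. A small sketch:

```typescript
// Sketch of listing local models via Ollama's REST API (GET /api/tags).
interface ModelEntry {
  name: string; // e.g. "llama3:latest"
}

async function listModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  const data: { models: ModelEntry[] } = await res.json();
  return data.models.map((m) => m.name);
}

listModels().then((names) => console.log(names.join("\n")));
```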
- Ask HN: Are companies self hosting LLMs?
- Ollama v0.1.39 Pre-release. Support Phi-3 Medium
What are some alternatives?
model_navigator - Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.
llama.cpp - LLM inference in C/C++
dali_backend - The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
gpt4all - gpt4all: run open-source LLMs anywhere
tensorrtllm_backend - The Triton TensorRT-LLM Backend
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
client - Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
LocalAI - The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and also offers voice-cloning capabilities.
lookma - LookMa connects Android devices to locally-run LLMs
llama - Inference code for Llama models