Ollama Alternatives
Similar projects and alternatives to ollama
- petals: 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading.
- FLiPStackWeekly: FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more.
- SaaSHub: Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives.
- khoj: Your AI second brain. Get answers to your questions from online sources or your own notes. Use online AI models (e.g., GPT-4) or private, local LLMs (e.g., Llama 3). Self-host locally or use the cloud instance. Access from Obsidian, Emacs, the desktop app, the web, or WhatsApp.
- litellm: Python SDK and proxy server to call 100+ LLM APIs using the OpenAI format (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, SageMaker, HuggingFace, Replicate, Groq).
- jan: Jan is an open-source alternative to ChatGPT that runs 100% offline on your computer. Multiple engine support (llama.cpp, TensorRT-LLM).
- ollama-webui: Discontinued ChatGPT-style web UI for LLMs (formerly Ollama WebUI). Moved to https://github.com/open-webui/open-webui
- cortex: Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM, ONNX). Powers 👋 Jan (by janhq).
Ollama reviews and mentions
- Ollama – Get up and running with large language models
- Yi-Coder: A Small but Mighty LLM for Code
You can run this LLM with Ollama [0] and then use Continue [1] in VS Code.
The setup is pretty simple:
* Install Ollama (instructions for your OS are on their website; for macOS, `brew install ollama`)
* Download the model: `ollama pull yi-coder`
* Install and configure Continue in VS Code (https://docs.continue.dev/walkthroughs/llama3.1 is written for Llama 3.1, but it works if you swap in the relevant bits)
[0] https://ollama.com/
[1] https://www.continue.dev/
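The steps above can be condensed into a short shell session. This is a sketch, assuming macOS with Homebrew and that the `yi-coder` model name matches the listing in the Ollama library:

```shell
# Install the Ollama CLI (macOS; other OSes use the installer from ollama.com)
brew install ollama

# Start the local server in the background (listens on port 11434 by default)
ollama serve &

# Download the Yi-Coder model weights
ollama pull yi-coder

# Quick smoke test from the terminal before wiring up Continue
ollama run yi-coder "Write a hello-world program in Go."
```

Once the model responds in the terminal, pointing Continue at it is just configuration on the editor side.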
- Ollama: Simplify Access to Llama 3.1, Mistral, and Gemma 2 LLMs
- With 10x growth since 2023, Llama is the leading engine of AI innovation
It is! Just downloaded it the other day and, while far from perfect, it's pretty neat. I run LLaVA and Llama (among other models) using https://ollama.com
- Working with LLMs in Ruby on Rails: A Simple Guide
Llama server API documentation.
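For reference, a locally running Ollama server exposes an HTTP API on port 11434. A minimal sketch with curl (the model name `llama3` is only an example, and the same request works from Ruby via Net::HTTP):

```shell
# POST a non-streaming completion request to the local Ollama server.
# Assumes `ollama serve` is running and the model has already been pulled.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Say hello in one word.",
  "stream": false
}'
```

With `"stream": false` the server returns a single JSON object whose `response` field holds the generated text, which keeps the Rails-side parsing simple.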
- The 6 Best LLM Tools To Run Models Locally
`ollama pull modelname`, where modelname is the name of the model you want to install. Check out Ollama on GitHub for some example models to download. The pull command is also used to update a model: once a model is installed, only the difference is fetched.
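The pull-to-install and pull-to-update flow looks like this as a sketch (the model name `llama3` is only an example):

```shell
# The first pull downloads every layer of the model
ollama pull llama3

# Re-running pull later fetches only the layers that changed (a delta update)
ollama pull llama3

# List the models installed locally
ollama list
```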
- PHP and LLMs book - Local LLMs: Streamlining Your Development Workflow
This is the easy part! They have a nice download page: http://ollama.com/. Once installed, you'll have an icon in your toolbar (or taskbar on Windows), and you can simply "Restart" when needed, since they ship a ton of updates.
- How to Install and Run an AI Locally on Your Computer
- Markov chains are funnier than LLMs
There sort of is: if you install Ollama (https://ollama.com) and then execute `ollama run llama2-uncensored`, it will install and run the local chat interface for an uncensored version of Llama 2, which gives slightly better results with fewer guardrails. The same goes for wizardlm-uncensored and wizard-vicuna-uncensored.
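The commands from that comment, spelled out as a sketch (model names and availability come from the Ollama library):

```shell
# Downloads the model on first use, then drops into an interactive chat
ollama run llama2-uncensored

# The other uncensored variants mentioned work the same way
ollama run wizardlm-uncensored
ollama run wizard-vicuna-uncensored
```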
- How can I use the new Llama models through an API?
Stats
ollama/ollama is an open-source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of ollama is Go.