|  | ollama | llama-cpp-python |
|---|---|---|
| Mentions | 209 | 55 |
| Stars | 66,540 | 6,658 |
| Growth | 23.9% | - |
| Activity | 9.9 | 9.8 |
| Latest commit | about 15 hours ago | 1 day ago |
| Language | Go | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub.
Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ollama
- Ollama v0.1.34 Is Out
- Ask HN: What do you use local LLMs for?
- Basic internet search (I start ollama CLI faster than I can start a browser - https://ollama.com)
- Formatting/changing text
- Troubleshooting code, esp. new frameworks/libs
- Recipes
- Data entry
- Organizing thoughts: High-level lists, comparison, classification, synonyms, jargon & nomenclature
- Learning esp. by analogy and example
RAG for:
- Website assistants (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Game NPCs (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Discord/Slack/forum bots (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Character-driven storytelling and creating art in a specific style for video game loading screens, background images, avatars, website art, etc. (https://github.com/bennyschmidt/ragdoll-studio/tree/master/r...)
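Several of these uses boil down to firing a quick question at a local model. A minimal sketch of doing the same thing programmatically, assuming an Ollama server is running on its default port 11434 with a `llama3` model already pulled:

```python
# Query a locally running Ollama server via its REST API.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",  # assumption: any locally pulled model name works here
    "prompt": "Give me three synonyms for 'terse'.",
    "stream": False,    # return one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```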
- FLaNK-AIM Weekly 06 May 2024
- Introducing Jan
Jan goes a step further by integrating with other local engines like LM Studio and ollama.
- Ollama v0.1.33
- Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain
```sh
# install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# get the llama3 model
ollama pull llama3
# install MLflow
pip install mlflow
```
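The article's chatbot code isn't reproduced in the snippet above, so here is a hedged sketch of how the installed pieces connect: LangChain's community integration driving the locally pulled model (the prompt is illustrative, not from the article):

```python
# Drive the local Ollama model through LangChain's community integration.
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")  # assumes `ollama pull llama3` has completed
print(llm.invoke("Translate 'good morning' into Hindi."))
```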
- Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
Ollama for running LLMs locally
- Setup Llama 3 using Ollama and Open-WebUI
```sh
curl -fsSL https://ollama.com/install.sh | sh
```
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
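For reference, a hedged sketch of the image-input request the linked api.md documents: `/api/generate` accepts base64-encoded images, assuming a multimodal model such as `llava` has been pulled locally (the file name is a placeholder):

```python
# Send an image to a multimodal model through Ollama's REST API.
import base64
import json
import urllib.request

with open("photo.png", "rb") as f:  # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = json.dumps({
    "model": "llava",  # assumption: a multimodal model is available locally
    "prompt": "What is in this picture?",
    "images": [image_b64],
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```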
- I Said Goodbye to ChatGPT and Hello to Llama 3 on Open WebUI - You Should Too
I’m a huge fan of open source models, especially the newly released Llama 3. Because of the performance of both the large 70B Llama 3 model and the smaller, self-hostable 8B Llama 3, I’ve actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that lets you use Ollama and other AI providers while keeping your chat history, prompts, and other data locally on any computer you control.
llama-cpp-python
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
There's a Python binding for llama.cpp which is actively maintained and has worked well for me: https://github.com/abetlen/llama-cpp-python
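A minimal sketch of the binding in use; the GGUF path is a placeholder for whatever model file you have downloaded:

```python
# Load a local GGUF model and run a single completion.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf")  # placeholder path
out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n\n"],  # stop before the model invents a follow-up question
)
print(out["choices"][0]["text"])
```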
- FLaNK AI for 11 March 2024
- OpenAI: Memory and New Controls for ChatGPT
I'll share the core bit that took a while to figure out the right format for; my main script is a hot mess using embeddings with SentenceTransformer, so I won't share that yet. For example, last night I did a PR for llama-cpp-python that shows how Phi might be used with JSON, only for the author to write almost exactly the same code at pretty much the same time. https://github.com/abetlen/llama-cpp-python/pull/1184
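The PR itself isn't reproduced here, but as a hedged sketch of the general mechanism it relies on, llama-cpp-python's `create_chat_completion` can constrain decoding to valid JSON via `response_format` (the model path is a placeholder):

```python
# Ask a local model for JSON-only output using llama-cpp-python's JSON mode.
from llama_cpp import Llama

llm = Llama(model_path="./models/phi-2.Q4_K_M.gguf")  # placeholder path
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant that outputs JSON."},
        {"role": "user", "content": 'List two llama facts as {"facts": [...]}.'},
    ],
    response_format={"type": "json_object"},  # constrain decoding to valid JSON
)
print(result["choices"][0]["message"]["content"])
```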
- TinyLlama LLM: A Step-by-Step Guide to Implementing the 1.1B Model on Google Colab
Python Bindings for llama.cpp
- Mistral-8x7B-Chat
- Running Mistral LLM on Apple Silicon Using Apple's MLX Framework Is Much Faster
If the model could be made to work with llama.cpp, then https://github.com/abetlen/llama-cpp-python might be more compact. llama.cpp only supports a limited list of model types though.
- Run ChatGPT-like LLMs on your laptop in 3 lines of code
- Code Llama, a state-of-the-art large language model for coding
https://github.com/abetlen/llama-cpp-python has a web server mode that replicates OpenAI's API, IIRC, and the README shows it already has Docker builds.
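A hedged sketch of that server mode (model path and served model name are placeholders): start the server with `python -m llama_cpp.server --model ./models/model.gguf`, then point the stock OpenAI client at it:

```python
# Talk to llama-cpp-python's OpenAI-compatible server with the official client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # the server's default port
    api_key="not-needed-locally",         # the local server ignores the key
)
resp = client.chat.completions.create(
    model="local-model",  # the server serves whatever model it was started with
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```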
- Meta: Code Llama, an AI Tool for Coding
LocalAI https://localai.io/ and LMStudio https://lmstudio.ai/ both have fairly complete OpenAI compatibility layers. llama-cpp-python has a FastAPI server as well: https://github.com/abetlen/llama-cpp-python/blob/main/llama_... (as of this moment it hasn't merged GGUF update yet though)
- First steps with llama
I went with Python, llama-cpp-python, since my goal is just to get a small project up and running locally.
What are some alternatives?
llama.cpp - LLM inference in C/C++
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures; can generate text, audio, video, and images; and supports voice cloning.
gpt4all - gpt4all: run open-source LLMs anywhere
intel-extension-for-pytorch - A Python package that extends official PyTorch to easily obtain extra performance on Intel platforms
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
text-generation-inference - Large Language Model Text Generation Inference
llama - Inference code for Llama models
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.