| | Spliit | ollama |
|---|---|---|
| Mentions | 7 | 224 |
| Stars | 578 | 71,334 |
| Growth | 22.1% | 12.2% |
| Activity | 9.2 | 9.9 |
| Latest commit | 6 days ago | 7 days ago |
| Language | TypeScript | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Spliit
- Ask HN: What have you built with LLMs?
For my expense sharing app [1], I added receipt scanning in a few minutes and a few lines of code by using GPT 4 with Vision. I am aware that LLMs often are a solution looking for a problem, but there are some situations where a bit of magic is just great :)
It is a Next.js application, calling OpenAI’s API using a plain API route.
[1] https://spliit.app
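As a rough illustration of the approach described above, here is a minimal sketch of such a Next.js API route. The route path, prompt, and model name are illustrative assumptions, not Spliit's actual code.

```typescript
// app/api/scan-receipt/route.ts — hypothetical route, sketching the idea only
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

export async function POST(request: Request) {
  // The client is assumed to send the receipt image as a URL or data URL.
  const { imageUrl } = await request.json();

  const completion = await openai.chat.completions.create({
    model: "gpt-4o", // any vision-capable OpenAI model works here
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Extract the merchant, date, and total amount from this receipt as JSON.",
          },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
  });

  // Return whatever the model extracted; a real app would validate this.
  return Response.json({ receipt: completion.choices[0].message.content });
}
```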
- TripSplit VS spliit2 - a user suggested alternative
2 projects | 15 Jan 2024
- splitio VS spliit2 - a user suggested alternative
2 projects | 15 Jan 2024
- Show HN: Spliit – Free and Open Source Alternative to Splitwise
- Show HN: Spliit v2 – Free and Open Source Alternative to Splitwise
I created the project a couple of years ago to learn Go, but I just rewrote it using a stack I am more comfortable with (Next.js). I also made it open source [1], so feel free to contribute ;)
[1] https://github.com/scastiel/spliit2
- Spliit v2 – Open Source alternative to Splitwise
I totally rewrote it (migrating the existing data of course) with a technology I am more familiar with (Next.js, React, TailwindCSS, Prisma…), and made the new version open source! Feel free to contribute by creating an issue or even a pull-request if you want 😉.
ollama
- Ollama 0.1.42
`file://*` URLs are now allowed => ollama works with simple html files now
https://github.com/ollama/ollama/commit/1a29e9a879433fc55cf1...
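Concretely, the change means a plain HTML page opened from disk can now call the local Ollama server without being rejected by CORS. A minimal sketch, assuming a default install listening on localhost:11434 and a pulled llama3 model (both assumptions):

```typescript
// Script for a page opened via file:// — with 0.1.42, Ollama's allowed
// origins include file://*, so the browser request is no longer blocked.
async function askOllama(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }), // llama3 is an assumption
  });
  const data = await response.json();
  return data.response; // with stream: false, the full text is returned at once
}

askOllama("Say hello in one short sentence.").then(console.log);
```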
- How to setup a free, self-hosted AI model for use with VS Code
This guide assumes you have a supported NVIDIA GPU and have installed Ubuntu 22.04 on the machine that will host the ollama Docker image. AMD GPUs are now supported by ollama, but this guide does not cover that type of setup.
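Once the container is up and publishing its default port, one way to smoke-test it from TypeScript is through Ollama's OpenAI-compatible endpoint, which many editor extensions also use under the hood. A sketch, assuming port 11434 is exposed and a code model such as codellama has been pulled (both are assumptions, not from the guide):

```typescript
import OpenAI from "openai";

// Point the stock OpenAI client at the self-hosted container.
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible API
  apiKey: "ollama", // the client requires a key, but Ollama ignores its value
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "codellama:7b", // illustrative; use whichever model you pulled
    messages: [
      { role: "user", content: "Write a TypeScript function that reverses a string." },
    ],
  });
  console.log(completion.choices[0].message.content);
}

main();
```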
- beginner guide to fully local RAG on entry-level machines
Nowadays, running powerful LLMs locally is ridiculously easy with tools such as ollama. Just follow the installation instructions for your OS. From now on, we'll assume you're using bash on Ubuntu.
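To make the "fully local RAG" idea concrete, here is a minimal sketch against a default Ollama install on localhost:11434. The model names (nomic-embed-text for embeddings, llama3 for generation) and the brute-force cosine search are illustrative assumptions, not from the guide.

```typescript
// Minimal local-RAG sketch: embed documents, retrieve the closest one, then
// answer the question grounded in that document — all via the local Ollama API.
const OLLAMA = "http://localhost:11434";

async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }), // assumed embedding model
  });
  return (await res.json()).embedding;
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

async function answer(question: string, docs: string[]): Promise<string> {
  // Embed all documents and the question, then keep the most similar document.
  const docVectors = await Promise.all(docs.map(embed));
  const queryVector = await embed(question);
  const best = docs[
    docVectors
      .map((v, i) => [cosine(v, queryVector), i] as const)
      .sort((a, b) => b[0] - a[0])[0][1]
  ];

  // Ask the generation model, grounding it in the retrieved document.
  const res = await fetch(`${OLLAMA}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // assumed generation model
      prompt: `Answer using only this context:\n${best}\n\nQuestion: ${question}`,
      stream: false,
    }),
  });
  return (await res.json()).response;
}
```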
- Codestral: Mistral's Code Model
- AIM Weekly 27 May 2024
- Devoxx Genie Plugin: an Update
I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models. Last week, I also added "👋🏼 Jan" support because HuggingFace has endorsed this provider out-of-the-box.
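For a sense of what that REST interface looks like, here is a tiny sketch that lists the locally downloaded models via Ollama's default endpoint; the other providers are left out here, and the localhost port is Ollama's default.

```typescript
// Sketch: ask a local Ollama instance which models are available.
async function listLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  const data = await res.json();
  // Each entry in `models` describes one locally downloaded model.
  return data.models.map((m: { name: string }) => m.name);
}

listLocalModels().then((names) => console.log("Available models:", names));
```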
- Ask HN: Are companies self hosting LLMs?
- Ollama v0.1.39 Pre-release. Support Phi-3 Medium
- Ask HN: Which LLMs can run locally on most consumer computers
I was able to successfully run Llama 3 8B, mistral 7B, phi and other 7B models using Ollama [1] on my M1 MacBook Air.
[1] https://ollama.com
What are some alternatives?
Language-games - Dead simple games made with word vectors.
llama.cpp - LLM inference in C/C++
CX_DB8 - a contextual, biasable, word-or-sentence-or-paragraph extractive summarizer powered by the latest in text embeddings (Bert, Universal Sentence Encoder, Flair)
gpt4all - gpt4all: run open-source LLMs anywhere
data-analytics - Welcome to the Data-Analytics repository
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
grand-slams-dashboard - ATP / WTA Grand Slam Tennis Dashboard. View all time major leaders, filter by tournament, analyze player major final performance
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
farfalle - 🔍 AI search engine - self-host with local or cloud LLMs
LocalAI - The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many more model architectures. It can generate text, audio, video, and images, and also has voice cloning capabilities.
llama - Inference code for Llama models
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI