marvin vs LocalAI
| | marvin | LocalAI |
|---|---|---|
| Mentions | 16 | 82 |
| Stars | 4,739 | 19,593 |
| Star growth | 6.4% | 12.9% |
| Activity | 9.9 | 9.9 |
| Latest commit | 3 days ago | 3 days ago |
| Language | Python | C++ |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
marvin
-
Show HN: Marvin 2.0 – a lightweight, multi-modal AI toolkit
Hey HN! We just released Marvin 2.0.
Marvin is an AI toolkit for developers who want to use LLMs with traditional software. We still see significant challenges integrating LLMs because of how difficult it is to get them to reliably accept and return structured data. Marvin consists of independent, functional tools that address this problem in a variety of ways.
Marvin has always been focused on using LLMs to work with native Python datatypes and Pydantic models. In 2.0 we've expanded this significantly with dedicated APIs for the most common use cases we've seen over the last year: classification, entity extraction, transforming data to types, and generating synthetic data. Marvin 2.0 is also fully multi-modal and supports images as inputs for classification, extraction, and transformation tasks (as well as simple image and speech generation). We've also introduced a Pythonic interface to OpenAI's assistants API, which now powers all of Marvin's interactive components.
We've tried to make an LLM framework that "sparks joy" and captures that same feeling you had the first time you saw an LLM in action. Try it out and let us know what you think!
(Repo: https://github.com/PrefectHQ/marvin)
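The structured-output idea described above — wrapping an LLM so it returns native Python types instead of raw text — can be imitated with a small, self-contained sketch. Everything here (`fake_llm`, `structured`, `Sentiment`) is invented for illustration and is not Marvin's actual API; a real implementation would call a model instead of returning canned JSON:

```python
import json
from dataclasses import dataclass
from typing import Callable, Type, TypeVar

T = TypeVar("T")

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; always returns JSON text.
    return '{"label": "positive", "confidence": 0.93}'

@dataclass
class Sentiment:
    label: str
    confidence: float

def structured(return_type: Type[T]) -> Callable[[Callable[..., str]], Callable[..., T]]:
    """Decorator: send the wrapped function's prompt to the LLM and
    parse the JSON reply into `return_type`."""
    def wrap(prompt_fn: Callable[..., str]) -> Callable[..., T]:
        def inner(*args, **kwargs) -> T:
            raw = fake_llm(prompt_fn(*args, **kwargs))
            return return_type(**json.loads(raw))
        return inner
    return wrap

@structured(Sentiment)
def classify(text: str) -> str:
    return f"Classify the sentiment of: {text!r}. Reply as JSON."

result = classify("I love this toolkit")
print(result.label)  # → positive
```

The decorator is the whole trick: callers get a typed `Sentiment` back and never touch the model's raw string output.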
-
Show HN: Magentic – Use LLMs as simple Python functions
Seems a lot like https://github.com/PrefectHQ/marvin?
The prompting you do seems awfully similar to:
https://www.askmarvin.ai/prompting/prompt_function/
-
Amazon CodeWhisperer, Free for Individual Use, Is Now Generally Available
You can try the decorator ai_fn in marvin https://github.com/PrefectHQ/marvin
-
4-Apr-2023
Marvin: a batteries-included library for building AI-powered software. Marvin's job is to integrate AI directly into your codebase by making it look and feel like any other function (https://github.com/PrefectHQ/marvin)
-
Magic - AI functions for Typescript
Sure! I was inspired by this Python library: https://github.com/PrefectHQ/marvin
-
Show HN: A ChatGPT TUI with custom bots
I see Langchain has support for Azure chat models, and Marvin is built on Langchain so it may not be so difficult! Tracking issue here: https://github.com/PrefectHQ/marvin/issues/189
- FLaNK Stack Weekly 3 April 2023
- Meet Marvin: A batteries-included library for building AI-powered software, aka “woah-code”
-
Show HN: Marvin – build AI functions that use an LLM as a runtime
We have a related issue open (https://github.com/PrefectHQ/marvin/issues/64) but haven't designed anything yet.
LocalAI
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
-
What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
-
OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
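Because LocalAI exposes an OpenAI-compatible HTTP API, pointing an existing client at it is usually just a base-URL change. A minimal stdlib-only sketch of the request shape (the port and model name are assumptions — check your LocalAI configuration; nothing is sent over the network here):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # assumed LocalAI default port

def chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request aimed at LocalAI."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("ggml-gpt4all-j", "Hello!")
print(req.full_url)  # → http://localhost:8080/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` against a running LocalAI instance returns the same JSON schema as OpenAI's endpoint, which is what makes it a drop-in swap.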
-
"Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already trending on GitHub.
-
Local LLM's to run on old iMac / Hardware
Your hardware should be fine for inferencing, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inferencing process to 2 or 3 threads. That should get it working; I've been able to run inference on even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
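The thread-limiting advice above can be expressed at launch time. A sketch of a Docker invocation capping LocalAI at 2 CPU threads via its `THREADS` environment variable (image name and port follow the LocalAI README, but treat the exact flags as assumptions to verify against the current docs; the command is only assembled and printed here, not executed):

```shell
# Sketch: run LocalAI with inference capped at 2 CPU threads,
# serving models from ./models on port 8080.
CMD="docker run -p 8080:8080 -e THREADS=2 -v $PWD/models:/models quay.io/go-skynet/local-ai:latest"
echo "$CMD"
```

On a weaker or shared machine, leaving a core or two free keeps the rest of the system responsive while a model is generating.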
-
Retrieval Augmented Generation in Go
Neither of these really requires OpenAI. You can do it with locally-running models via something like https://github.com/mudler/LocalAI
What are some alternatives?
bpytop - Linux/OSX/FreeBSD resource monitor
gpt4all - gpt4all: run open-source LLMs anywhere
aide - LLM shell and document interrogator
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
magentic - Seamlessly integrate LLMs as Python functions
llama-cpp-python - Python bindings for llama.cpp
lazydocker - The lazier way to manage everything docker
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
the-algorithm
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
use_gpt_as_programming_lang - use gpt as programming language
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.