chatty-llama
floneum
| | chatty-llama | floneum |
|---|---|---|
| Mentions | 1 | 10 |
| Stars | 27 | 979 |
| Growth | - | 10.5% |
| Activity | 7.2 | 9.8 |
| Last commit | 8 months ago | 6 days ago |
| Language | Rust | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chatty-llama
-
Chatty LLama: A full-stack Rust + React chat app using Meta's Llama-2 LLMs https://github.com/Sollimann/chatty-llama
Link to repo: https://github.com/Sollimann/chatty-llama FYI, I'm using Rust for model hosting and inference, React for the chat app, and Caddy as the web server. Inference currently runs purely on CPU, but with the option of running on GPU.
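A setup like the one described above (React frontend, Rust inference backend, Caddy in front) is often wired together with a short Caddyfile. This is a hypothetical sketch, not taken from the repo; the paths and port are placeholders:

```
# Hypothetical Caddyfile: serve the built React app and proxy API calls
# to the Rust inference server. Paths and port are placeholders.
localhost {
    root * /srv/chatty-llama/frontend/dist
    file_server
    reverse_proxy /api/* localhost:8080
}
```

`file_server` serves the static frontend build, while `reverse_proxy` forwards `/api/*` requests to the backend process.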
floneum
-
Dioxus 0.5: Web, Desktop, Mobile Apps in Rust
It is pretty good. I am working on an application that uses SVGs as a way to draw a workflow editor UI with Dioxus: https://github.com/floneum/floneum
-
Show HN: Kalosm an embeddable framework for pre-trained models in Rust
## What can you build with Kalosm?
Kalosm is designed to be a flexible and powerful tool for building AI into your applications. It is a great fit for any application that uses AI models to process sensitive information where local processing is important.
Here are a few examples of applications that are built with Kalosm:
- Floneum (https://floneum.com/): A local open source workflow editor and automation tool that uses Kalosm to provide natural language processing and other AI features.
-
Launch HN: AgentHub (YC W24) – A no-code automation platform
This reminds me of Floneum (https://github.com/floneum/floneum), this open-sourced tool for graph-based workflows using local LLMs.
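"Graph-based workflow" here means a set of nodes wired together by their outputs. The following is a minimal self-contained sketch of that idea in plain Rust; it does not use Floneum's actual API, and all names are illustrative:

```rust
use std::collections::HashMap;

// One node in a workflow graph: it reads the outputs of its input nodes
// and produces a new output. In a real editor like Floneum, a node might
// wrap an LLM call; here `run` is just a closure.
struct Node {
    name: &'static str,
    inputs: Vec<&'static str>,
    run: Box<dyn Fn(&[String]) -> String>,
}

// Execute nodes assumed to be listed in dependency order (a real editor
// would topologically sort the graph first), collecting each output by name.
fn execute(nodes: Vec<Node>) -> HashMap<&'static str, String> {
    let mut outputs: HashMap<&'static str, String> = HashMap::new();
    for node in nodes {
        let args: Vec<String> = node
            .inputs
            .iter()
            .map(|name| outputs[name].clone())
            .collect();
        let result = (node.run)(&args);
        outputs.insert(node.name, result);
    }
    outputs
}

fn main() {
    let nodes = vec![
        Node {
            name: "source",
            inputs: vec![],
            run: Box::new(|_| "hello world".to_string()),
        },
        Node {
            name: "uppercase",
            inputs: vec!["source"],
            run: Box::new(|args| args[0].to_uppercase()),
        },
    ];
    let outputs = execute(nodes);
    println!("{}", outputs["uppercase"]); // prints "HELLO WORLD"
}
```

The appeal of the graph form is that each node is independently replaceable — swapping a local LLM node for a different model changes nothing else in the workflow.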
-
Announcing Kalosm - a local-first AI meta-framework for Rust
Kalosm is a meta-framework for AI written in Rust using candle. Kalosm supports local quantized large language models like Llama, Mistral, Phi-1.5, and Zephyr. It also supports other quantized models like Wuerstchen, Segment Anything, and Whisper. In addition to local models, Kalosm supports remote models like GPT-4 and ada embeddings.
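The point of a meta-framework that spans local quantized models and remote ones like GPT-4 is that application code targets a single abstraction. Here is a tiny self-contained sketch of that shape in plain Rust — none of these type or method names come from Kalosm's actual API:

```rust
// Illustrative only: a single trait that both local and remote model
// backends implement, so calling code does not care where inference runs.
trait TextModel {
    fn generate(&self, prompt: &str) -> String;
}

struct LocalQuantizedModel; // stands in for e.g. a quantized Llama runner
struct RemoteApiModel;      // stands in for e.g. a GPT-4 HTTP client

impl TextModel for LocalQuantizedModel {
    fn generate(&self, prompt: &str) -> String {
        format!("[local] {prompt}")
    }
}

impl TextModel for RemoteApiModel {
    fn generate(&self, prompt: &str) -> String {
        format!("[remote] {prompt}")
    }
}

// Application logic written once against the trait.
fn summarize(model: &dyn TextModel, text: &str) -> String {
    model.generate(&format!("Summarize: {text}"))
}

fn main() {
    // Identical call sites regardless of backend.
    println!("{}", summarize(&LocalQuantizedModel, "some document"));
    println!("{}", summarize(&RemoteApiModel, "some document"));
}
```

Swapping backends is then a one-line change at the call site, which is what makes "local-first with remote fallback" practical.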
- Show HN: Kalosm – a local-first AI meta-framework in Rust
- Floneum 0.2 released: Headless browsing, package manager, cloud saves, and more
- Floneum, a graph editor for local AI workflows
-
Show HN: Floneum, a graph editor for local AI workflows
1. I would love to support additional model runners, including ExLlama and API-based models like ChatGPT. I'm less familiar with how ctransformers and GPTQ compare to llama.cpp. GPTQ used to run faster because it supported GPU acceleration, but llama.cpp now supports the GPU as well, so that may have changed. Feel free to open a GitHub issue to discuss this: https://github.com/floneum/floneum/issues/new/choose
2. There are a few differences:
What are some alternatives?
tenere - 🔥 TUI interface for LLMs written in Rust
indexify - A scalable, real-time, continuous indexing and structured-extraction engine for unstructured data, for building generative AI applications
fullstack-rust - Reference implementation of a full-stack Rust application
text-embeddings-inference - A blazing fast inference solution for text embeddings models
smartgpt - A program that provides LLMs with the ability to complete complex tasks using plugins.
awesome-ml - Curated list of useful LLM / analytics / data-science resources
llm-chain - `llm-chain` is a powerful Rust crate for building chains in large language models, allowing you to summarise text and complete complex tasks
opentau - Using Large Language Models for Gradual Type Inference