| | floneum | text-embeddings-inference |
|---|---|---|
| Mentions | 10 | 3 |
| Stars | 979 | 2,055 |
| Growth | 10.5% | 9.8% |
| Activity | 9.8 | 8.9 |
| Latest commit | 6 days ago | 9 days ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
floneum
- Dioxus 0.5: Web, Desktop, Mobile Apps in Rust
It is pretty good. I am working on an application that uses SVGs to draw a workflow editor UI with Dioxus: https://github.com/floneum/floneum
- Show HN: Kalosm, an embeddable framework for pre-trained models in Rust
## What can you build with Kalosm?
Kalosm is designed to be a flexible and powerful tool for building AI into your applications. It is a great fit for any application that uses AI models to process sensitive information where local processing is important.
Here are a few examples of applications that are built with Kalosm:
- Floneum (https://floneum.com/): A local open source workflow editor and automation tool that uses Kalosm to provide natural language processing and other AI features.
- Launch HN: AgentHub (YC W24) – A no-code automation platform
This reminds me of Floneum (https://github.com/floneum/floneum), an open-source tool for graph-based workflows using local LLMs.
- Announcing Kalosm – a local-first AI meta-framework for Rust
Kalosm is a meta-framework for AI written in Rust using candle. It supports local quantized large language models such as Llama, Mistral, Phi-1.5, and Zephyr, as well as other quantized models like Wuerstchen, Segment Anything, and Whisper. In addition to local models, Kalosm supports remote models like GPT-4 and ada embeddings.
- Show HN: Kalosm – a local-first AI meta-framework in Rust
- Floneum 0.2 released: Headless browsing, package manager, cloud saves, and more
- Floneum, a graph editor for local AI workflows
- Show HN: Floneum, a graph editor for local AI workflows
1. I would love to support additional model runners, including exLlama and API-based models like ChatGPT. I'm less familiar with how ctransformers and GPTQ compare to llama.cpp. GPTQ used to run faster because it supported GPU acceleration, but llama.cpp now supports the GPU as well, so that may have changed. Feel free to open a GitHub issue to discuss this: https://github.com/floneum/floneum/issues/new/choose
2. There are a few differences:
text-embeddings-inference
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
Worth noting that this also impacts the great https://github.com/huggingface/text-embeddings-inference, which allows anyone to run state-of-the-art embeddings with great performance.
- FLaNK Stack Weekly for 30 Oct 2023
- Fast inference for text models using Rust
What are some alternatives?
indexify - A scalable, real-time, continuous indexing and structured-extraction engine for unstructured data, for building generative AI applications
llama-node - Believe in AI democratization. Llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; works locally on your laptop CPU and supports llama/alpaca/gpt4all/vicuna/rwkv models.
chatty-llama - A fullstack Rust + React chat app using open-source Llama language models
smartgpt - A program that provides LLMs with the ability to complete complex tasks using plugins.
awesome-ml - Curated list of useful LLM / Analytics / Datascience resources
auto-rust - An experimental project that aims to automatically generate Rust code with LLMs (large language models) during compilation, using procedural macros.
opentau - Using Large Language Models for Gradual Type Inference
openv0 - AI generated UI components
CSGHub - An open-source platform for managing large-model assets, similar to an on-premise Hugging Face, that helps manage datasets, model files, code, and more. CSGHub helps users govern the assets involved across the LLM and LLM-application lifecycle, managing them the way OpenStack Glance manages VM images, Harbor manages container images, and Sonatype Nexus manages artifacts.
anansi - open source tooling for AI search and understanding
frugal - ⚡️ Transform AI/ML operations: Transparency, Control and Cost Optimization. ⚡️