llama-node
Believes in AI democratization: LLaMA bindings for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp. Runs locally on your laptop CPU and supports LLaMA/Alpaca/GPT4All/Vicuna/RWKV models.
-
LLamaSharp
A cross-platform library to run 🦙LLaMA/LLaVA model (and others) on your local device efficiently.
-
FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
-
LocalAI
:robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware with no GPU required. Runs GGUF, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.
- Go: go-skynet/go-llama.cpp
- Node.js: hlhr202/llama-node
- Ruby: yoshoku/llama_cpp.rb
- C#/.NET: SciSharp/LLamaSharp
I used the FastChat API to load two quantized Vicuna-13B models locally so I could repeatedly query them for modern translations of paragraphs from the complete works of Jonathan Swift. Then I fine-tuned Llama-7B with LoRA via PEFT to convert modern English back into Swift's style. Works great: https://huggingface.co/pcalhoun/LLaMA-7b-JonathanSwift
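FastChat exposes an OpenAI-compatible REST server, so the "repeatedly query" step can be scripted against a locally served Vicuna model. A minimal sketch, assuming the server is running on its default port and the model is registered as `vicuna-13b-v1.5` (both are assumptions, not taken from the comment):

```python
import json
import urllib.request

# Assumptions: a FastChat OpenAI-compatible server is running locally
# (started with `python -m fastchat.serve.openai_api_server`) and is
# serving a model registered under the name "vicuna-13b-v1.5".
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "vicuna-13b-v1.5"


def build_request(paragraph: str, model: str = MODEL) -> dict:
    """Build a chat-completion payload asking for a modern translation."""
    return {
        "model": model,
        "temperature": 0.2,  # low temperature keeps the translation faithful
        "messages": [
            {
                "role": "system",
                "content": "Rewrite 18th-century English prose in plain modern English.",
            },
            {"role": "user", "content": paragraph},
        ],
    }


def translate(paragraph: str) -> str:
    """POST the payload to the local server and return the model's reply."""
    body = json.dumps(build_request(paragraph)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


# Example (requires the server to be running):
# print(translate("My father had a small estate in Nottinghamshire..."))
```

Because the endpoint mirrors the OpenAI chat-completions schema, the same loop works unchanged against any of the other OpenAI-compatible servers mentioned here, such as LocalAI.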
Related posts
- FreedomGPT: AI with no censorship
- Dify, an end-to-end, visualized workflow to build/test LLM applications
- A suite of tools designed to streamline the development cycle of LLM-based apps
- More Agents Is All You Need: LLMs performance scales with the number of agents
- Show HN: We got fine-tuning Mistral-7B to not suck