- RWKV-LM: RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: strong performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
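The "trainable like a GPT, runs like an RNN" claim comes down to RWKV's WKV time-mixing term having two algebraically equivalent forms: an attention-like sum over all past tokens (parallelizable over the sequence during training) and a recurrence with constant-size state (cheap at inference). A minimal pure-Python sketch of that equivalence for a single scalar channel — the function names and the scalar simplification are mine for illustration, not the repo's actual code:

```python
import math

def wkv_parallel(w, u, ks, vs):
    """Attention-like form: output at step t is a weighted average of all
    values v_i (i <= t), with weights decaying by w per step of distance.
    Every t can be computed independently, so training parallelizes."""
    out = []
    for t in range(len(ks)):
        num = den = 0.0
        for i in range(t):  # past tokens, exponentially decayed by distance
            weight = math.exp(-(t - 1 - i) * w + ks[i])
            num += weight * vs[i]
            den += weight
        cur = math.exp(u + ks[t])  # current token gets a learned bonus u
        out.append((num + cur * vs[t]) / (den + cur))
    return out

def wkv_recurrent(w, u, ks, vs):
    """RNN form: identical outputs, but only an O(1) state (a, b) is kept
    per step, which is what makes RWKV inference fast and VRAM-light."""
    a = b = 0.0
    out = []
    decay = math.exp(-w)
    for k, v in zip(ks, vs):
        e = math.exp(u + k)
        out.append((a + e * v) / (b + e))
        a = decay * a + math.exp(k) * v  # fold this token into the state
        b = decay * b + math.exp(k)
    return out
```

Both functions produce the same outputs for the same inputs; the real model applies this per channel with learned per-channel `w` and `u`, but the quadratic-vs-linear trade-off is the same.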
What's the state of the art in GPT-3 alternatives right now, in practical terms? If your typical use case is taking a pretrained model and fine-tuning it for a specific task, which LLM would yield the best results while running on consumer hardware? Note that I'm specifically asking about software I can run on my own hardware; I'm not interested in paying OpenAI $0.02 per API request.
I'll start the recommendations with Karpathy's nanoGPT: https://github.com/karpathy/nanoGPT
What else do we have?
Related posts
- How the RWKV language model works
- [P] Raven 7B & 14B 🐦 (RWKV finetuned on Alpaca+CodeAlpaca+Guanaco) and Gradio Demo for Raven 7B
- [D] Totally Open Alternatives to ChatGPT
- [R] RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning, 23 token/s on 3090 after latest optimization (16G VRAM is enough, and you can stream layers to save more VRAM)
- [P] RWKV 14B is a strong chatbot despite only trained on Pile (16G VRAM for 14B ctx4096 INT8, more optimizations incoming)